WorldWideScience

Sample records for hologlobe cumulative earthquake

  1. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  2. Cumulative Poisson Distribution Program

    Science.gov (United States)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
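
    The quantity CUMPOIS computes can be illustrated with a short, hedged sketch (our own C code, not the NPO-17714 source; the function name is ours). Accumulating the Poisson terms through a running logarithm is one standard way to prevent the overflow and underflow mentioned above:

        #include <math.h>

        /* P(X <= k) for a Poisson variable with mean lambda, accumulated
           term by term in log space to avoid overflow/underflow.  This
           equals the regularized upper incomplete gamma Q(k+1, lambda),
           which is why the same quantity also yields the gamma and
           chi-square cdfs described above. */
        double cum_poisson(int k, double lambda)
        {
            double log_term = -lambda;            /* log P(X = 0) */
            double sum = 0.0;
            for (int i = 0; i <= k; i++) {
                sum += exp(log_term);
                /* ratio of successive terms: P(X=i+1)/P(X=i) = lambda/(i+1) */
                log_term += log(lambda) - log(i + 1.0);
            }
            return sum;
        }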

  3. Divergent Cumulative Cultural Evolution

    OpenAIRE

    Marriott, Chris; Chebib, Jobran

    2016-01-01

    Divergent cumulative cultural evolution occurs when the cultural evolutionary trajectory diverges from the biological evolutionary trajectory. We consider the conditions under which divergent cumulative cultural evolution can occur. We hypothesize that two conditions are necessary. First, genetic and cultural information must be stored separately in the agent. Second, cultural information must be transferred horizontally between agents of different generations. We implement a model with these ...

  4. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  5. Standardization of the cumulative absolute velocity

    International Nuclear Information System (INIS)

    O'Hara, T.F.; Jacobson, J.P.

    1991-12-01

    EPRI NP-5930, ''A Criterion for Determining Exceedance of the Operating Basis Earthquake,'' was published in July 1988. As defined in that report, the Operating Basis Earthquake (OBE) is exceeded when both a response spectrum parameter and a second damage parameter, referred to as the Cumulative Absolute Velocity (CAV), are exceeded. In the review process of the above report, it was noted that the calculation of CAV could be confounded by time history records of long duration containing low (nondamaging) acceleration. Therefore, it is necessary to standardize the method of calculating CAV to account for record length. This standardized methodology allows consistent comparisons between future CAV calculations and the adjusted CAV threshold value based upon applying the standardized methodology to the data set presented in EPRI NP-5930. The recommended method to standardize the CAV calculation is to window the calculation on a second-by-second basis for a given time history: a one-second interval contributes to the CAV only if the absolute acceleration exceeds 0.025g at some time during that interval. The earthquake records used in EPRI NP-5930 have been reanalyzed on this basis, and the adjusted threshold of damage for CAV was found to be 0.16 g-sec
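
    The windowing rule just described can be sketched in C (a minimal illustration of the standardized calculation, not EPRI's code; array and function names are ours). Acceleration is integrated second by second, and a one-second window contributes only if it contains a sample whose absolute value exceeds 0.025g:

        #include <math.h>

        /* Standardized CAV in g-sec.  acc[] holds acceleration in units
           of g, sampled at `rate` samples per second.  Each one-second
           window's integral of |a(t)| is added only if |a| exceeds
           0.025 g somewhere inside that window. */
        double standardized_cav(const double *acc, int n, int rate)
        {
            double cav = 0.0, dt = 1.0 / rate;
            for (int w = 0; w < n; w += rate) {
                int end = (w + rate < n) ? w + rate : n;
                int damaging = 0;
                double area = 0.0;
                for (int i = w; i < end; i++) {
                    if (fabs(acc[i]) > 0.025)
                        damaging = 1;
                    area += fabs(acc[i]) * dt;    /* rectangle-rule integral */
                }
                if (damaging)
                    cav += area;
            }
            return cav;
        }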

  6. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
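
    The k-out-of-n reliability CUMBIN reports can be sketched as follows (our illustration, not the NPO-17555 source; assumes 0 < p < 1). The binomial terms are generated by a log-space recurrence so that no factorials are formed explicitly:

        #include <math.h>

        /* Reliability of a k-out-of-n system of independent components
           with common reliability p, i.e. P(X >= k), X ~ Binomial(n, p). */
        double kofn_reliability(int k, int n, double p)
        {
            double log_term = n * log1p(-p);      /* log P(X = 0) */
            double sum = 0.0;
            for (int i = 0; i < k; i++) {         /* accumulate P(X < k) */
                sum += exp(log_term);
                /* P(X=i+1)/P(X=i) = (n-i)/(i+1) * p/(1-p) */
                log_term += log((double)(n - i) / (i + 1))
                            + log(p) - log1p(-p);
            }
            return 1.0 - sum;
        }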

  7. Cumulation of light nuclei

    International Nuclear Information System (INIS)

    Baldin, A.M.; Bondarev, V.K.; Golovanov, L.B.

    1977-01-01

    Limit fragmentation of light nuclei (deuterium, helium) bombarded with 8.6 GeV/c protons was investigated. Fragments (pions, protons and deuterons) were detected within the emission angle range 50-150 deg with respect to the primary protons and within the momentum range 150-180 MeV/c. By the kinematics of a collision of a primary proton with a target at rest, the fragments observed correspond to a target mass of up to 3 GeV. Thus, the data obtained correspond to cumulation up to the third order

  8. Analog earthquakes

    International Nuclear Information System (INIS)

    Hofmann, R.B.

    1995-01-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository

  9. CROSSER - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
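
    The crossing point CROSSER finds is the interior solution of R(p) = p, where R is the k-out-of-n reliability; such a solution exists for 1 < k < n. Below is a hedged sketch using Newton's method with the closed-form derivative R'(p) = k*C(n,k)*p^(k-1)*(1-p)^(n-k) (our code, not the NPO-17557 source):

        #include <math.h>

        /* k-out-of-n reliability P(X >= k), as in the CUMBIN sketch. */
        static double kofn(int k, int n, double p)
        {
            double log_term = n * log1p(-p), sum = 0.0;
            for (int i = 0; i < k; i++) {
                sum += exp(log_term);
                log_term += log((double)(n - i) / (i + 1))
                            + log(p) - log1p(-p);
            }
            return 1.0 - sum;
        }

        /* Newton iteration for the fixed point R(p) = p in (0, 1). */
        double crossing_point(int k, int n, double tol)
        {
            double log_cnk = lgamma(n + 1.0) - lgamma(k + 1.0)
                             - lgamma(n - k + 1.0);
            double p = 0.5;                       /* interior starting guess */
            for (int it = 0; it < 100; it++) {
                double f  = kofn(k, n, p) - p;
                double df = k * exp(log_cnk + (k - 1) * log(p)
                                    + (n - k) * log1p(-p)) - 1.0;
                double step = f / df;
                p -= step;
                if (p < tol)       p = tol;       /* keep iterate in (0,1) */
                if (p > 1.0 - tol) p = 1.0 - tol;
                if (fabs(step) < tol)
                    break;
            }
            return p;
        }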

  10. Cumulative environmental effects. Summary

    International Nuclear Information System (INIS)

    2012-01-01

    This report presents a compilation of knowledge about the state of the environment and human activity in the Norwegian part of the North Sea and Skagerrak. The report gives an overview of pressures and impacts on the environment from normal activity and in the event of accidents. This is used to assess the cumulative environmental effects, which factors have most impact and where the impacts are greatest, and to indicate which problems are expected to be most serious in the future. The report is intended to provide relevant information that can be used in the management of the marine area in the future. It also provides input for the identification of environmental targets and management measures for the North Sea and Skagerrak.(Author)

  11. Cumulative environmental effects. Summary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    This report presents a compilation of knowledge about the state of the environment and human activity in the Norwegian part of the North Sea and Skagerrak. The report gives an overview of pressures and impacts on the environment from normal activity and in the event of accidents. This is used to assess the cumulative environmental effects, which factors have most impact and where the impacts are greatest, and to indicate which problems are expected to be most serious in the future. The report is intended to provide relevant information that can be used in the management of the marine area in the future. It also provides input for the identification of environmental targets and management measures for the North Sea and Skagerrak.(Author)

  12. NEWTONP - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
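
    One of the inversions described above, the one-sided upper Clopper-Pearson limit, solves P(X <= x; n, p) = alpha for p. The sketch below uses bisection instead of the program's Newton iteration, trading speed for brevity (our illustration, not the NPO-17556 source; assumes x < n and 0 < alpha < 1):

        #include <math.h>

        /* Binomial cdf P(X <= x) via a log-space term recurrence. */
        static double binom_cdf(int x, int n, double p)
        {
            double log_term = n * log1p(-p), sum = 0.0;
            for (int i = 0; i <= x; i++) {
                sum += exp(log_term);
                log_term += log((double)(n - i) / (i + 1))
                            + log(p) - log1p(-p);
            }
            return sum;
        }

        /* Upper Clopper-Pearson limit: the p with P(X <= x; n, p) = alpha. */
        double clopper_pearson_upper(int x, int n, double alpha)
        {
            double lo = 0.0, hi = 1.0;
            for (int it = 0; it < 60; it++) {     /* ~2^-60 interval width */
                double mid = 0.5 * (lo + hi);
                if (binom_cdf(x, n, mid) > alpha) /* cdf decreases in p */
                    lo = mid;
                else
                    hi = mid;
            }
            return 0.5 * (lo + hi);
        }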

  13. Cumulative radiation effect

    International Nuclear Information System (INIS)

    Kirk, J.; Gray, W.M.; Watson, E.R.

    1977-01-01

    In five previous papers, the concept of Cumulative Radiation Effect (CRE) has been presented as a scale of accumulative sub-tolerance radiation damage, with a unique value of the CRE describing a specific level of radiation effect. Simple nomographic and tabular methods for the solution of practical problems in radiotherapy are now described. An essential feature of solving a CRE problem is firstly to present it in a concise and readily appreciated form, and, to do this, nomenclature has been introduced to describe schedules and regimes as compactly as possible. Simple algebraic equations have been derived to describe the CRE achieved by multi-schedule regimes. In these equations, the equivalence conditions existing at the junctions between schedules are not explicit and the equations are based on the CREs of the constituent schedules assessed individually without reference to their context in the regime as a whole. This independent evaluation of CREs for each schedule has resulted in a considerable simplification in the calculation of complex problems. The calculations are further simplified by the use of suitable tables and nomograms, so that the mathematics involved is reduced to simple arithmetical operations which require at the most the use of a slide rule but can be done by hand. The order of procedure in the presentation and calculation of CRE problems can be summarised in an evaluation procedure sheet. The resulting simple methods for solving practical problems of any complexity on the CRE-system are demonstrated by a number of examples. (author)

  14. Cumulative radiation effect

    International Nuclear Information System (INIS)

    Kirk, J.; Cain, O.; Gray, W.M.

    1977-01-01

    Cumulative Radiation Effect (CRE) represents a scale of accumulative sub-tolerance radiation damage, with a unique value of the CRE describing a specific level of radiation effect. Computer calculations have been used to simplify the evaluation of problems associated with the applications of the CRE-system in radiotherapy. In a general appraisal of the applications of computers to the CRE-system, the various problems encountered in clinical radiotherapy have been categorised into those involving the evaluation of a CRE at a point in tissue and those involving the calculation of CRE distributions. As a general guide, the computer techniques adopted at the Glasgow Institute of Radiotherapeutics for the solution of CRE problems are presented, and consist basically of a package of three interactive programs for point CRE calculations and a Fortran program which calculates CRE distributions for iso-effect treatment planning. Many examples are given to demonstrate the applications of these programs, and special emphasis has been laid on the problem of treating a point in tissue with different doses per fraction on alternate treatment days. The wide range of possible clinical applications of the CRE-system has been outlined and described under the categories of routine clinical applications, retrospective and prospective surveys of patient treatment, and experimental and theoretical research. Some of these applications such as the results of surveys and studies of time optimisation of treatment schedules could have far-reaching consequences and lead to significant improvements in treatment and cure rates with the minimum damage to normal tissue. (author)

  15. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  16. Estimation of Cumulative Absolute Velocity using Empirical Green's Function Method

    International Nuclear Information System (INIS)

    Park, Dong Hee; Yun, Kwan Hee; Chang, Chun Joong; Park, Se Moon

    2009-01-01

    In recognition of the need to develop a new criterion for determining when the OBE (Operating Basis Earthquake) has been exceeded at nuclear power plants, the Cumulative Absolute Velocity (CAV) was introduced by EPRI. The CAV is the accumulated area under the absolute acceleration record, counted during every one-second interval in which the acceleration exceeds 0.025g: CAV = ∫₀^tmax |a(t)| dt (1), where tmax is the duration of the record and a(t) is the acceleration (counted only where |a(t)| > 0.025g). Currently, the OBE exceedance criterion in Korea is Peak Ground Acceleration (PGA > 0.1g). When the Odesan earthquake (ML=4.8, January 20th, 2007) and the Gyeongju earthquake (ML=3.4, June 2nd, 1999) occurred, PGA values greater than 0.1g were recorded that did not cause any damage even to the poorly designed structures nearby. These moderate earthquakes have motivated Korea to begin using the CAV as the OBE exceedance criterion for NPPs, because the present OBE level has proved to be a poor indicator for small-to-moderate earthquakes, for which the low OBE level can cause an inappropriate shutdown of the plant. A more serious possibility is that this scenario will become a reality at a very high level. The empirical Green's function method, a simulation technique that can estimate the CAV value, is introduced here

  17. Secant cumulants and toric geometry

    NARCIS (Netherlands)

    Michalek, M.; Oeding, L.; Zwiernik, P.W.

    2012-01-01

    We study the secant line variety of the Segre product of projective spaces using special cumulant coordinates adapted for secant varieties. We show that the secant variety is covered by open normal toric varieties. We prove that in cumulant coordinates its ideal is generated by binomial quadrics. We

  18. Seismicity map tools for earthquake studies

    Science.gov (United States)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

    We report on the development of a new online set of tools for use within Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, Javascript, MySQL) with the new tools using a database system with earthquake data. The platform allows us to carry out statistical and deterministic analysis of earthquake data using Google Maps and to plot various seismicity graphs. The tool box has been extended to draw on the map line segments, multiple straight lines horizontally and vertically, as well as multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and of earthquake cluster shift within the segments in space. The platform offers many filters, such as for plotting selected magnitude ranges or time periods. The plotting facility allows statistically based plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, and calculation of the b-value. What is novel for the platform is the additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools we have studied the spatial distribution trends of many earthquakes, and we show here for the first time a link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.
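
    The b-value calculation mentioned above is commonly done with the Aki (1965) maximum-likelihood estimator; the sketch below assumes the platform uses this standard formula (with Utsu's correction dm for binned magnitudes), an assumption on our part rather than a statement about the authors' code:

        #include <math.h>

        /* Maximum-likelihood Gutenberg-Richter b-value.  mags[] holds the
           n catalog magnitudes at or above the completeness magnitude mc;
           dm is the magnitude bin width (often 0.1). */
        double b_value(const double *mags, int n, double mc, double dm)
        {
            double mean = 0.0;
            for (int i = 0; i < n; i++)
                mean += mags[i];
            mean /= n;
            return log10(exp(1.0)) / (mean - (mc - dm / 2.0));
        }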

  19. Earthquake Facts

    Science.gov (United States)

    ... North Dakota, and Wisconsin. The core of the earth was the first internal structural element to be identified. In 1906 R.D. Oldham discovered it from his studies of earthquake records. The inner core is solid, and the outer core is liquid and so does not transmit ...

  20. Understanding Earthquakes

    Science.gov (United States)

    Davis, Amanda; Gray, Ron

    2018-01-01

    December 26, 2004 was one of the deadliest days in modern history, when a 9.3 magnitude earthquake--the third largest ever recorded--struck off the coast of Sumatra in Indonesia (National Centers for Environmental Information 2014). The massive quake lasted at least 10 minutes and devastated the Indian Ocean. The quake displaced an estimated…

  1. The challenge of cumulative impacts

    Energy Technology Data Exchange (ETDEWEB)

    Masden, Elisabeth

    2011-07-01

    Full text: As governments pledge to combat climate change, wind turbines are becoming a common feature of terrestrial and marine environments. Although wind power is a renewable energy source and a means of reducing carbon emissions, there is a need to ensure that the wind farms themselves do not damage the environment. There is particular concern over the impacts of wind farms on bird populations, and with increasing numbers of wind farm proposals, the concern focuses on cumulative impacts. Individually, a wind farm, or indeed any activity/action, may have minor effects on the environment, but collectively these may be significant, potentially greater than the sum of the individual parts acting alone. Cumulative impact assessment is a legislative requirement of environmental impact assessment, but such assessments are rarely adequate, restricting the acquisition of basic knowledge about the cumulative impacts of wind farms on bird populations. Reasons for this are numerous, but a recurring theme is the lack of clear definitions and guidance on how to perform cumulative assessments. Here we present a conceptual framework and include illustrative examples to demonstrate how the framework can be used to improve the planning and execution of cumulative impact assessments. The core concept is that explicit definitions of impacts, actions and scales of assessment are required to reduce uncertainty in the process of assessment and improve communication between stakeholders. Only when it is clear what has been included within a cumulative assessment is it possible to make comparisons between developments. Our framework requires improved legislative guidance on the actions to include in assessments, and advice on the appropriate baselines against which to assess impacts. Cumulative impacts are currently considered on restricted scales (spatial and temporal) relating to individual development assessments. We propose that benefits would be gained from elevating cumulative

  2. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  3. Defeating Earthquakes

    Science.gov (United States)

    Stein, R. S.

    2012-12-01

    The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the 2010 M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth. And this was compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride the plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is. At the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake risk awareness. We have no illusions that maps or models raise awareness; instead, earthquakes do. But when a quake strikes, people need a credible place to go to answer the question, how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by

  4. Earthquake Early Warning Systems

    OpenAIRE

    Pei-Yang Lin

    2011-01-01

    Because of Taiwan's unique geographical environment, earthquake disasters occur frequently in Taiwan. The Central Weather Bureau collated earthquake data from 1901 to 2006 (Central Weather Bureau, 2007) and found that 97 earthquakes had occurred, of which 52 resulted in casualties. The 921 Chichi Earthquake had the most profound impact. Because earthquakes have instant destructive power and current scientific technologies cannot provide precise early warnings in advance, earthquake ...

  5. Cyclic characteristics of earthquake time histories

    International Nuclear Information System (INIS)

    Hall, J.R. Jr; Shukla, D.K.; Kissenpfennig, J.F.

    1977-01-01

    From an engineering standpoint, an earthquake record may be characterized by a number of parameters, one of which is its 'cyclic characteristics'. The cyclic characteristics are most significant in fatigue analysis of structures and liquefaction analysis of soils where, in addition to the peak motion, cyclic buildup is significant. Whereas duration, peak amplitude and response spectra for earthquakes have been studied extensively, the cyclic characteristics of earthquake records have not received equivalent attention. Present procedures to define the cyclic characteristics are generally based upon counting the number of peaks in various amplitude ranges on a record. This paper presents a computer approach which describes a time history by an amplitude envelope and a phase curve. Using Fast Fourier Transform techniques, an earthquake time history is represented as the projection along the x-axis of a rotating vector; the length of the vector is given by the amplitude spectrum, and the angle between the vector and the x-axis is given by the phase curve. Thus one cycle is completed when the vector makes a full rotation. Based upon Miner's cumulative damage concept, the computer code automatically combines the cycles of various amplitudes to obtain the equivalent number of cycles of a given amplitude. To illustrate the overall results, the cyclic characteristics of several real and synthetic earthquake time histories have been studied and are presented in the paper, with the conclusion that this procedure provides a physical interpretation of the cyclic characteristics of earthquakes. (Auth.)
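
    The cycle-combination step can be made concrete with a small sketch (our illustration of Miner's rule, not the paper's code), assuming a power-law fatigue relation N(a) ∝ a^(-m) between amplitude and cycles to failure:

        #include <math.h>

        /* Reduce counted cycles of mixed amplitudes to an equivalent
           number of cycles at a reference amplitude a_ref.  amp[] and
           cnt[] are the k counted amplitudes and their cycle counts
           (hypothetical inputs); m is the assumed S-N exponent. */
        double equivalent_cycles(const double *amp, const double *cnt,
                                 int k, double a_ref, double m)
        {
            double n_eq = 0.0;
            for (int i = 0; i < k; i++)
                /* cnt[i] cycles at amp[i] cause the same Miner damage as
                   cnt[i] * (amp[i]/a_ref)^m cycles at a_ref */
                n_eq += cnt[i] * pow(amp[i] / a_ref, m);
            return n_eq;
        }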

  6. Cumulative risk, cumulative outcome: a 20-year longitudinal study.

    Directory of Open Access Journals (Sweden)

    Leslie Atkinson

    Full Text Available Cumulative risk (CR models provide some of the most robust findings in the developmental literature, predicting numerous and varied outcomes. Typically, however, these outcomes are predicted one at a time, across different samples, using concurrent designs, longitudinal designs of short duration, or retrospective designs. We predicted that a single CR index, applied within a single sample, would prospectively predict diverse outcomes, i.e., depression, intelligence, school dropout, arrest, smoking, and physical disease from childhood to adulthood. Further, we predicted that number of risk factors would predict number of adverse outcomes (cumulative outcome; CO. We also predicted that early CR (assessed at age 5/6 explains variance in CO above and beyond that explained by subsequent risk (assessed at ages 12/13 and 19/20. The sample consisted of 284 individuals, 48% of whom were diagnosed with a speech/language disorder. Cumulative risk, assessed at 5/6-, 12/13-, and 19/20-years-old, predicted aforementioned outcomes at age 25/26 in every instance. Furthermore, number of risk factors was positively associated with number of negative outcomes. Finally, early risk accounted for variance beyond that explained by later risk in the prediction of CO. We discuss these findings in terms of five criteria posed by these data, positing a "mediated net of adversity" model, suggesting that CR may increase some central integrative factor, simultaneously augmenting risk across cognitive, quality of life, psychiatric and physical health outcomes.

  7. Earthquake geology of the Bulnay Fault (Mongolia)

    Science.gov (United States)

    Rizza, Magali; Ritz, Jean-Franciois; Prentice, Carol S.; Vassallo, Ricardo; Braucher, Regis; Larroque, Christophe; Arzhannikova, A.; Arzhanikov, S.; Mahan, Shannon; Massault, M.; Michelot, J-L.; Todbileg, M.

    2015-01-01

    The Bulnay earthquake of July 23, 1905 (Mw 8.3-8.5), in north-central Mongolia, is one of the world's largest recorded intracontinental earthquakes and one of four great earthquakes that occurred in the region during the 20th century. The 375-km-long surface rupture of the left-lateral, strike-slip, N095°E trending Bulnay Fault associated with this earthquake is remarkable for its pronounced expression across the landscape and for the size of features produced by previous earthquakes. Our field observations suggest that in many areas the width and geometry of the rupture zone is the result of repeated earthquakes; however, in those areas where it is possible to determine that the geomorphic features are the result of the 1905 surface rupture alone, the size of the features produced by this single earthquake is singular in comparison to most other historical strike-slip surface ruptures worldwide. Along the 80 km stretch between 97.18°E and 98.33°E, the fault zone is several meters wide, and the mean left-lateral 1905 offset is 8.9 ± 0.6 m, with two measured cumulative offsets that are twice the 1905 slip. These observations suggest that the displacement produced during the penultimate event was similar to the 1905 slip. Morphotectonic analyses carried out at three sites along the eastern part of the Bulnay fault allow us to estimate a mean horizontal slip rate of 3.1 ± 1.7 mm/yr over the Late Pleistocene-Holocene period. In parallel, paleoseismological investigations show evidence for two earthquakes prior to the 1905 event, with recurrence intervals of ~2700-4000 years.

  8. The Algebra of the Cumulative Percent Operation.

    Science.gov (United States)

    Berry, Andrew J.

    2002-01-01

    Discusses how to help students avoid some pervasive reasoning errors in solving cumulative percent problems. Discusses the meaning of "a% + b%," the additive inverse of "a%," and other useful applications. Emphasizes the operational aspect of the cumulative percent concept. (KHR)
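
    The operation in question composes multiplicatively: (1 + a/100)(1 + b/100) = 1 + (a + b + ab/100)/100, so a% followed by b% is a single change of (a + b + ab/100)%. A one-line version (our reconstruction; the abstract does not give the formula explicitly):

        /* Cumulative percent: applying a% then b% equals one change of
           (a + b + a*b/100)%.  E.g. +10% then -10% gives -1%, not 0% --
           the kind of reasoning error the article addresses. */
        double cum_percent(double a, double b)
        {
            return a + b + a * b / 100.0;
        }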

  9. Adaptive strategies for cumulative cultural learning.

    Science.gov (United States)

    Ehn, Micael; Laland, Kevin

    2012-05-21

    The demographic and ecological success of our species is frequently attributed to our capacity for cumulative culture. However, it is not yet known how humans combine social and asocial learning to generate effective strategies for learning in a cumulative cultural context. Here we explore how cumulative culture influences the relative merits of various pure and conditional learning strategies, including pure asocial and social learning, critical social learning, conditional social learning and individual refiner strategies. We replicate the Rogers' paradox in the cumulative setting. However, our analysis suggests that strategies that resolved Rogers' paradox in a non-cumulative setting may not necessarily evolve in a cumulative setting, thus different strategies will optimize cumulative and non-cumulative cultural learning. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. 32 CFR 651.16 - Cumulative impacts.

    Science.gov (United States)

    2010-07-01

    § 651.16 Cumulative impacts. (a) NEPA analyses must assess cumulative effects, which are the impact on the environment resulting from the incremental impact of the action when added to other past, present...

  11. A paradox of cumulative culture.

    Science.gov (United States)

    Kobayashi, Yutaka; Wakano, Joe Yuichiro; Ohtsuki, Hisashi

    2015-08-21

    Culture can grow cumulatively if socially learnt behaviors are improved by individual learning before being passed on to the next generation. Previous authors showed that this kind of learning strategy is unlikely to be evolutionarily stable in the presence of a trade-off between learning and reproduction. This is because culture is a public good that is freely exploited by any member of the population in their model (cultural social dilemma). In this paper, we investigate the effect of vertical transmission (transmission from parents to offspring), which decreases the publicness of culture, on the evolution of cumulative culture in both infinite and finite population models. In the infinite population model, we confirm that culture accumulates largely as long as transmission is purely vertical. It turns out, however, that introduction of even slight oblique transmission drastically reduces the equilibrium level of culture. Even more surprisingly, if the population size is finite, culture hardly accumulates even under purely vertical transmission. This occurs because stochastic extinction due to random genetic drift prevents a learning strategy from accumulating enough culture. Overall, our theoretical results suggest that introducing vertical transmission alone does not really help solve the cultural social dilemma problem. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Earthquakes: hydrogeochemical precursors

    Science.gov (United States)

    Ingebritsen, Steven E.; Manga, Michael

    2014-01-01

    Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

  13. Ground water and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ts'ai, T.H.

    1977-11-01

    Chinese folk wisdom has long seen a relationship between ground water and earthquakes. Before an earthquake there is often an unusual change in the ground water level and volume of flow. Changes in the amount of particulate matter in ground water as well as changes in color, bubbling, gas emission, and noises and geysers are also often observed before earthquakes. Analysis of these features can help predict earthquakes. Other factors unrelated to earthquakes can cause some of these changes, too. As a first step it is necessary to find sites which are sensitive to changes in ground stress to be used as sensor points for predicting earthquakes. The necessary features are described. Recording of seismic waves of earthquake aftershocks is also an important part of earthquake predictions.

  14. Ionospheric earthquake precursors

    International Nuclear Information System (INIS)

    Bulachenko, A.L.; Oraevskij, V.N.; Pokhotelov, O.A.; Sorokin, V.N.; Strakhov, V.N.; Chmyrev, V.M.

    1996-01-01

    Results of experimental studies of ionospheric earthquake precursors, the development of a program on processes in the earthquake focus, and the physical mechanisms of formation of various types of precursors are considered. The composition of an experimental space-based system for monitoring earthquake precursors is determined. 36 refs., 5 figs

  15. Children's Ideas about Earthquakes

    Science.gov (United States)

    Simsek, Canan Lacin

    2007-01-01

    Earthquakes, as natural disasters, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced to that extent. In particular, good training regarding earthquakes received in primary schools is considered…

  16. Crowdsourced earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  17. Scaling and spatial complementarity of tectonic earthquake swarms

    KAUST Repository

    Passarelli, Luigi

    2017-11-10

    Tectonic earthquake swarms (TES) often coincide with aseismic slip and sometimes precede damaging earthquakes. In spite of recent progress in understanding the significance and properties of TES at plate boundaries, their mechanics and scaling are still largely uncertain. Here we evaluate several TES that occurred during the past 20 years on a transform plate boundary in North Iceland. We show that the swarms complement each other spatially with later swarms discouraged from fault segments activated by earlier swarms, which suggests efficient strain release and aseismic slip. The fault area illuminated by earthquakes during swarms may be more representative of the total moment release than the cumulative moment of the swarm earthquakes. We use these findings and other published results from a variety of tectonic settings to discuss general scaling properties for TES. The results indicate that the importance of TES in releasing tectonic strain at plate boundaries may have been underestimated.

  18. Earthquake forecasting and warning

    Energy Technology Data Exchange (ETDEWEB)

    Rikitake, T.

    1983-01-01

    This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings of precursory phenomena are included. A summary of circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all magnitudes ≥ 7.0) in China and the 1978 Izu-Oshima earthquake in Japan is presented. This book fails to comprehensively summarize recent advances in earthquake prediction research.

  19. Cumulative trauma disorders: A review.

    Science.gov (United States)

    Iqbal, Zaheen A; Alghadir, Ahmad H

    2017-08-03

    Cumulative trauma disorder (CTD) is a term for various injuries of the musculoskeletal and nervous systems that are caused by repetitive tasks, forceful exertions, vibrations, mechanical compression or sustained postures. Although there are many studies citing the incidence of CTDs, there are fewer articles about their etiology, pathology and management. The aim of our study was to discuss the etiology, pathogenesis, prevention and management of CTDs. A literature search was performed using various electronic databases. The search was limited to English-language articles pertaining to randomized clinical trials, cohort studies and systematic reviews of CTDs. A total of 180 relevant papers published since 1959 were identified. Of these, 125 papers reported on incidence and 50 on conservative treatment. Workplace environment, task repetition with little variability, decreased time for rest, and increased expectations are major factors in developing CTDs. Prevention targeting these causes, together with early diagnosis, is the best way to decrease incidence and severity. For effective management of CTDs, treatment should be divided into Primordial, Primary, Secondary and Tertiary prevention.

  20. Complete cumulative index (1963-1983)

    International Nuclear Information System (INIS)

    1983-01-01

    This complete cumulative index covers all regular and special issues and supplements published by Atomic Energy Review (AER) during its lifetime (1963-1983). The complete cumulative index consists of six Indexes: the Index of Abstracts, the Subject Index, the Title Index, the Author Index, the Country Index and the Table of Elements Index. The complete cumulative index supersedes the Cumulative Indexes for Volumes 1-7: 1963-1969 (1970), and for Volumes 1-10: 1963-1972 (1972); this Index also finalizes Atomic Energy Review, the publication of which has recently been terminated by the IAEA

  1. Statistical characteristics of seismo-ionospheric GPS TEC disturbances prior to global Mw ≥ 5.0 earthquakes (1998-2014)

    Science.gov (United States)

    Shah, Munawar; Jin, Shuanggen

    2015-12-01

    Pre-earthquake ionospheric anomalies remain challenging to detect and understand, particularly across different earthquake magnitudes, focal depths, and fault types. In this paper, the seismo-ionospheric disturbances (SID) related to 1492 global Mw ≥ 5.0 earthquakes from 1998 to 2014 are investigated using the total electron content (TEC) of GPS global ionosphere maps (GIM). Statistical analysis of 10-day TEC data before global Mw ≥ 5.0 earthquakes shows significant enhancement 5 days before earthquakes of Mw ≥ 6.0 at a 95% confidence level. Earthquakes with a focal depth of less than 60 km and Mw ≥ 6.0 are presumably the root of the deviations in ionospheric TEC, because earthquake breeding zones have gigantic quantities of energy at shallower focal depths. Increased anomalous TEC is recorded in cumulative percentages beyond Mw = 5.5, and a sharpening of the cumulative percentages is evident in seismo-ionospheric disturbance prior to Mw ≥ 6.0 earthquakes. Seismo-ionospheric disturbances related to strike-slip and thrust earthquakes are noticeable for Mw 6.0-7.0 earthquakes. The relative values reveal high ratios (up to 2) and low ratios (up to -0.5) within 5 days prior to global earthquakes for positive and negative anomalies, respectively. The anomalous patterns in TEC related to earthquakes are possibly due to the coupling of high amounts of energy from earthquake breeding zones of higher magnitude and shallower focal depth.
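
    Such 10-day tests are typically implemented as a running-bound check; the sketch below uses mean ± 2σ over the preceding 10 days, which roughly matches the quoted 95% confidence level (an assumption on our part, not the authors' exact procedure):

        #include <math.h>

        /* Flag day i as anomalous when its TEC leaves the mean +/- 2 sigma
           band of the previous 10 days.  tec[] holds one value per day at
           a fixed epoch; requires i >= 10. */
        int tec_anomalous(const double *tec, int i)
        {
            double mean = 0.0, var = 0.0;
            for (int d = i - 10; d < i; d++)
                mean += tec[d];
            mean /= 10.0;
            for (int d = i - 10; d < i; d++)
                var += (tec[d] - mean) * (tec[d] - mean);
            double sigma = sqrt(var / 10.0);
            return tec[i] > mean + 2.0 * sigma || tec[i] < mean - 2.0 * sigma;
        }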

  2. Slip in the 1857 and earlier large earthquakes along the Carrizo Plain, San Andreas Fault.

    Science.gov (United States)

    Zielke, Olaf; Arrowsmith, J Ramón; Grant Ludwig, Lisa; Akçiz, Sinan O

    2010-02-26

    The moment magnitude (Mw) 7.9 Fort Tejon earthquake of 1857, with a approximately 350-kilometer-long surface rupture, was the most recent major earthquake along the south-central San Andreas Fault, California. Based on previous measurements of its surface slip distribution, rupture along the approximately 60-kilometer-long Carrizo segment was thought to control the recurrence of 1857-like earthquakes. New high-resolution topographic data show that the average slip along the Carrizo segment during the 1857 event was 5.3 +/- 1.4 meters, eliminating the core assumption for a linkage between Carrizo segment rupture and recurrence of major earthquakes along the south-central San Andreas Fault. Earthquake slip along the Carrizo segment may recur in earthquake clusters with cumulative slip of approximately 5 meters.

  3. Encyclopedia of earthquake engineering

    CERN Document Server

    Kougioumtzoglou, Ioannis; Patelli, Edoardo; Au, Siu-Kui

    2015-01-01

    The Encyclopedia of Earthquake Engineering is designed to be the authoritative and comprehensive reference covering all major aspects of the science of earthquake engineering, specifically focusing on the interaction between earthquakes and infrastructure. The encyclopedia comprises approximately 265 contributions. Since earthquake engineering deals with the interaction between earthquake disturbances and the built infrastructure, the emphasis is on basic design processes important to both non-specialists and engineers, so that readers become suitably well-informed without needing to deal with the details of specialist understanding. The content of this encyclopedia informs technically inclined readers about the ways in which earthquakes can affect our infrastructure and how engineers would go about designing against, mitigating and remediating these effects. The coverage ranges over buildings, foundations, underground construction, lifelines and bridges, roads, embankments and slopes. The encycl...

  4. System-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, NEWTONP, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), used independently of one another. Program finds probability required to yield given system reliability. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.

  5. Common-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CROSSER, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), used independently of one another. Point of equality between reliability of system and common reliability of components found. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.

  6. Cumulative human impacts on marine predators

    DEFF Research Database (Denmark)

    Maxwell, Sara M; Hazen, Elliott L; Bograd, Steven J

    2013-01-01

    Stressors associated with human activities interact in complex ways to affect marine ecosystems, yet we lack spatially explicit assessments of cumulative impacts on ecologically and economically key components such as marine predators. Here we develop a metric of cumulative utilization and impact...

  7. Cumulative Student Loan Debt in Minnesota, 2015

    Science.gov (United States)

    Williams-Wyche, Shaun

    2016-01-01

    To better understand student debt in Minnesota, the Minnesota Office of Higher Education (the Office) gathers information on cumulative student loan debt from Minnesota degree-granting institutions. These data detail the number of students with loans by institution, the cumulative student loan debt incurred at that institution, and the percentage…

  8. Earthquake at 40 feet

    Science.gov (United States)

    Miller, G. J.

    1976-01-01

    The earthquake that struck the island of Guam on November 1, 1975, at 11:17 a.m. had many unique aspects, not the least of which was the experience of an earthquake of 6.25 Richter magnitude while at 40 feet underwater. My wife Bonnie, a fellow diver, Greg Guzman, and I were diving at Gabgab Beach in the outer harbor of Apra Harbor, engaged in underwater photography when the earthquake struck.

  9. Earthquakes and economic growth

    OpenAIRE

    Fisker, Peter Simonsen

    2012-01-01

    This study explores the economic consequences of earthquakes. In particular, it investigates how exposure to earthquakes affects economic growth both across and within countries. The key result of the empirical analysis is that while there are no observable effects at the country level, earthquake exposure significantly decreases 5-year economic growth at the local level. Areas at lower stages of economic development suffer more in terms of economic growth than richer areas. In addition,...

  10. OMG Earthquake! Can Twitter improve earthquake response?

    Science.gov (United States)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 to 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
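
    The detection idea described above reduces to a rate-spike test against a background rate; a toy sketch follows (thresholds are made-up illustrative values, not USGS parameters):

        /* Flag minute i when the count of "earthquake" tweets jumps well
           above the recent background.  per_min[] holds tweets per minute;
           requires i >= 60. */
        int spike_detected(const int *per_min, int i)
        {
            double background = 0.0;
            for (int m = i - 60; m < i; m++)      /* previous hour's mean */
                background += per_min[m];
            background /= 60.0;
            /* 10x background plus a floor of 20 tweets/min: illustrative */
            return per_min[i] > 10.0 * background + 20.0;
        }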

  11. Earthquakes and Schools

    Science.gov (United States)

    National Clearinghouse for Educational Facilities, 2008

    2008-01-01

    Earthquakes are low-probability, high-consequence events. Though they may occur only once in the life of a school, they can have devastating, irreversible consequences. Moderate earthquakes can cause serious damage to building contents and non-structural building systems, serious injury to students and staff, and disruption of building operations.…

  12. Bam Earthquake in Iran

    CERN Multimedia

    2004-01-01

    Following their request for help from members of international organisations, the permanent Mission of the Islamic Republic of Iran has given the following bank account number, where you can donate money to help the victims of the Bam earthquake. Re: Bam earthquake 235 - UBS 311264.35L Bubenberg Platz 3001 BERN

  13. Tradable Earthquake Certificates

    NARCIS (Netherlands)

    Woerdman, Edwin; Dulleman, Minne

    2018-01-01

    This article presents a market-based idea to compensate for earthquake damage caused by the extraction of natural gas and applies it to the case of Groningen in the Netherlands. Earthquake certificates give homeowners a right to yearly compensation for both property damage and degradation of living

  14. Historic Eastern Canadian earthquakes

    International Nuclear Information System (INIS)

    Asmis, G.J.K.; Atchinson, R.J.

    1981-01-01

    Nuclear power plants licensed in Canada have been designed to resist earthquakes; not all plants, however, have been explicitly designed to the same level of earthquake-induced forces. Understanding the nature of strong ground motion near the source of the earthquake is still very tentative. This paper reviews historical and scientific accounts of the three strongest earthquakes - St. Lawrence (1925), Temiskaming (1935), Cornwall (1944) - that have occurred in Canada in 'modern' times, field studies of near-field strong ground motion records and their resultant damage or non-damage to industrial facilities, and numerical modelling of earthquake sources and resultant wave propagation to produce accelerograms consistent with the above historical record and field studies. It is concluded that for future construction of NPPs, near-field strong motion must be explicitly considered in design

  15. Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake

    Science.gov (United States)

    Durukal, E.; Sesetyan, K.; Erdik, M.

    2009-04-01

    The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake, with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses, and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The Turkish Catastrophe Insurance Pool (TCIP) is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and about one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, an inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurred building losses in Istanbul in the event of a large earthquake. The annualized earthquake losses in Istanbul are between 140 and 300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification to the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model, losses would not be indemnified individually; instead, payouts would be calculated directly on the basis of indexed ground motion levels and damage. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing

  16. The Relationship between Gender, Cumulative Adversities and ...

    African Journals Online (AJOL)

    The Relationship between Gender, Cumulative Adversities and Mental Health of Employees in ... CAs were measured in three forms (family adversities (CAFam), personal adversities ... Age of employees ranged between 18-65 years.

  17. Cumulative cultural learning: Development and diversity

    Science.gov (United States)

    2017-01-01

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children’s learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission—the cornerstone of human cultural diversity. PMID:28739945

  18. Complexity and demographic explanations of cumulative culture

    NARCIS (Netherlands)

    Querbes, A.; Vaesen, K.; Houkes, W.N.

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological

  19. Cumulative human impacts on marine predators.

    Science.gov (United States)

    Maxwell, Sara M; Hazen, Elliott L; Bograd, Steven J; Halpern, Benjamin S; Breed, Greg A; Nickel, Barry; Teutschel, Nicole M; Crowder, Larry B; Benson, Scott; Dutton, Peter H; Bailey, Helen; Kappes, Michelle A; Kuhn, Carey E; Weise, Michael J; Mate, Bruce; Shaffer, Scott A; Hassrick, Jason L; Henry, Robert W; Irvine, Ladd; McDonald, Birgitte I; Robinson, Patrick W; Block, Barbara A; Costa, Daniel P

    2013-01-01

    Stressors associated with human activities interact in complex ways to affect marine ecosystems, yet we lack spatially explicit assessments of cumulative impacts on ecologically and economically key components such as marine predators. Here we develop a metric of cumulative utilization and impact (CUI) on marine predators by combining electronic tracking data of eight protected predator species (n=685 individuals) in the California Current Ecosystem with data on 24 anthropogenic stressors. We show significant variation in CUI with some of the highest impacts within US National Marine Sanctuaries. High variation in underlying species and cumulative impact distributions means that neither alone is sufficient for effective spatial management. Instead, comprehensive management approaches accounting for both cumulative human impacts and trade-offs among multiple stressors must be applied in planning the use of marine resources.

  20. Cumulative cultural learning: Development and diversity.

    Science.gov (United States)

    Legare, Cristine H

    2017-07-24

    The complexity and variability of human culture is unmatched by any other species. Humans live in culturally constructed niches filled with artifacts, skills, beliefs, and practices that have been inherited, accumulated, and modified over generations. A causal account of the complexity of human culture must explain its distinguishing characteristics: It is cumulative and highly variable within and across populations. I propose that the psychological adaptations supporting cumulative cultural transmission are universal but are sufficiently flexible to support the acquisition of highly variable behavioral repertoires. This paper describes variation in the transmission practices (teaching) and acquisition strategies (imitation) that support cumulative cultural learning in childhood. Examining flexibility and variation in caregiver socialization and children's learning extends our understanding of evolution in living systems by providing insight into the psychological foundations of cumulative cultural transmission-the cornerstone of human cultural diversity.

  1. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
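
    As an illustration of the kind of calculation CUMBIN performs, here is a minimal Python sketch (not the NASA C program itself; the function name is hypothetical) of a cumulative binomial probability applied to k-out-of-n system reliability:

      # Minimal sketch of a cumulative binomial calculation of the kind
      # CUMBIN performs; direct summation is illustrative only.
      from math import comb

      def cum_binom_at_least(k: int, n: int, p: float) -> float:
          """P(X >= k) for X ~ Binomial(n, p), summed directly."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                     for i in range(k, n + 1))

      # A k-out-of-n system works if at least k of its n identical
      # components (each working with probability p) work.
      print(cum_binom_at_least(2, 3, 0.95))  # 2-out-of-3 system: 0.99275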

  2. About the cumulants of periodic signals

    Science.gov (United States)

    Barrau, Axel; El Badaoui, Mohammed

    2018-01-01

    This note studies cumulants of time series. Although these functions originate in probability theory, they are commonly used as features of deterministic signals, and their classical properties are examined in this modified framework. We show that the additivity of cumulants, ensured in the case of independent random variables, requires a different hypothesis here. Practical applications are proposed, in particular an analysis of the failure of the JADE algorithm to separate some specific periodic signals.
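
    For reference, a small numerical check of the classical additivity property mentioned above (for independent random variables, the case the note contrasts with periodic signals), using SciPy's k-statistics as unbiased cumulant estimators:

      # Numerical check: cumulants are additive for sums of independent
      # random variables, kappa_n(X + Y) = kappa_n(X) + kappa_n(Y).
      import numpy as np
      from scipy.stats import kstat

      rng = np.random.default_rng(0)
      x = rng.exponential(2.0, size=200_000)  # independent samples
      y = rng.normal(1.0, 3.0, size=200_000)

      for n in (1, 2, 3):  # mean, variance, third cumulant
          lhs = kstat(x + y, n)
          rhs = kstat(x, n) + kstat(y, n)
          print(n, round(lhs, 3), round(rhs, 3))  # approximately equal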

  3. Cumulative effects assessment: Does scale matter?

    International Nuclear Information System (INIS)

    Therivel, Riki; Ross, Bill

    2007-01-01

    Cumulative effects assessment (CEA) is (or should be) an integral part of environmental assessment at both the project and the more strategic level. CEA helps to link the different scales of environmental assessment in that it focuses on how a given receptor is affected by the totality of plans, projects and activities, rather than on the effects of a particular plan or project. This article reviews how CEAs consider, and could consider, scale issues: spatial extent, level of detail, and temporal issues. It is based on an analysis of Canadian project-level CEAs and UK strategic-level CEAs, together with a review of the literature and, especially, case studies with which the authors are familiar. It concludes that scale issues are poorly considered at both levels, with particular problems being unclear or nonexistent cumulative effects scoping methodologies; poor consideration of past or likely future human activities beyond the plan or project in question; attempts to apportion 'blame' for cumulative effects; and, at the plan level, limited management of cumulative effects caused particularly by the absence of consent regimes. Scale issues are important in most of these problems. However, both strategic-level and project-level CEA have much potential for managing cumulative effects through better siting and phasing of development, demand reduction and other behavioural changes, and particularly through setting development consent rules for projects. The lack of strategic resource-based thresholds constrains the robust management of strategic-level cumulative effects

  4. Earthquakes, November-December 1977

    Science.gov (United States)

    Person, W.J.

    1978-01-01

    Two major earthquakes occurred in the last 2 months of the year. A magnitude 7.0 earthquake struck San Juan Province, Argentina, on November 23, causing fatalities and damage. The second major earthquake was a magnitude 7.0 in the Bonin Islands region, an unpopulated area. On December 19, Iran experienced a destructive earthquake, which killed over 500.

  5. Earthquakes, September-October 1986

    Science.gov (United States)

    Person, W.J.

    1987-01-01

    There was one great earthquake (8.0 and above) during this reporting period, in the South Pacific in the Kermadec Islands. There were no major earthquakes (7.0-7.9), but earthquake-related deaths were reported in Greece and in El Salvador. There were no destructive earthquakes in the United States.

  6. Surface slip during large Owens Valley earthquakes

    KAUST Repository

    Haddon, E. K.; Amos, C. B.; Zielke, Olaf; Jayko, A. S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ≈1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ≈0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ≈6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and a net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ≈ 7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ≈0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  7. Surface slip during large Owens Valley earthquakes

    KAUST Repository

    Haddon, E. K.

    2016-01-10

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ≈1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ≈0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ≈6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and a net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ≈ 7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ≈0.6 and 1.6 mm/yr (1σ) over the late Quaternary.

  8. Earthquake hazard assessment and small earthquakes

    International Nuclear Information System (INIS)

    Reiter, L.

    1987-01-01

    The significance of small earthquakes and their treatment in nuclear power plant seismic hazard assessment is an issue which has received increased attention over the past few years. In probabilistic studies, sensitivity studies showed that the choice of the lower bound magnitude used in hazard calculations can have a larger than expected effect on the calculated hazard. Of particular interest is the fact that some of the difference in seismic hazard calculations between the Lawrence Livermore National Laboratory (LLNL) and Electric Power Research Institute (EPRI) studies can be attributed to this choice. The LLNL study assumed a lower bound magnitude of 3.75 while the EPRI study assumed a lower bound magnitude of 5.0. The magnitudes used were assumed to be body wave magnitudes or their equivalents. In deterministic studies recent ground motion recordings of small to moderate earthquakes at or near nuclear power plants have shown that the high frequencies of design response spectra may be exceeded. These exceedances became important issues in the licensing of the Summer and Perry nuclear power plants. At various times in the past particular concerns have been raised with respect to the hazard and damage potential of small to moderate earthquakes occurring at very shallow depths. In this paper a closer look is taken at these issues. Emphasis is given to the impact of lower bound magnitude on probabilistic hazard calculations and the historical record of damage from small to moderate earthquakes. Limited recommendations are made as to how these issues should be viewed

  9. The Challenge of Centennial Earthquakes to Improve Modern Earthquake Engineering

    International Nuclear Information System (INIS)

    Saragoni, G. Rodolfo

    2008-01-01

    The recent commemoration of the centennial of the San Francisco and Valparaiso 1906 earthquakes has given the opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus Messina-Reggio Calabria 1908, had a strong impact on the birth and development of earthquake engineering. The study of the seismic performance of some still-existing buildings that survived centennial earthquakes represents a challenge to better understand the limitations of the earthquake design methods in use today. Of the three centennial earthquakes considered, only the Valparaiso 1906 earthquake has been repeated, as the Central Chile 1985, Ms = 7.8 earthquake. In this paper a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is carried out for the neighborhood of Valparaiso harbor. In this study, the only three centennial buildings of 3 stories that survived both earthquakes almost undamaged were identified. Since accelerograms for the 1985 earthquake were recorded both at El Almendral soil conditions and on rock, the vulnerability analysis of these buildings is done considering instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. It is therefore recommended that more suitable instrumental parameters, such as the destructiveness potential factor, be used in the future to describe earthquake demand

  10. Sun, Moon and Earthquakes

    Science.gov (United States)

    Kolvankar, V. G.

    2013-12-01

    During a study conducted to find the effect of Earth tides on the occurrence of earthquakes, for small areas [typically 1000 km × 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences of magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]: the left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.980, Mb 9.0, EQ count 376); the right-hand figure provides an earthquake plot of (EMD+SEM) vs GMT timings for the same data. All 376 events, including the main event, faithfully follow the straight-line curve.

  11. Mental health problems among survivors in hard-hit areas of the 5.12 Wenchuan and 4.20 Lushan earthquakes.

    Science.gov (United States)

    Xie, Zongtang; Xu, Jiuping; Wu, Zhibin

    2017-02-01

    Earthquake exposure has often been associated with psychological distress. However, little is known about the cumulative effect of exposure to two earthquakes on psychological distress and in particular, the effect on the development of post-traumatic stress disorder (PTSD), anxiety and depression disorders. This study explored the effect of exposure on mental health outcomes after a first earthquake and again after a second earthquake. A population-based mental health survey using self-report questionnaires was conducted on 278 people in the hard-hit areas of Lushan and Baoxing Counties 13-16 months after the Wenchuan earthquake (Sample 1). 191 of these respondents were evaluated again 8-9 months after the Lushan earthquake (Sample 2), which struck almost 5 years after the Wenchuan earthquake. In Sample 1, the prevalence rates for PTSD, anxiety and depression disorders were 44.53, 54.25 and 51.82%, respectively, and in Sample 2 the corresponding rates were 27.27, 38.63 and 36.93%. Females, the middle-aged, those of Tibetan nationality, and people who reported fear during the earthquake were at an increased risk of experiencing post-traumatic symptoms. Although the incidence of PTSD, anxiety and depression disorders decreased from Sample 1 to Sample 2, the cumulative effect of exposure to two earthquakes on mental health problems was serious in the hard-hit areas. Therefore, it is important that psychological counseling be provided for earthquake victims, and especially those exposed to multiple earthquakes.

  12. Repose time and cumulative moment magnitude: A new tool for forecasting eruptions?

    Science.gov (United States)

    Thelen, W. A.; Malone, S. D.; West, M. E.

    2010-09-01

    During earthquake swarms on active volcanoes, one of the primary challenges facing scientists is determining the likelihood of an eruption. Here we present the relation between repose time and the cumulative moment magnitude (CMM) as a tool to aid in differentiating between an eruption and a period of unrest. In several case studies, the CMM is lower at shorter repose times than it is at longer repose times. The relationship between repose time and CMM may be linear in log-log space, particularly at Mount St. Helens. We suggest that the volume and competence of the plug within the conduit drives the strength of the precursory CMM.
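
    The record does not spell out how CMM is computed; the following is a plausible minimal sketch, assuming moments are summed under the standard Hanks-Kanamori moment-magnitude relation (an assumption, not stated in the record):

      # Sketch: cumulative moment magnitude (CMM) of an earthquake swarm.
      # Assumes M0 = 10**(1.5*Mw + 9.1) N·m (Hanks-Kanamori); the record
      # above does not give its exact definition.
      import math

      def cumulative_moment_magnitude(mags):
          total_moment = sum(10 ** (1.5 * m + 9.1) for m in mags)  # in N·m
          return (2.0 / 3.0) * (math.log10(total_moment) - 9.1)

      # One summary magnitude for a swarm of small events:
      print(cumulative_moment_magnitude([2.1, 2.4, 3.0, 2.8]))  # ≈ 3.15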

  13. Predicting Cumulative Incidence Probability: Marginal and Cause-Specific Modelling

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2005-01-01

    cumulative incidence probability; cause-specific hazards; subdistribution hazard; binomial modelling

  14. Predicting Cumulative Incidence Probability by Direct Binomial Regression

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard
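
    The two records above list keywords only; as a minimal illustration of how a cumulative incidence probability is built from cause-specific hazards in discrete time (competing risks), with purely hypothetical numbers:

      # Cumulative incidence of cause 1 under competing risks:
      # F1(t) = sum over periods of S(t-) * hazard1(t), where overall
      # survival S is depleted by *both* causes. Numbers are hypothetical.
      haz1 = [0.02, 0.03, 0.04]  # cause-specific hazard of cause 1
      haz2 = [0.01, 0.01, 0.02]  # competing cause 2

      surv, cuminc1 = 1.0, 0.0
      for h1, h2 in zip(haz1, haz2):
          cuminc1 += surv * h1     # failures from cause 1 this period
          surv *= (1.0 - h1 - h2)  # survivors carried to the next period
      print(round(cuminc1, 4))     # 0.0863, less than naive sum(haz1) = 0.09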

  15. Managing cumulative impacts: A key to sustainability?

    Energy Technology Data Exchange (ETDEWEB)

    Hunsaker, C.T.

    1994-12-31

    This paper addresses how science can be more effectively used in creating policy to manage cumulative effects on ecosystems. The paper focuses on the scientific techniques that we have to identify and to assess cumulative impacts on ecosystems. The term 'sustainable development' was brought into common use by the World Commission on Environment and Development (the Brundtland Commission) in 1987. The Brundtland Commission report highlighted the need to address developmental and environmental imperatives simultaneously by calling for development that 'meets the needs of the present generation without compromising the needs of future generations.' We cannot claim to be working toward sustainable development until we can quantitatively assess cumulative impacts on the environment: the two concepts are inextricably linked in that the elusiveness of cumulative effects likely has the greatest potential of keeping us from achieving sustainability. In this paper, assessment and management frameworks relevant to cumulative impacts are discussed along with recent literature on how to improve such assessments. When possible, examples are given for marine ecosystems.

  16. Earthquake Ground Motion Selection

    Science.gov (United States)

    2012-05-01

    Nonlinear analyses of soils, structures, and soil-structure systems offer the potential for more accurate characterization of geotechnical and structural response under strong earthquake shaking. The increasing use of advanced performance-based desig...

  17. 1988 Spitak Earthquake Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...

  18. The Implications of Strike-Slip Earthquake Source Properties on the Transform Boundary Development Process

    Science.gov (United States)

    Neely, J. S.; Huang, Y.; Furlong, K.

    2017-12-01

    Subduction-Transform Edge Propagator (STEP) faults, produced by the tearing of a subducting plate, allow us to study the development of a transform plate boundary and improve our understanding of both long-term geologic processes and short-term seismic hazards. The 280 km long San Cristobal Trough (SCT), formed by the tearing of the Australia plate as it subducts under the Pacific plate near the Solomon and Vanuatu subduction zones, shows along-strike variations in earthquake behavior. The segment of the SCT closest to the tear rarely hosts earthquakes > Mw 6, whereas the SCT sections more than 80-100 km from the tear experience Mw 7 earthquakes with repeated rupture along the same segments. To understand the effect of cumulative displacement on SCT seismicity, we analyze b-values, centroid time delays and corner frequencies of the SCT earthquakes. We use the spectral ratio method based on empirical Green's functions (eGfs) to isolate source effects from propagation and site effects. We find high b-values along the SCT closest to the tear, with values decreasing with distance before finally increasing again towards the far end of the SCT. Centroid time delays for the Mw 7 strike-slip earthquakes increase with distance from the tear, but corner frequency estimates for a recent sequence of Mw 7 earthquakes are approximately equal, indicating a growing complexity in earthquake behavior with distance from the tear due to a displacement-driven transform boundary development process (see figure). The increasing complexity possibly stems from the earthquakes along the eastern SCT rupturing through multiple asperities, resulting in multiple moment pulses. If not for the bounding Vanuatu subduction zone at the far end of the SCT, the eastern SCT section, which has experienced the most displacement, might be capable of hosting larger earthquakes. When assessing the seismic hazard of other STEP faults, cumulative fault displacement should be considered a key input in

  19. Electromagnetic Manifestation of Earthquakes

    OpenAIRE

    Uvarov Vladimir

    2017-01-01

    In a joint analysis of the results of recording the electrical component of the natural electromagnetic field of the Earth and the catalog of earthquakes in Kamchatka in 2013, unipolar pulses of constant amplitude associated with earthquakes were identified, whose activity is closely correlated with the energy of the electromagnetic field. For the explanation, a hypothesis about the cooperative character of these impulses is proposed.

  20. Electromagnetic Manifestation of Earthquakes

    Directory of Open Access Journals (Sweden)

    Uvarov Vladimir

    2017-01-01

    In a joint analysis of the results of recording the electrical component of the natural electromagnetic field of the Earth and the catalog of earthquakes in Kamchatka in 2013, unipolar pulses of constant amplitude associated with earthquakes were identified, whose activity is closely correlated with the energy of the electromagnetic field. For the explanation, a hypothesis about the cooperative character of these impulses is proposed.

  1. Perspectives on cumulative risks and impacts.

    Science.gov (United States)

    Faust, John B

    2010-01-01

    Cumulative risks and impacts have taken on different meanings in different regulatory and programmatic contexts at the federal and state government levels. Traditional risk assessment methodologies, with considerable limitations, can provide a framework for the evaluation of cumulative risks from chemicals. Under an environmental justice program in California, cumulative impacts are defined to include exposures, public health effects, or environmental effects in a geographic area from the emission or discharge of environmental pollution from all sources, through all media. Furthermore, the evaluation of these effects should take into account sensitive populations and socioeconomic factors where possible and to the extent data are available. Key aspects of this potential approach include the consideration of exposures (versus risk), socioeconomic factors, the geographic or community-level assessment scale, and the inclusion of not only health effects but also environmental effects as contributors to impact. Assessments of this type extend the boundaries of the types of information that toxicologists generally provide for risk management decisions.

  2. Cumulative processes and quark distribution in nuclei

    International Nuclear Information System (INIS)

    Kondratyuk, L.; Shmatikov, M.

    1984-01-01

    Assuming the existence of multiquark (mainly 12q) bags in nuclei, the spectra of cumulative nucleons and mesons produced in high-energy particle-nucleus collisions are discussed. The exponential form of the quark momentum distribution in the 12q bag (agreeing well with the experimental data on lepton-nucleus interactions at large q^2) is shown to result in a quasi-exponential distribution of cumulative particles over the light-cone variable α_B. The dependence of f(α_B; p_⊥) (where p_⊥ is the transverse momentum of the bag) upon p_⊥ is considered. The yields of cumulative resonances, as well as effects related to the u- and d-quark distributions in N > Z nuclei being different, are discussed

  3. Charles Darwin's earthquake reports

    Science.gov (United States)

    Galiev, Shamil

    2010-05-01

    As it is the 200th anniversary of Darwin's birth, 2009 has also been marked as 170 years since the publication of his book Journal of Researches. During the voyage Darwin landed at Valdivia and Concepcion, Chile, just before, during, and after a great earthquake, which demolished hundreds of buildings, killing and injuring many people. Land was waved, lifted, and cracked, volcanoes awoke and giant ocean waves attacked the coast. Darwin was the first geologist to observe and describe the effects of the great earthquake during and immediately after. These effects are sometimes repeated during severe earthquakes; but great earthquakes, like Chile 1835, and giant earthquakes, like Chile 1960, are rare and remain completely unpredictable. This is one of the few areas of science where experts remain largely in the dark. Darwin suggested that the effects were a result of '…the rending of strata, at a point not very deep below the surface of the earth…' and '…when the crust yields to the tension, caused by its gradual elevation, there is a jar at the moment of rupture, and a greater movement...'. Darwin formulated big ideas about the Earth's evolution and its dynamics. These ideas set the tone for the tectonic plate theory to come. However, plate tectonics does not completely explain why earthquakes occur within plates. Darwin emphasised that there are different kinds of earthquakes: '...I confine the foregoing observations to the earthquakes on the coast of South America, or to similar ones, which seem generally to have been accompanied by elevation of the land. But, as we know that subsidence has gone on in other quarters of the world, fissures must there have been formed, and therefore earthquakes...' (we cite Darwin's sentences following researchspace.auckland.ac.nz/handle/2292/4474). These thoughts agree with the results of recent publications (see Nature 461, 870-872; 636-639 and 462, 42-43; 87-89). About 200 years ago Darwin gave oneself airs by the

  4. Cumulative Culture and Future Thinking: Is Mental Time Travel a Prerequisite to Cumulative Cultural Evolution?

    Science.gov (United States)

    Vale, G. L.; Flynn, E. G.; Kendal, R. L.

    2012-01-01

    Cumulative culture denotes the, arguably, human capacity to build on the cultural behaviors of one's predecessors, allowing increases in cultural complexity to occur such that many of our cultural artifacts, products and technologies have progressed beyond what a single individual could invent alone. This process of cumulative cultural evolution…

  5. EXAFS cumulants of CdSe

    International Nuclear Information System (INIS)

    Diop, D.

    1997-04-01

    EXAFS functions were extracted from measurements on the K edge of Se at different temperatures between 20 and 300 K. The analysis of the EXAFS of the filtered first two shells has been done in the wavevector range lying between 2 and 15.5 Å^-1 in terms of the cumulants of the effective distribution of distances. The cumulants C3 and C4 obtained from the phase-difference and amplitude-ratio methods have shown the anharmonicity in the vibrations of atoms around their equilibrium positions. (author). 13 refs, 3 figs

  6. Cumulative effects of wind turbines. A guide to assessing the cumulative effects of wind energy development

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    This guidance provides advice on how to assess the cumulative effects of wind energy developments in an area and is aimed at developers, planners, and stakeholders interested in the development of wind energy in the UK. The principles of cumulative assessment, wind energy development in the UK, cumulative assessment of wind energy development, and best practice conclusions are discussed. The identification and assessment of the cumulative effects is examined in terms of global environmental sustainability, local environmental quality and socio-economic activity. Supplementary guidance for assessing the principle cumulative effects on the landscape, on birds, and on the visual effect is provided. The consensus building approach behind the preparation of this guidance is outlined in the annexes of the report.

  7. Nowcasting Earthquakes and Tsunamis

    Science.gov (United States)

    Rundle, J. B.; Turcotte, D. L.

    2017-12-01

    The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk

  8. Crustal stress evolution of last 700 years in North China and earthquake occurrences

    Science.gov (United States)

    Wan, Y.; Shen, Z.; Gan, W.; Li, T.; Zeng, Y.

    2004-12-01

    We simulate the evolution process of cumulative Coulomb failure stress change (ΔCFS) in North China since 1303, as driven by secular tectonic stress loading and the occurrence of large earthquakes. Secular tectonic stress loading is averaged from crustal strain rates derived from GPS. Fault rupture parameters of historical earthquakes are estimated as follows: the earthquake rupture length and the amount of slip are derived from their statistical relationships with the earthquake intensity distribution and magnitude, calibrated using parameters of instrumentally measured contemporary earthquakes. The earthquake rake angle is derived from geologically determined fault setting parameters and the seismically estimated orientation of regional tectonic stresses. Assuming a layered visco-elastic medium, we calculate the stress evolution resulting from secular tectonic loading and coseismic and postseismic deformation. 49 M ≥ 6.5 earthquakes have occurred in North China since 1303. Statistics show that 39 out of the 48 subsequent events were triggered by positive ΔCFS, yielding a triggering rate of 81.3%. The triggering rate for M ≥ 5 earthquakes after the 1976 Tangshan earthquake is 82.1%. The triggering rate is up to 90% if corrections are made for some aftershocks which were wrongly identified as occurring in stress shadow zones because of errors in parameter estimates of historical earthquakes. Our study shows a very high correlation between positive ΔCFS and earthquake occurrence. Relatively high ΔCFS in North China at present is concentrated around the Bohai Sea, the west segment of the Northern Qinling fault, the western end of the Zhangjiakou-Bohai seismic zone, and the Taiyuan basin in the Shanxi rift zone, suggesting relatively higher earthquake potential in these areas.
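
    The Coulomb stress change itself is a short, standard formula; a minimal sketch with illustrative values (the effective friction coefficient and stress values below are assumptions, not the study's):

      # Coulomb failure stress change on a receiver fault:
      # dCFS = d_tau + mu_eff * d_sigma_n, where d_tau is the shear stress
      # change resolved in the slip direction and d_sigma_n is positive
      # for unclamping (reduced normal compression).
      def delta_cfs(d_tau: float, d_sigma_n: float, mu_eff: float = 0.4) -> float:
          return d_tau + mu_eff * d_sigma_n

      # Positive dCFS moves the receiver fault toward failure; the MPa
      # values here are illustrative only.
      print(delta_cfs(0.05, -0.02))  # 0.042 MPa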

  9. Indoor radon and earthquake

    International Nuclear Information System (INIS)

    Saghatelyan, E.; Petrosyan, L.; Aghbalyan, Yu.; Baburyan, M.; Araratyan, L.

    2004-01-01

    For the first time, on the basis of the experience of the Spitak earthquake (Armenia, December 1988), it was found that an earthquake causes intensive and prolonged radon splashes which, rapidly dispersing in the open space of the near-earth atmosphere, are contrastingly displayed in covered premises (dwellings, schools, kindergartens) even if these are at a considerable distance from the earthquake epicenter, and this multiplies the radiation influence on the population. The interval of splashes spans the period from the first foreshock to the last aftershock, i.e. several months. The area affected by radiation is larger than Armenia's territory. The scale of this impact on the population is 12 times higher than the number of people injured in Spitak, Leninakan and other settlements (toll of injured - 25,000 people; radiation-induced diseases - over 300,000). The influence of radiation directly correlates with the earthquake force. Such a conclusion is underpinned by indoor radon monitoring data for Yerevan (120 km from the epicenter) since 1987 (5450 measurements) and by multivariate analysis identifying cause-and-effect linkages between the geodynamics of indoor radon under stable and unstable conditions of the Earth's crust, the behavior of radon in different geological media during earthquakes, levels of room radon concentrations and effective equivalent dose, the impact of the radiation dose on health, and statistical data on public health provided by the Ministry of Health. The following hitherto unexplained facts can be considered consequences of prolonged radiation influence on the human organism: the long-lasting state of apathy and indifference typical of the population of Armenia during the period of more than a year after the earthquake, the prevalence of malignant cancer forms in disaster zones, dominating lung cancer, and so on. All urban territories of seismically active regions are exposed to the threat of natural earthquake-provoked radiation influence

  10. Multiparty correlation measure based on the cumulant

    International Nuclear Information System (INIS)

    Zhou, D. L.; Zeng, B.; Xu, Z.; You, L.

    2006-01-01

    We propose a genuine multiparty correlation measure for a multiparty quantum system as the trace norm of the cumulant of the state. The legitimacy of our multiparty correlation measure is explicitly demonstrated by proving it satisfies the five basic conditions required for a correlation measure. As an application we construct an efficient algorithm for the calculation of our measures for all stabilizer states

  11. Decision analysis with cumulative prospect theory.

    Science.gov (United States)

    Bayoumi, A M; Redelmeier, D A

    2000-01-01

    Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
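
    A minimal sketch of the probability transformation at the heart of this approach, using the Tversky-Kahneman (1992) weighting function (the gamma value is the commonly cited estimate for gains, used here for illustration, not a value from the study):

      # Cumulative prospect theory: decision weights come from differences
      # of a weighting function applied to *cumulative* probabilities,
      # not from transforming single-event probabilities.
      def tk_weight(p: float, gamma: float = 0.61) -> float:
          return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

      probs = [0.1, 0.6, 0.3]  # ranked outcomes: best, middle, worst
      cum = [sum(probs[:i + 1]) for i in range(len(probs))]
      weights = [tk_weight(cum[0])] + [
          tk_weight(cum[i]) - tk_weight(cum[i - 1]) for i in range(1, len(cum))
      ]
      print(weights)  # small probabilities are overweighted vs. probs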

  12. Cumulative watershed effects: a research perspective

    Science.gov (United States)

    Leslie M. Reid; Robert R. Ziemer

    1989-01-01

    A cumulative watershed effect (CWE) is any response to multiple land-use activities that is caused by, or results in, altered watershed function. The CWE issue is politically defined, as is the significance of particular impacts. But the processes generating CWEs are the traditional focus of geomorphology and ecology, and have thus been studied for decades. The CWE...

  13. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study higher statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
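
    As a small illustration of the moment comparison described above (the mean and overdispersion below are hypothetical, not fitted to the catalogues):

      # Skewness and excess kurtosis: Poisson vs negative binomial (NBD)
      # at the same mean; the NBD's second parameter adds overdispersion.
      from scipy import stats

      mean = 10.0
      skew_p, kurt_p = stats.poisson.stats(mean, moments="sk")

      # nbinom(n, p) has mean n*(1-p)/p; choose p so the mean matches.
      n, p = 5.0, 5.0 / (5.0 + mean)  # mean 10, variance 30 (overdispersed)
      skew_nb, kurt_nb = stats.nbinom.stats(n, p, moments="sk")

      print(f"Poisson: skew={skew_p:.3f}, excess kurtosis={kurt_p:.3f}")
      print(f"NBD:     skew={skew_nb:.3f}, excess kurtosis={kurt_nb:.3f}")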

  14. An evaluation paradigm for cumulative impact analysis

    Science.gov (United States)

    Stakhiv, Eugene Z.

    1988-09-01

    Cumulative impact analysis is examined from a conceptual decision-making perspective, focusing on its implicit and explicit purposes as suggested within the policy and procedures for environmental impact analysis of the National Environmental Policy Act of 1969 (NEPA) and its implementing regulations. In this article it is also linked to different evaluation and decision-making conventions, contrasting a regulatory context with a comprehensive planning framework. The specific problems that make the application of cumulative impact analysis a virtually intractable evaluation requirement are discussed in connection with the federal regulation of wetlands uses. The relatively familiar US Army Corps of Engineers' (the Corps) permit program, in conjunction with the Environmental Protection Agency's (EPA) responsibilities in managing its share of the Section 404 regulatory program requirements, is used throughout as the realistic context for highlighting certain pragmatic evaluation aspects of cumulative impact assessment. To understand the purposes of cumulative impact analysis (CIA), a key distinction must be made between the implied comprehensive and multiobjective evaluation purposes of CIA, promoted through the principles and policies contained in NEPA, and the more commonly conducted and limited assessment of cumulative effects (ACE), which focuses largely on the ecological effects of human actions. Based on current evaluation practices within the Corps' and EPA's permit programs, it is shown that the commonly used screening approach to regulating wetlands uses is not compatible with the purposes of CIA, nor is the environmental impact statement (EIS) an appropriate vehicle for evaluating the variety of objectives and trade-offs needed as part of CIA. A heuristic model that incorporates the basic elements of CIA is developed, including the idea of trade-offs among social, economic, and environmental protection goals carried out within the context of environmental

  15. Rupture, waves and earthquakes.

    Science.gov (United States)

    Uenishi, Koji

    2017-01-01

    Normally, an earthquake is considered as a phenomenon of wave energy radiation by rupture (fracture) of the solid Earth. However, the physics of the dynamic process around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, has not been fully understood yet. Instead, much former investigation in seismology evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components over 1 Hz. There are countless valuable research outcomes obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain by this conventional description have been found, for instance, on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake, and more detailed study of rupture and wave dynamics, namely, the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures and (3) wave interaction that connects rupture (1) and failures (2), would be indispensable.

  16. Earthquakes and Earthquake Engineering. LC Science Tracer Bullet.

    Science.gov (United States)

    Buydos, John F., Comp.

    An earthquake is a shaking of the ground resulting from a disturbance in the earth's interior. Seismology is (1) the study of earthquakes; (2) the origin, propagation, and energy of seismic phenomena; (3) the prediction of these phenomena; and (4) the investigation of the structure of the earth. Earthquake engineering or engineering seismology includes the…

  17. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data

  18. Earthquakes; May-June 1982

    Science.gov (United States)

    Person, W.J.

    1982-01-01

    There were four major earthquakes (7.0-7.9) during this reporting period: two struck in Mexico, one in El Salvador, and one in the Kuril Islands. Mexico, El Salvador, and China experienced fatalities from earthquakes.

  19. Aftershock Characteristics as a Means of Discriminating Explosions from Earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ford, S R; Walter, W R

    2009-05-20

    The behavior of aftershock sequences around the Nevada Test Site in the southern Great Basin is characterized as a potential discriminant between explosions and earthquakes. The aftershock model designed by Reasenberg and Jones (1989, 1994) allows for a probabilistic statement of earthquake-like aftershock behavior at any time after the mainshock. We use this model to define two types of aftershock discriminants. The first defines M_X, the minimum magnitude of an aftershock expected within a given duration after the mainshock with probability X. Of the 67 earthquakes with M > 4 in the study region, 63 produce an aftershock greater than M_99 within the first seven days after the mainshock. This is contrasted with only six of 93 explosions with M > 4 that produce an aftershock greater than M_99 for the same period. If the aftershock magnitude threshold is lowered and the M_90 criterion is used, then no explosions produce an aftershock greater than M_90 for durations that end more than 17 days after the mainshock. The other discriminant defines N_X, the minimum cumulative number of aftershocks expected for a given time after the mainshock with probability X. Similar to the aftershock magnitude discriminant, five earthquakes do not produce more aftershocks than N_99 within 7 days after the mainshock. However, within the same period all but one explosion produce fewer aftershocks than N_99. One explosion is added if the duration is shortened to two days after the mainshock. The cumulative number aftershock discriminant is more reliable, especially at short durations, but requires a low magnitude of completeness for the given earthquake catalog. These results at NTS are quite promising and should be evaluated at other nuclear test sites to understand the effects of differences in geologic setting and nuclear testing practices on its performance.
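
    A minimal sketch of the Reasenberg-Jones rate model behind these discriminants; the parameter values below are the published "generic California" values, used here purely for illustration rather than the NTS-specific fits:

      # Reasenberg-Jones aftershock rate: lambda(t, m) aftershocks per day
      # of magnitude >= m, t days after a mainshock of magnitude m_main.
      import math

      def rj_rate(t, a, b, p, c, m_main, m):
          return 10 ** (a + b * (m_main - m)) / (t + c) ** p

      def expected_count(t1, t2, a, b, p, c, m_main, m, steps=10_000):
          dt = (t2 - t1) / steps  # midpoint-rule numerical integration
          return sum(rj_rate(t1 + (i + 0.5) * dt, a, b, p, c, m_main, m) * dt
                     for i in range(steps))

      # P(at least one aftershock >= m in [t1, t2]) for a Poisson process:
      n = expected_count(0.01, 7.0, a=-1.67, b=0.91, p=1.08, c=0.05,
                         m_main=5.5, m=3.0)
      print(1.0 - math.exp(-n))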

  20. Uncertainty analysis technique of dynamic response and cumulative damage properties of piping system

    International Nuclear Information System (INIS)

    Suzuki, Kohei; Aoki, Shigeru; Hara, Fumio; Hanaoka, Masaaki; Yamashita, Tadashi.

    1982-01-01

    It is a technologically important subject to establish a method of uncertainty analysis that statistically examines the variation of the earthquake response and damage properties of equipment and piping systems due to changes in the input load and in the parameters of the structural system, for evaluating the aseismic capability and dynamic structural reliability of these systems. The uncertainty in the response and damage properties when equipment and piping systems are subjected to excessive vibration load depends mainly on the irregularity of the acting input load, such as the unsteady vibration of earthquakes, and on structural uncertainty in forms and dimensions. This study is a basic step toward establishing a method for evaluating the uncertainty in the cumulative damage property at the time of resonant vibration of a piping system due to the dispersion of structural parameters, using a simple model. First, piping models of simple form were broken by resonant vibration, and the uncertainty in the cumulative damage property was evaluated. Next, a response analysis using an elasto-plastic mechanics model was performed by numerical simulation. Finally, a method of uncertainty analysis for response and damage properties by the perturbation method utilizing equivalent linearization was proposed, and its validity was demonstrated. (Kako, I.)

  1. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    Science.gov (United States)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_λ ≥ 7.0 and M_λ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_σ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_λ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_λ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_λ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_λ ≥ 7.0 in each catalog and
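
    A minimal sketch of the distribution-fitting step described above (the interevent counts below are hypothetical; the authors fit counts derived from the global CMT catalog):

      # Fit a Weibull shape exponent (beta) to natural-time interevent
      # counts; beta < 1 suggests temporal clustering, beta = 1 a random
      # (exponential) sequence. Counts here are hypothetical.
      from scipy import stats

      interevent_counts = [12, 85, 3, 40, 160, 22, 7, 55, 98, 15, 230, 31]

      # Two-parameter Weibull: pin the location at zero for count data.
      beta, loc, scale = stats.weibull_min.fit(interevent_counts, floc=0)
      print(f"Weibull shape beta = {beta:.2f}")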

  2. Sensing the earthquake

    Science.gov (United States)

    Bichisao, Marta; Stallone, Angela

    2017-04-01

    Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of the scientific content, by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what are the principles underlying that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" the earthquakes. We try to implement these results into a choreographic model with the aim to convert earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience into a multisensory exploration of the earthquake phenomenon, through the stimulation of the hearing, eyesight and perception of the movements (neuromotor system). In essence, the main goal of this work is to develop a method for a simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.

  3. Turkish Children's Ideas about Earthquakes

    Science.gov (United States)

    Simsek, Canan Lacin

    2007-01-01

    Earthquakes, a type of natural disaster, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced to that extent. In particular, good training about earthquakes received in primary schools is considered…

  4. Earthquakes, May-June 1991

    Science.gov (United States)

    Person, W.J.

    1992-01-01

    One major earthquake occurred during this reporting period. This was a magnitude 7.1 earthquake in Indonesia (Minahassa Peninsula) on June 20. Earthquake-related deaths were reported in the Western Caucasus (Georgia, USSR) on May 3 and June 15. One earthquake-related death was also reported in El Salvador on June 21.

  5. Organizational changes at Earthquakes & Volcanoes

    Science.gov (United States)

    Gordon, David W.

    1992-01-01

    Primary responsibility for the preparation of Earthquakes & Volcanoes within the Geological Survey has shifted from the Office of Scientific Publications to the Office of Earthquakes, Volcanoes, and Engineering (OEVE). As a consequence of this reorganization, Henry Spall has stepped down as Science Editor for Earthquakes & Volcanoes (E&V).

  6. The 1976 Tangshan earthquake

    Science.gov (United States)

    Fang, Wang

    1979-01-01

    The Tangshan earthquake of 1976 was one of the largest earthquakes in recent years. It occurred on July 28 at 3:42 a.m., Beijing (Peking) local time, and had a magnitude of 7.8, a focal depth of 15 kilometers, and an epicentral intensity of XI on the New Chinese Seismic Intensity Scale; it caused serious damage and loss of life in this densely populated industrial city. Now, with the help of people from all over China, the city of Tangshan is being rebuilt.

  7. [Earthquakes in El Salvador].

    Science.gov (United States)

    de Ville de Goyet, C

    2001-02-01

    The Pan American Health Organization (PAHO) has 25 years of experience dealing with major natural disasters. This piece provides a preliminary review of the events taking place in the weeks following the major earthquakes in El Salvador on 13 January and 13 February 2001. It also describes the lessons that have been learned over the last 25 years and the impact that the El Salvador earthquakes and other disasters have had on the health of the affected populations. Topics covered include mass-casualties management, communicable diseases, water supply, managing donations and international assistance, damages to the health-facilities infrastructure, mental health, and PAHO's role in disasters.

  8. Frictional heating processes during laboratory earthquakes

    Science.gov (United States)

    Aubry, J.; Passelegue, F. X.; Deldicque, D.; Lahfid, A.; Girault, F.; Pinquier, Y.; Escartin, J.; Schubnel, A.

    2017-12-01

    Frictional heating during seismic slip plays a crucial role in the dynamics of earthquakes because it controls fault weakening. This study proposes (i) to image frictional heating by combining an in-situ carbon thermometer and Raman microspectrometric mapping, (ii) to combine these observations with fault surface roughness and heat production, and (iii) to estimate the mechanical energy dissipated during laboratory earthquakes. Laboratory earthquakes were performed in a triaxial oil loading press, at 45, 90 and 180 MPa of confining pressure, using saw-cut samples of Westerly granite. The initial topography of the fault surface was +/- 30 microns. We use a carbon layer as a local temperature tracer on the fault plane and a type K thermocouple to measure temperature approximately 6 mm away from the fault surface. The thermocouple measures the bulk temperature of the fault plane, while the in-situ carbon thermometer images the heterogeneity of heat production at the micro-scale. Raman microspectrometry on the amorphous carbon patch allowed mapping the temperature heterogeneities on the fault surface after sliding, overlaid to within a few micrometers on the final fault roughness. The maximum temperature achieved during laboratory earthquakes remains high for all experiments but generally increases with the confining pressure. In addition, the melted surface area of the fault during seismic slip increases drastically with confining pressure. While melting is systematically observed, the strength drop increases with confining pressure. These results suggest that the dynamic friction coefficient is a function of the area of the fault melted during stick-slip. Using the thermocouple, we inverted the heat dissipated during each event. We show that for rough faults under low confining pressure, less than 20% of the total mechanical work is dissipated into heat. The ratio of frictional heating to total mechanical work decreases with cumulated slip (i.e., number of events), and decreases with
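
    As a back-of-the-envelope companion to the energy partition discussed above, the sketch below compares the mechanical work dissipated on a laboratory fault with the heat inferred from it; all numerical values are illustrative assumptions, not measurements from these experiments.

      # Mechanical work per unit fault area for one stick-slip event, W = mu * sigma_n * d,
      # and the portion dissipated as heat. All values are placeholders.
      mu = 0.6            # friction coefficient (dimensionless), assumed
      sigma_n = 90e6      # normal stress, Pa (order of the 90 MPa confinement)
      slip = 100e-6       # coseismic slip per event, m, assumed

      work_per_area = mu * sigma_n * slip     # J/m^2 of mechanical work
      heat_fraction = 0.2                     # "<20% dissipated into heat" at low confinement
      heat_per_area = heat_fraction * work_per_area

      print(f"mechanical work: {work_per_area:.0f} J/m^2, heat: {heat_per_area:.0f} J/m^2")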

  9. Sharing a quota on cumulative carbon emissions

    International Nuclear Information System (INIS)

    Raupach, Michael R.; Davis, Steven J.; Peters, Glen P.; Andrew, Robbie M.; Canadell, Josep G.; Ciais, Philippe

    2014-01-01

    Any limit on future global warming is associated with a quota on cumulative global CO2 emissions. We translate this global carbon quota to regional and national scales, on a spectrum of sharing principles that extends from continuation of the present distribution of emissions to an equal per-capita distribution of cumulative emissions. A blend of these endpoints emerges as the most viable option. For a carbon quota consistent with a 2 °C warming limit (relative to pre-industrial levels), the necessary long-term mitigation rates are very challenging (typically over 5% per year), both because of strong limits on future emissions from the global carbon quota and because of the likely short-term persistence in emissions growth in many regions. (authors)
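
    A minimal sketch of the sharing spectrum described above: a national quota interpolated between continuation of present emission shares and equal per-capita shares. The function and all figures are hypothetical illustrations, not the paper's allocations.

      def national_quota(global_quota_gt, emission_share, population_share, w):
          """w = 1: continuation of the present distribution of emissions;
          w = 0: equal per-capita distribution of cumulative emissions."""
          return global_quota_gt * (w * emission_share + (1 - w) * population_share)

      # Example: a 1000 GtCO2 global quota and a country with 15% of current
      # emissions but 5% of world population, blended halfway between endpoints.
      print(national_quota(1000.0, 0.15, 0.05, w=0.5))   # -> 100.0 GtCO2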

  10. Complexity and demographic explanations of cumulative culture.

    Science.gov (United States)

    Querbes, Adrien; Vaesen, Krist; Houkes, Wybo

    2014-01-01

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture in case formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence doesn't afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  11. Complexity and demographic explanations of cumulative culture.

    Directory of Open Access Journals (Sweden)

    Adrien Querbes

    Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture in case formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence doesn't afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.

  12. Conceptual models for cumulative risk assessment.

    Science.gov (United States)

    Linder, Stephen H; Sexton, Ken

    2011-12-01

    In the absence of scientific consensus on an appropriate theoretical framework, cumulative risk assessment and related research have relied on speculative conceptual models. We argue for the importance of theoretical backing for such models and discuss 3 relevant theoretical frameworks, each supporting a distinctive "family" of models. Social determinant models postulate that unequal health outcomes are caused by structural inequalities; health disparity models envision social and contextual factors acting through individual behaviors and biological mechanisms; and multiple stressor models incorporate environmental agents, emphasizing the intermediary role of these and other stressors. The conclusion is that more careful reliance on established frameworks will lead directly to improvements in characterizing cumulative risk burdens and accounting for disproportionate adverse health effects.

  13. Childhood Cumulative Risk and Later Allostatic Load

    DEFF Research Database (Denmark)

    Doan, Stacey N; Dich, Nadya; Evans, Gary W

    2014-01-01

    Objective: The present study investigated the long-term impact of exposure to poverty-related stressors during childhood on allostatic load, an index of physiological dysregulation, and the potential mediating role of substance use. Method: Participants (n = 162) were rural children from New York State, followed for 8 years (between the ages 9 and 17). Poverty-related stress was computed using the cumulative risk approach, assessing stressors across 9 domains, including environmental, psychosocial, and demographic factors. Allostatic load captured a range of physiological responses, including cardiovascular, hypothalamic pituitary adrenal axis, sympathetic adrenal medullary system, and metabolic activity. Smoking and alcohol/drug use were tested as mediators of the hypothesized childhood risk-adolescent allostatic load relationship. Results: Cumulative risk exposure at age 9 predicted increases...

  14. Earthquake Culture: A Significant Element in Earthquake Disaster Risk Assessment and Earthquake Disaster Risk Management

    OpenAIRE

    Ibrion, Mihaela

    2018-01-01

    This book chapter brings to attention the dramatic impact of large earthquake disasters on local communities and society and highlights the necessity of building and enhancing an earthquake culture. Iran was taken as the research case study, and fifteen large earthquake disasters in Iran were investigated and analyzed over a period of more than a century. It was found that the earthquake culture in Iran was and still is conditioned by many factors or parameters which are not integrated and...

  15. The mechanism of earthquake

    Science.gov (United States)

    Lu, Kunquan; Cao, Zexian; Hou, Meiying; Jiang, Zehui; Shen, Rong; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

    The physical mechanism of earthquakes remains a challenging issue to be clarified. Seismologists used to attribute shallow earthquakes to the elastic rebound of crustal rocks. The seismic energy calculated following the elastic rebound theory and with data from experiments on rocks, however, shows a large discrepancy with measurement — a fact that has been dubbed "the heat flow paradox". For the intermediate-focus and deep-focus earthquakes, both occurring in the region of the mantle, there is no reasonable explanation either. This paper will discuss the physical mechanism of earthquakes from a new perspective, starting from the fact that both the crust and the mantle are discrete collective systems of matter with slow dynamics, as well as from the basic principles of physics, especially some new concepts of condensed matter physics that have emerged in recent years. (1) Stress distribution in the earth's crust: Without taking the tectonic force into account, according to the rheological principle that "everything flows", the normal stress and transverse stress must be balanced due to the effect of gravitational pressure over a long period of time, thus no differential stress in the original crustal rocks is to be expected. The tectonic force is successively transferred and accumulated via stick-slip motions of rock blocks to squeeze the fault gouge and then exerted upon other rock blocks. The superposition of such additional lateral tectonic force and the original stress gives rise to the real-time stress in crustal rocks. The mechanical characteristics of fault gouge differ from those of rocks, as it consists of granular matter. The elastic moduli of fault gouges are much less than those of rocks, and they become larger with increasing pressure. This peculiarity of the fault gouge leads to a tectonic force increasing with depth in a nonlinear fashion. The distribution and variation of the tectonic stress in the crust are specified. (2) The

  16. Fuzzy set theory for cumulative trauma prediction

    OpenAIRE

    Fonseca, Daniel J.; Merritt, Thomas W.; Moynihan, Gary P.

    2001-01-01

    A widely used fuzzy reasoning algorithm was modified and implemented via an expert system to assess the potential risk of employee repetitive strain injury in the workplace. This fuzzy relational model, known as the Priority First Cover Algorithm (PFC), was adapted to describe the relationship between 12 cumulative trauma disorders (CTDs) of the upper extremity, and 29 identified risk factors. The algorithm, which finds a suboptimal subset from a group of variables based on the criterion of...
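
    The record above describes a fuzzy relational model linking risk factors to disorders. As a generic illustration (not the PFC algorithm itself, whose details are not given here), the sketch below applies the standard max-min composition of fuzzy set theory to a hypothetical factor-disorder relation.

      import numpy as np

      # Hypothetical membership matrix R[i, j]: degree to which risk factor i
      # is associated with cumulative trauma disorder j.
      R = np.array([[0.8, 0.2],
                    [0.4, 0.9]])
      x = np.array([0.7, 0.5])    # observed degree of each risk factor in a workplace

      # Standard max-min fuzzy composition: y_j = max_i min(x_i, R[i, j])
      y = np.max(np.minimum(x[:, None], R), axis=0)
      print(y)    # fuzzy degree of risk for each disorder, here [0.7, 0.5]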

  17. Sikap Kerja Duduk Terhadap Cumulative Trauma Disorder

    OpenAIRE

    Rahmawati, Yulita; Sugiharto, -

    2011-01-01

    The problem studied was whether there is a relationship between seated working posture and the incidence of Cumulative Trauma Disorder (CTD) among workers in the sanding department at PT. Geromar Jepara. The aim was to determine the relationship between seated working posture and the incidence of CTD among sanding workers. This explanatory study used a cross-sectional approach. The population consisted of the 30 workers of the sanding department. The sampling technique ...

  18. Power Reactor Docket Information. Annual cumulation (citations)

    International Nuclear Information System (INIS)

    1977-12-01

    An annual cumulation of the citations to the documentation associated with civilian nuclear power plants is presented. This material is that which is submitted to the U.S. Nuclear Regulatory Commission in support of applications for construction and operating licenses. Citations are listed by Docket number in accession number sequence. The Table of Contents is arranged both by Docket number and by nuclear power plant name

  19. Cumulative Effect of Depression on Dementia Risk

    OpenAIRE

    Olazarán, J.; Trincado, R.; Bermejo-Pareja, F.

    2013-01-01

    Objective. To analyze a potential cumulative effect of life-time depression on dementia and Alzheimer's disease (AD), with control of vascular factors (VFs). Methods. This study was a subanalysis of the Neurological Disorders in Central Spain (NEDICES) study. Past and present depression, VFs, dementia status, and dementia due to AD were documented at study inception. Dementia status was also documented after three years. Four groups were created according to baseline data: never depression (n...

  20. Cumulative release to the accessible environment

    International Nuclear Information System (INIS)

    Kanehiro, B.

    1985-01-01

    The Containment and Isolation Working Group considered issues related to the postclosure behavior of repositories in crystalline rock. This working group was further divided into subgroups to consider the progress since the 1978 GAIN Symposium and identify research needs in the individual areas of regional ground-water flow, ground-water travel time, fractional release, and cumulative release. The analysis and findings of the Fractional Release Subgroup are presented

  1. Repetition of large stress drop earthquakes on Wairarapa fault, New Zealand, revealed by LiDAR data

    Science.gov (United States)

    Delor, E.; Manighetti, I.; Garambois, S.; Beaupretre, S.; Vitard, C.

    2013-12-01

    We have acquired high-resolution LiDAR topographic data over most of the onland trace of the 120 km-long Wairarapa strike-slip fault, New Zealand. The Wairarapa fault broke in a large earthquake in 1855, and this historical earthquake is suggested to have produced up to 18 m of lateral slip at the ground surface. This would make the 1855 event remarkable, having produced a stress drop much higher than commonly observed for other earthquakes worldwide. The LiDAR data allowed us to examine the ground surface morphology along the fault. A statistical analysis of the cumulative offsets per segment reveals that the alluvial morphology has well recorded, at every step along the fault, no more than a few (3-6) well-distinct cumulative slip values, all lower than 80 m. Plotted along the entire fault, the statistically defined cumulative slip values document four fairly continuous slip profiles that we attribute to the four most recent large earthquakes on the Wairarapa fault. The four slip profiles have a roughly triangular and asymmetric envelope shape that is similar to the coseismic slip distributions described for most large earthquakes worldwide. The four slip profiles have their maximum slip at the same place, in the northeastern third of the fault trace. The maximum slips vary from one event to another in the range 7-15 m; the most recent, 1855 earthquake produced a maximum coseismic slip of 15 ± 2 m at the ground surface. Our results thus confirm that the Wairarapa fault breaks in remarkably large stress drop earthquakes. Those repeating large earthquakes share both similar (rupture length, slip-length distribution, location of maximum slip) and distinct (maximum slip amplitudes) characteristics. Furthermore, the seismic behavior of the Wairarapa fault is markedly different from that of nearby large strike-slip faults (Wellington, Hope). The reasons for those differences in rupture behavior might reside in the intrinsic properties of the broken faults, especially

  2. EPA Workshop on Epigenetics and Cumulative Risk ...

    Science.gov (United States)

    The workshop included presentations and discussions by scientific experts pertaining to three topics (i.e., epigenetic changes associated with diverse stressors, key science considerations in understanding epigenetic changes, and practical application of epigenetic tools to address cumulative risks from environmental stressors), addressed several questions under each topic, and included an opportunity for attendees to participate in break-out groups, provide comments, and ask questions. The workshop sought to examine the opportunity for use of aggregate epigenetic change as an indicator in cumulative risk assessment for populations exposed to multiple stressors that affect epigenetic status. Epigenetic changes are specific molecular changes around DNA that alter the expression of genes; they include DNA methylation, formation of histone adducts, and changes in microRNAs. Research today indicates that epigenetic changes are involved in many chronic diseases (cancer, cardiovascular disease, obesity, diabetes, mental health disorders, and asthma). Research has also linked a wide range of stressors, including pollution and social factors, with the occurrence of epigenetic alterations. Epigenetic changes have the potential to reflect impacts of risk factors across multiple stages of life. Only recently receiving attention is the nexus between the factors of cumulative exposure to environmental

  3. Higher order cumulants in colorless partonic plasma

    Energy Technology Data Exchange (ETDEWEB)

    Cherif, S. [Sciences and Technologies Department, University of Ghardaia, Ghardaia, Algiers (Algeria); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria); Ahmed, M. A. A. [Department of Physics, College of Science, Taibah University Al-Madinah Al-Mounawwarah KSA (Saudi Arabia); Department of Physics, Taiz University in Turba, Taiz (Yemen); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria); Ladrem, M., E-mail: mladrem@yahoo.fr [Department of Physics, College of Science, Taibah University Al-Madinah Al-Mounawwarah KSA (Saudi Arabia); Laboratoire de Physique et de Mathématiques Appliquées (LPMA), ENS-Kouba (Bachir El-Ibrahimi), Algiers (Algeria)

    2016-06-10

    Any physical system considered to study the QCD deconfinement phase transition certainly has a finite volume, so finite-size effects are inevitably present. This renders the location of the phase transition and the determination of its order an extremely difficult task, even in the simplest known cases. In order to identify and locate the colorless QCD deconfinement transition point in finite volume, T_0(V), a new approach based on the finite-size cumulant expansion of the order parameter and the ℒ_{m,n}-method is used. We have shown that both the higher-order cumulants and their ratios, associated with the thermodynamical fluctuations of the order parameter, behave in a distinctive way across the QCD deconfinement phase transition, revealing pronounced oscillations in the transition region. The sign structure and the oscillatory behavior of these quantities in the vicinity of the deconfinement phase transition point might be a sensitive probe and may allow one to elucidate their relation to the QCD phase transition point. In the context of our model, we have shown that the finite-volume transition point is always associated with the appearance of a particular point in all higher-order cumulants under consideration.
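
    For readers unfamiliar with cumulant ratios, the sketch below estimates higher-order cumulants of an order-parameter sample with unbiased k-statistics; the Gaussian sample is a stand-in, not the model of the paper.

      import numpy as np
      from scipy.stats import kstat

      # Draw a stand-in sample of order-parameter values near a pseudo-critical point.
      rng = np.random.default_rng(0)
      phi = rng.normal(loc=0.3, scale=0.05, size=10_000)

      # Unbiased estimates of the 2nd-4th cumulants and a higher-order ratio.
      k2, k3, k4 = (kstat(phi, n) for n in (2, 3, 4))
      print(f"kappa2 = {k2:.2e}, kappa3 = {k3:.2e}, kappa4 = {k4:.2e}")
      print(f"kappa4 / kappa2 = {k4 / k2:.2e}")   # ratios of this kind oscillate near T_0(V)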

  4. Cumulative irritation potential of topical retinoid formulations.

    Science.gov (United States)

    Leyden, James J; Grossman, Rachel; Nighland, Marge

    2008-08-01

    Localized irritation can limit treatment success with topical retinoids such as tretinoin and adapalene. The factors that influence irritant reactions have been shown to include individual skin sensitivity, the particular retinoid and concentration used, and the vehicle formulation. To compare the cutaneous tolerability of tretinoin 0.04% microsphere gel (TMG) with that of adapalene 0.3% gel and a standard tretinoin 0.025% cream. The results of 2 randomized, investigator-blinded studies of 2 to 3 weeks' duration, which utilized a split-face method to compare cumulative irritation scores induced by topical retinoids in subjects with healthy skin, were combined. Study 1 compared TMG 0.04% with adapalene 0.3% gel over 2 weeks, while study 2 compared TMG 0.04% with tretinoin 0.025% cream over 3 weeks. In study 1, TMG 0.04% was associated with significantly lower cumulative scores for erythema, dryness, and burning/stinging than adapalene 0.3% gel. However, in study 2, there were no significant differences in cumulative irritation scores between TMG 0.04% and tretinoin 0.025% cream. Measurements of erythema by a chromameter showed no significant differences between the test formulations in either study. Cutaneous tolerance of TMG 0.04% on the face was superior to that of adapalene 0.3% gel and similar to that of a standard tretinoin cream containing a lower concentration of the drug (0.025%).

  5. Rapid estimation of the economic consequences of global earthquakes

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2011-01-01

    to reduce this time gap to more rapidly and effectively mobilize response. We present here a procedure to rapidly and approximately ascertain the economic impact immediately following a large earthquake anywhere in the world. In principle, the approach presented is similar to the empirical fatality estimation methodology proposed and implemented by Jaiswal and others (2009). In order to estimate economic losses, we need an assessment of the economic exposure at various levels of shaking intensity. The economic value of all the physical assets exposed at different locations in a given area is generally not known and extremely difficult to compile at a global scale. In the absence of such a dataset, we first estimate the total Gross Domestic Product (GDP) exposed at each shaking intensity by multiplying the per-capita GDP of the country by the total population exposed at that shaking intensity level. We then scale the total GDP estimated at each intensity by an exposure correction factor, which is a multiplying factor to account for the disparity between wealth and/or economic assets to the annual GDP. The economic exposure obtained using this procedure is thus a proxy estimate for the economic value of the actual inventory that is exposed to the earthquake. The economic loss ratio, defined in terms of a country-specific lognormal cumulative distribution function of shaking intensity, is derived and calibrated against the losses from past earthquakes. This report describes the development of a country or region-specific economic loss ratio model using economic loss data available for global earthquakes from 1980 to 2007. The proposed model is a potential candidate for directly estimating economic losses within the currently-operating PAGER system. PAGER's other loss models use indirect methods that require substantially more data (such as building/asset inventories, vulnerabilities, and the asset values exposed at the time of earthquake) to implement on a global basis
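
    A schematic version of the loss chain described above: exposure as population times per-capita GDP times a wealth correction factor, and a lognormal loss ratio in shaking intensity. The parameter values and functional details are illustrative placeholders, not PAGER's calibrated models.

      import numpy as np
      from scipy.stats import norm

      def economic_exposure(pop_by_intensity, gdp_per_capita, alpha):
          """Proxy economic value exposed at each shaking-intensity level."""
          return alpha * gdp_per_capita * np.asarray(pop_by_intensity, dtype=float)

      def loss_ratio(intensity, theta=8.0, beta=0.3):
          """Lognormal CDF of shaking intensity -> fraction of exposure lost."""
          return norm.cdf(np.log(np.asarray(intensity, dtype=float) / theta) / beta)

      mmi = np.array([6.0, 7.0, 8.0, 9.0])        # shaking intensity levels
      pop = np.array([2e6, 5e5, 1e5, 1e4])        # people exposed at each level
      exposure = economic_exposure(pop, gdp_per_capita=5_000, alpha=3.0)
      loss = np.sum(loss_ratio(mmi) * exposure)
      print(f"estimated economic loss: ${loss / 1e9:.2f} billion")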

  6. The EM Earthquake Precursor

    Science.gov (United States)

    Jones, K. B., II; Saxton, P. T.

    2013-12-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After the 1989 Loma Prieta Earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate to an identifiable precursor. Secondly, there is neither a networked array for finding any epicentral locations, nor have there been any attempts to find even one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have gained. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two

  7. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

    The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures to generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms, together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response, have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to be varying in time or constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra
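
    A minimal sketch of the sinusoid-superposition model just described: random-phase cosines shaped by an intensity envelope. Real spectrum-compatible generators such as SIMQKE iterate the amplitudes until the response spectrum matches the target; that iteration and all numerical values here are simplifying assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, duration = 0.01, 20.0                      # s
      t = np.arange(0.0, duration, dt)
      freqs = np.linspace(0.5, 25.0, 120)            # Hz, contributing sinusoids
      amps = np.ones_like(freqs)                     # flat amplitudes as a stand-in
      phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)

      # Superpose sinusoids with random phase angles.
      accel = np.sum(amps[:, None] * np.cos(2.0 * np.pi * freqs[:, None] * t
                                            + phases[:, None]), axis=0)

      # Shape intensity over time: 2 s ramp-up, strong phase, exponential decay.
      envelope = np.clip(t / 2.0, None, 1.0) * np.exp(-0.2 * np.maximum(t - 10.0, 0.0))
      accel *= envelope
      accel *= 0.2 * 9.81 / np.abs(accel).max()      # scale to a 0.2 g peak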

  8. The HayWired Earthquake Scenario—Earthquake Hazards

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude 6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  9. Long-term Postseismic Deformation Following the 1964 Alaska Earthquake

    Science.gov (United States)

    Freymueller, J. T.; Cohen, S. C.; Hreinsdöttir, S.; Suito, H.

    2003-12-01

    Geodetic data provide a rich data set describing the postseismic deformation that followed the 1964 Alaska earthquake (Mw 9.2). This is particularly true for vertical deformation, since tide gauges and leveling surveys provide extensive spatial coverage. Leveling was carried out over all of the major roads of Alaska in 1964-65, and over the last several years we have resurveyed an extensive data set using GPS. Along Turnagain Arm of Cook Inlet, south of Anchorage, a trench-normal profile was surveyed repeatedly over the first decade after the earthquake, and many of these sites have been surveyed with GPS. After using a geoid model to correct for the difference between geometric and orthometric heights, the leveling+GPS surveys reveal up to 1.25 meters of uplift since 1964. The largest uplifts are concentrated in the northern part of the Kenai Peninsula, SW of Turnagain Arm. In some places, steep gradients in the cumulative uplift measurements point to a very shallow source for the deformation. The average 1964-late 1990s uplift rates were substantially higher than the present-day uplift rates, which rarely exceed 10 mm/yr. Both leveling and tide gauge data document a decay in uplift rate over time as the postseismic signal decreases. However, even today the postseismic deformation represents a substantial portion of the total observed deformation signal, illustrating that very long-lived postseismic deformation is an important element of the subduction zone earthquake cycle for the very largest earthquakes. This is in contrast to much smaller events, such as M~8 earthquakes, for which postseismic deformation in many cases decays within a few years. This suggests that the very largest earthquakes may excite different processes than smaller events.

  10. Structure and composition of the plate-boundary slip zone for the 2011 Tohoku-Oki earthquake.

    Science.gov (United States)

    Chester, Frederick M; Rowe, Christie; Ujiie, Kohtaro; Kirkpatrick, James; Regalla, Christine; Remitti, Francesca; Moore, J Casey; Toy, Virginia; Wolfson-Schwehr, Monica; Bose, Santanu; Kameda, Jun; Mori, James J; Brodsky, Emily E; Eguchi, Nobuhisa; Toczko, Sean

    2013-12-06

    The mechanics of great subduction earthquakes are influenced by the frictional properties, structure, and composition of the plate-boundary fault. We present observations of the structure and composition of the shallow source fault of the 2011 Tohoku-Oki earthquake and tsunami from boreholes drilled by the Integrated Ocean Drilling Program Expedition 343 and 343T. Logging-while-drilling and core-sample observations show a single major plate-boundary fault accommodated the large slip of the Tohoku-Oki earthquake rupture, as well as nearly all the cumulative interplate motion at the drill site. The localization of deformation onto a limited thickness (less than 5 meters) of pelagic clay is the defining characteristic of the shallow earthquake fault, suggesting that the pelagic clay may be a regionally important control on tsunamigenic earthquakes.

  11. Surface slip during large Owens Valley earthquakes

    Science.gov (United States)

    Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.

    2016-01-01

    The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
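
    The PDF-stacking step described above can be illustrated with a small sketch: each offset measurement contributes a probability density (Gaussian here for simplicity, where the study derives cross-correlation-shaped PDFs), and the stacked curve's peaks mark shared single- and multi-event displacements. All measurement values below are illustrative, not the study's data.

      import numpy as np

      offsets = np.array([3.1, 3.4, 3.2, 7.0, 7.3, 12.6])   # offset estimates, m
      sigmas  = np.array([0.4, 0.5, 0.3, 0.8, 0.9, 1.0])    # 1-sigma uncertainties, m

      x = np.linspace(0.0, 20.0, 2001)
      copd = sum(np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
                 for m, s in zip(offsets, sigmas))           # stacked PDFs (the COPD)

      # Local maxima of the stacked curve ~ preferred cumulative offsets.
      interior = (copd[1:-1] > copd[:-2]) & (copd[1:-1] > copd[2:])
      print("COPD peaks near:", np.round(x[1:-1][interior], 1), "m")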

  12. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Min-Tsai Lai

    Keywords: shock model; cumulative damage model; cumulative repair cost limit; preventive maintenance model. The policy concerns a system subject to two types of shocks: one type is a failure shock, and the other is a damage shock.

  13. Historical earthquake research in Austria

    Science.gov (United States)

    Hammerl, Christa

    2017-12-01

    Austria has a moderate seismicity, and on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I_0 > 8° EMS) occurs significantly less frequently; the average period of recurrence is about 75 years. For this reason, historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly along with the most important studies, and, as an example of a recently carried out case study, one of the strongest earthquakes in Austria's past, the earthquake of 17 July 1670, is presented. The research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.

  14. On interference of cumulative proton production mechanisms

    International Nuclear Information System (INIS)

    Braun, M.A.; Vechernin, V.V.

    1993-01-01

    The dynamical picture of cumulative proton production in hA-collisions is considered by means of a diagram analysis, with the NN interaction described by a non-relativistic NN potential. The contributions of the various mechanisms (spectator, direct and rescattering) to backward-hemisphere proton production are calculated within this common framework. The emphasis is on the comparison of the relative contributions of these mechanisms at various angles, taking into account their interference. A comparison with experimental data is also presented. (author)

  15. Preserved cumulative semantic interference despite amnesia

    Directory of Open Access Journals (Sweden)

    Gary Michael Oppenheim

    2015-05-01

    As predicted by Oppenheim et al.'s (2010) implicit incremental learning account, WRP's BCN RTs demonstrated strong (and significant) repetition priming and semantic blocking effects (Figure 1). Similar to typical results from neurally intact undergraduates, WRP took longer to name pictures presented in semantically homogeneous blocks than in heterogeneous blocks, an effect that increased with each cycle. This result challenges accounts that ascribe cumulative semantic interference in this task to explicit memory mechanisms, instead suggesting that the effect has the sort of implicit learning bases that are typically spared in hippocampal amnesia.

  16. Is cumulated pyrethroid exposure associated with prediabetes?

    DEFF Research Database (Denmark)

    Hansen, Martin Rune; Jørs, Erik; Lander, Flemming

    2014-01-01

    The aim was to investigate an association between exposure to pyrethroids and abnormal glucose regulation (prediabetes or diabetes). A cross-sectional study was performed among 116 pesticide sprayers from public vector control programs in Bolivia and 92 nonexposed controls. Pesticide exposure (duration, intensity, ...) ... pyrethroids, a significant positive trend was observed between cumulative pesticide exposure (total number of hours sprayed) and the adjusted OR of abnormal glucose regulation, with OR 14.7 [0.9-235] in the third exposure quintile. The study found a severely increased prevalence of prediabetes among Bolivian...

  17. Precursory earthquakes of the 1943 eruption of Paricutin volcano, Michoacan, Mexico

    Science.gov (United States)

    Yokoyama, I.; de la Cruz-Reyna, S.

    1990-12-01

    Paricutin volcano is a monogenetic volcano whose birth and growth were observed by modern volcanological techniques. At the time of its birth in 1943, the seismic activity in central Mexico was mainly recorded by the Wiechert seismographs at the Tacubaya seismic station in Mexico City, about 320 km east of the volcano area. In this paper we aim to find any characteristics of the precursory earthquakes of the monogenetic eruption, though there are limits to the available information, such as imprecise hypocenter locations and a lack of data for earthquakes with magnitudes under 3.0. The available data show that the first precursory earthquake occurred on January 7, 1943, with a magnitude of 4.4. Subsequently, 21 earthquakes ranging from 3.2 to 4.5 in magnitude occurred before the outbreak of the eruption on February 20. The (S - P) durations of the precursory earthquakes do not show any systematic changes within the observational errors. The hypocenters were rather shallow and did not migrate. The precursory earthquakes had a characteristic tectonic signature, which was retained through the whole period of activity. However, the spectra of the P-waves of the Paricutin earthquakes show minor differences from those of tectonic earthquakes. This fact helped in the identification of Paricutin earthquakes. Except for the first shock, the maximum earthquake magnitudes show an increasing tendency with time towards the outbreak. The total seismic energy released by the precursory earthquakes amounted to 2 × 10^19 ergs. Considering that statistically there is a threshold of cumulative seismic energy release (10^17-10^18 ergs) by precursory earthquakes in polygenetic volcanoes erupting after long quiescence, the above cumulative energy is exceptionally large. This suggests that a monogenetic volcano may need much more energy to clear the way for magma passage to the earth's surface than a polygenetic one. The magma ascent before the outbreak of Paricutin volcano is interpretable by a model
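
    To make the energy figure concrete, the sketch below sums event energies with the classical Gutenberg-Richter energy-magnitude relation log10 E[erg] = 11.8 + 1.5 M; the magnitude list is a random stand-in for the reported M 3.2-4.5 precursors, not the actual catalog.

      import numpy as np

      rng = np.random.default_rng(0)
      # First shock of M 4.4 plus 21 precursors between M 3.2 and 4.5 (stand-in values).
      mags = np.concatenate([[4.4], rng.uniform(3.2, 4.5, 21)])

      energy_erg = 10.0 ** (11.8 + 1.5 * mags)      # Gutenberg-Richter energy relation
      print(f"cumulative seismic energy ~ {energy_erg.sum():.1e} erg")  # order 10^19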

  18. Earthquake hazard evaluation for Switzerland

    International Nuclear Information System (INIS)

    Ruettener, E.

    1995-01-01

    Earthquake hazard analysis is of considerable importance for Switzerland, a country with moderate seismic activity but high economic values at risk. The evaluation of earthquake hazard, i.e. the determination of return periods versus ground motion parameters, requires a description of earthquake occurrences in space and time. In this study the seismic hazard for major cities in Switzerland is determined. The seismic hazard analysis is based on historic earthquake records as well as instrumental data. The historic earthquake data show considerable uncertainties concerning epicenter location and epicentral intensity. A specific concept is required, therefore, which permits the description of the uncertainties of each individual earthquake. This is achieved by probability distributions for earthquake size and location. Historical considerations, which indicate changes in public earthquake awareness at various times (mainly due to large historical earthquakes), as well as statistical tests have been used to identify time periods of complete earthquake reporting as a function of intensity. As a result, the catalog is judged to be complete since 1878 for all earthquakes with epicentral intensities greater than IV, since 1750 for intensities greater than VI, since 1600 for intensities greater than VIII, and since 1300 for intensities greater than IX. Instrumental data provide accurate information about the depth distribution of earthquakes in Switzerland. In the Alps, focal depths are restricted to the uppermost 15 km of the crust, whereas below the northern Alpine foreland earthquakes are distributed throughout the entire crust (30 km). This depth distribution is considered in the final hazard analysis by probability distributions. (author) figs., tabs., refs
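
    A small sketch of how the completeness periods quoted above can be used to filter a historical catalog, keeping only events from intervals of complete reporting; the event list is hypothetical.

      # (since_year, intensity_threshold): reporting is complete for events with
      # epicentral intensity greater than the threshold from that year onward.
      COMPLETENESS = [(1878, 4), (1750, 6), (1600, 8), (1300, 9)]

      def reporting_complete(year: int, i0: int) -> bool:
          """True if an event of epicentral intensity i0 falls in a complete period."""
          return any(year >= since and i0 > thresh for since, thresh in COMPLETENESS)

      events = [(1356, 10), (1601, 8), (1855, 7), (1880, 5)]   # hypothetical (year, I0)
      usable = [e for e in events if reporting_complete(*e)]
      print(usable)   # -> [(1356, 10), (1855, 7), (1880, 5)]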

  19. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
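
    A toy version of the bin-wise likelihood scoring described above, assuming Poisson rates per bin (the standard RELM assumption); the forecast rates and observed counts are made-up illustrations.

      import numpy as np
      from scipy.stats import poisson

      forecast = np.array([0.5, 2.0, 0.1, 1.2])   # expected events per bin
      observed = np.array([1,   3,   0,   1  ])   # events that actually occurred

      logL = poisson.logpmf(observed, forecast).sum()
      print(f"joint log-likelihood: {logL:.2f}")

      # Consistency (L-) test: compare logL with log-likelihoods of catalogs
      # simulated from the forecast itself; a small quantile flags inconsistency.
      sims = poisson.rvs(forecast, size=(10_000, forecast.size), random_state=0)
      logL_sim = poisson.logpmf(sims, forecast).sum(axis=1)
      print(f"quantile gamma = {(logL_sim < logL).mean():.2f}")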

  20. Chapter 19. Cumulative watershed effects and watershed analysis

    Science.gov (United States)

    Leslie M. Reid

    1998-01-01

    Cumulative watershed effects are environmental changes that are affected by more than one land-use activity and that are influenced by processes involving the generation or transport of water. Almost all environmental changes are cumulative effects, and almost all land-use activities contribute to cumulative effects

  1. Original and cumulative prospect theory: a discussion of empirical differences

    NARCIS (Netherlands)

    Wakker, P.P.; Fennema, H.

    1997-01-01

    This note discusses differences between prospect theory and cumulative prospect theory. It shows that cumulative prospect theory is not merely a formal correction of some theoretical problems in prospect theory, but that it also gives different predictions. Experiments are described that favor cumulative prospect theory.
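
    To make the predictive machinery concrete, the sketch below evaluates a simple gains-only prospect under cumulative prospect theory, using the Tversky-Kahneman (1992) functional forms; the parameter values are their published estimates, used here only as illustrative defaults.

      def w(p: float, gamma: float = 0.61) -> float:
          """Inverse-S probability weighting function for gains."""
          return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

      def cpt_value(outcomes, probs, alpha: float = 0.88) -> float:
          """CPT value of a gains-only prospect: decision weights are differences
          of the weighting function applied to decumulative probabilities."""
          pairs = sorted(zip(outcomes, probs))      # rank outcomes ascending
          value, tail = 0.0, 1.0                    # tail = P(outcome >= current one)
          for x, p in pairs:
              value += (w(tail) - w(tail - p)) * x**alpha
              tail -= p
          return value

      # A 50:50 chance of 100 or nothing.
      print(cpt_value([100.0, 0.0], [0.5, 0.5]))    # ~ 24.2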

  2. Identified EM Earthquake Precursors

    Science.gov (United States)

    Jones, Kenneth, II; Saxton, Patrick

    2014-05-01

    Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude, and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013. The antennae have mobility and observations were noted for

  3. Geophysical Anomalies and Earthquake Prediction

    Science.gov (United States)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  4. Pain after earthquake

    Directory of Open Access Journals (Sweden)

    Angeletti Chiara

    2012-06-01

    Introduction: On 6 April 2009, at 03:32 local time, an Mw 6.3 earthquake hit the Abruzzi region of central Italy, causing widespread damage in the city of L'Aquila and its nearby villages. The earthquake caused 308 casualties and over 1,500 injuries, displaced more than 25,000 people, and induced significant damage to more than 10,000 buildings in the L'Aquila region. Objectives: This observational retrospective study evaluated the prevalence and drug treatment of pain in the five weeks following the L'Aquila earthquake (April 6, 2009). Methods: 958 triage documents were analysed for patients' pain severity, pain type, and treatment efficacy. Results: A third of patients reported pain, a prevalence of 34.6%. More than half of pain patients reported severe pain (58.8%). Analgesic agents were limited to the available drugs: anti-inflammatory agents, paracetamol, and weak opioids. Reduction in verbal numerical pain scores within the first 24 hours after treatment was achieved with the medications at hand. Pain prevalence and characterization exhibited a biphasic pattern, with acute pain syndromes owing to trauma occurring in the first 15 days after the earthquake; traumatic pain then decreased and re-surged at around week five, owing to rebuilding efforts. In the second through fourth week, reports of pain occurred mainly owing to relapses of chronic conditions. Conclusions: This study indicates that pain is prevalent during natural disasters, may exhibit a discernible pattern over the weeks following the event, and current drug treatments in this region may be adequate for emergency situations.

  5. Cumulative Environmental Management Association : Wood Buffalo Region

    International Nuclear Information System (INIS)

    Friesen, B.

    2001-01-01

    The recently announced oil sands development of the Wood Buffalo Region in Alberta was the focus of this PowerPoint presentation. Both mining and in situ development are expected to total $26 billion and 2.6 million barrels per day of bitumen production. This paper described the economic, social and environmental challenges facing the resource development of this region. In addition to the proposed oil sands projects, this region will accommodate the needs of conventional oil and gas production, forestry, the building of pipelines and power lines, municipal development, recreation, tourism, mining exploration and open-cast mining. The Cumulative Environmental Management Association (CEMA) was inaugurated as a non-profit association in April 2000 and includes 41 members from all sectors. Its major role is to ensure a sustainable ecosystem and to avoid any cumulative impacts on wildlife. Other work underway includes the study of soil and plant species diversity, and the effects of air emissions on human health, wildlife and vegetation. The bioaccumulation of heavy metals and their impacts on surface water and fish is also under consideration, to ensure the quality and quantity of surface water and ground water. 3 figs

  6. Fault lubrication during earthquakes.

    Science.gov (United States)

    Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T

    2011-03-24

    The determination of rock friction at seismic slip rates (about 1 m s^-1) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty in determining friction by seismological methods, elucidating constraints are derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s^-1. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.

  7. Housing Damage Following Earthquake

    Science.gov (United States)

    1989-01-01

    An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.

  8. Do Earthquakes Shake Stock Markets?

    Science.gov (United States)

    Ferreira, Susana; Karali, Berna

    2015-01-01

    This paper examines how major earthquakes affected the returns and volatility of aggregate stock market indices in thirty-five financial markets over the last twenty years. Results show that global financial markets are resilient to shocks caused by earthquakes even if these are domestic. Our analysis reveals that, in a few instances, some macroeconomic variables and earthquake characteristics (gross domestic product per capita, trade openness, bilateral trade flows, earthquake magnitude, a tsunami indicator, distance to the epicenter, and number of fatalities) mediate the impact of earthquakes on stock market returns, resulting in a zero net effect. However, the influence of these variables is market-specific, indicating no systematic pattern across global capital markets. Results also demonstrate that stock market volatility is unaffected by earthquakes, except for Japan.

  9. Earthquake engineering for nuclear facilities

    CERN Document Server

    Kuno, Michiya

    2017-01-01

    This book is a comprehensive compilation of earthquake- and tsunami-related technologies and knowledge for the design and construction of nuclear facilities. As such, it covers a wide range of fields including civil engineering, architecture, geotechnical engineering, mechanical engineering, and nuclear engineering, for the development of new technologies providing greater resistance against earthquakes and tsunamis. It is crucial both for students of nuclear energy courses and for young engineers in nuclear power generation industries to understand the basics and principles of earthquake- and tsunami-resistant design of nuclear facilities. In Part I, "Seismic Design of Nuclear Power Plants", the design of nuclear power plants to withstand earthquakes and tsunamis is explained, focusing on buildings, equipment, and civil engineering structures. In Part II, "Basics of Earthquake Engineering", fundamental knowledge of earthquakes and tsunamis as well as the dynamic response of structures and foundation ground...

  10. Earthquake resistant design of structures

    International Nuclear Information System (INIS)

    Choi, Chang Geun; Kim, Gyu Seok; Lee, Dong Geun

    1990-02-01

    This book covers the occurrence of earthquakes and the analysis of earthquake damage; the equivalent static analysis method and its application; dynamic analysis methods such as time-history analysis by mode superposition and by direct integration; and design spectrum analysis for earthquake-resistant design in Korea, including the analysis model and vibration modes, calculation of base shear, calculation of story seismic loads, and combination of analysis results.

  11. The severity of an earthquake

    Science.gov (United States)

    ,

    1997-01-01

    The severity of an earthquake can be expressed in terms of both intensity and magnitude. However, the two terms are quite different, and they are often confused. Intensity is based on the observed effects of ground shaking on people, buildings, and natural features. It varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter. Magnitude is related to the amount of seismic energy released at the hypocenter of the earthquake. It is based on the amplitude of the earthquake waves recorded on instruments
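
    The record's distinction can be made concrete with the classical Richter local-magnitude relation (a standard textbook formula, not quoted in the record itself):

        M_L = \log_{10} A - \log_{10} A_0(\Delta),

    where A is the maximum trace amplitude recorded on a standard instrument and A_0(\Delta) is an empirical correction for epicentral distance \Delta. Each unit of magnitude thus corresponds to a tenfold increase in recorded amplitude, whereas intensity has no instrumental definition at all.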

  12. Evolution model with a cumulative feedback coupling

    Science.gov (United States)

    Trimper, Steffen; Zabrocki, Knud; Schulz, Michael

    2002-05-01

    The paper is concerned with a toy model that generalizes the standard Lotka-Volterra equation for a certain population by introducing a competition between instantaneous and accumulative, history-dependent nonlinear feedback, the origin of which could be a contribution from any kind of mismanagement in the past. The results depend on the sign of that additional cumulative loss or gain term of strength λ. In the case of a positive coupling the system offers a maximum gain achieved after a finite time, but the population will die out in the long time limit. In this case the instantaneous loss term of strength u is irrelevant and the model exhibits an exact solution. In the opposite case λ<0 the time evolution of the system is terminated in a crash after a finite time t_s, provided u=0. This singularity after a finite time can be avoided if u≠0. The approach may well be of relevance for the qualitative understanding of more realistic descriptions.
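
    As a sketch only, one plausible way to write the kind of dynamics described, assuming a logistic baseline (the paper's exact equation is not reproduced in the record):

        \frac{dn}{dt} = n(t)\,[1 - n(t)] - u\,n(t) - \lambda \int_0^{t} n(t')\,dt',

    with u the strength of the instantaneous loss term and \lambda that of the cumulative, history-dependent feedback; \lambda > 0 then produces a transient maximum followed by extinction, while \lambda < 0 with u = 0 drives a finite-time singularity at t_s, matching the behaviour summarized above.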

  13. Psychometric properties of the Cumulated Ambulation Score

    DEFF Research Database (Denmark)

    Ferriero, Giorgio; Kristensen, Morten T; Invernizzi, Marco

    2018-01-01

    INTRODUCTION: In the geriatric population, independent mobility is a key factor in determining readiness for discharge following acute hospitalization. The Cumulated Ambulation Score (CAS) is a potentially valuable score that allows day-to-day measurements of basic mobility. The CAS was developed and validated in older patients with hip fracture as an early postoperative predictor of short-term outcome, but it is also used to assess geriatric in-patients with acute medical illness. Despite the fast-accumulating literature on the CAS, to date no systematic review has synthesized its psychometric properties.... Of 49 studies identified, 17 examined the psychometric properties of the CAS. EVIDENCE SYNTHESIS: Most papers dealt with patients after hip fracture surgery, and only 4 studies assessed the CAS psychometric characteristics also in geriatric in-patients with acute medical illness. Two versions of CAS

  14. Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake

    Science.gov (United States)

    Jones, Lucile M.

    1994-01-01

    The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because nonforeshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
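
    As an illustration only (Jones's actual formulation is not reproduced in the record), a minimal Python sketch of the mechanism described: Omori-law aftershocks of a previous mainshock temporarily inflate the rate of non-foreshock candidate events, lowering the probability that a new nearby event is a foreshock, which then recovers as the aftershocks decay. All rates and parameters below are invented for illustration.

    def omori_rate(t_days, K=10.0, c=0.05, p=1.1):
        """Modified-Omori aftershock rate (events/day) t_days after a prior mainshock."""
        return K / (t_days + c) ** p

    def foreshock_probability(t_days, rate_foreshock=0.02, rate_background=0.5):
        """Probability that a new event near the fault is a foreshock, taken as the
        foreshock rate divided by the total rate of candidate events."""
        total = rate_foreshock + rate_background + omori_rate(t_days)
        return rate_foreshock / total

    for t in (1, 10, 100, 1000):
        print(f"t = {t:4d} days after prior mainshock: P(foreshock) = {foreshock_probability(t):.4f}")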

  15. How complete is the ISC-GEM Global Earthquake Catalog?

    Science.gov (United States)

    Michael, Andrew J.

    2014-01-01

    The International Seismological Centre, in collaboration with the Global Earthquake Model effort, has released a new global earthquake catalog, covering the time period from 1900 through the end of 2009. In order to use this catalog for global earthquake studies, I determined the magnitude of completeness (Mc) as a function of time by dividing the earthquakes shallower than 60 km into 7 time periods based on major changes in catalog processing and data availability and applying 4 objective methods to determine Mc, with uncertainties determined by non-parametric bootstrapping. Deeper events were divided into 2 time periods. Due to differences between the 4 methods, the final Mc was determined subjectively by examining the features that each method focused on in both the cumulative and binned magnitude frequency distributions. The time periods and Mc values for shallow events are: 1900-1917, Mc=7.7; 1918-1939, Mc=7.0; 1940-1954, Mc=6.8; 1955-1963, Mc=6.5; 1964-1975, Mc=6.0; 1976-2003, Mc=5.8; and 2004-2009, Mc=5.7. Using these Mc values for the longest time periods they are valid for (e.g. 1918-2009, 1940-2009,…), the shallow data fit a Gutenberg-Richter distribution with b=1.05 and a=8.3, within 1 standard deviation, with no declustering. The exception is for time periods that include 1900-1917, in which there are only 33 events with M ≥ Mc, and for those few data b=2.15±0.46. That result calls for further investigation of this time period, ideally with a larger number of earthquakes. For deep events, the results are Mc=7.1 for 1900-1963, although the early data are problematic; and Mc=5.7 for 1964-2009. For that later time period, b=0.99 and a=7.3.
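
    As an illustration of the catalog statistics described (a sketch, not the paper's code): the maximum-likelihood b-value of Aki (1965) with a half-bin correction for binned magnitudes, plus a non-parametric bootstrap for its uncertainty, applied here to a synthetic Gutenberg-Richter catalog with b = 1.05 and Mc = 5.7.

    import numpy as np

    def b_value_mle(mags, mc, dm=0.1):
        """Maximum-likelihood b-value (Aki 1965) for magnitudes >= mc, with a
        half-bin correction dm/2 for magnitudes binned at spacing dm."""
        m = np.asarray(mags, dtype=float)
        m = m[m >= mc]
        return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

    def bootstrap_b_std(mags, mc, n_boot=500, seed=0):
        """Non-parametric bootstrap standard deviation of the b-value estimate."""
        rng = np.random.default_rng(seed)
        m = np.asarray(mags, dtype=float)
        m = m[m >= mc]
        return float(np.std([b_value_mle(rng.choice(m, m.size), mc) for _ in range(n_boot)]))

    # Synthetic Gutenberg-Richter catalog: b = 1.05, Mc = 5.7, binned to 0.1 units
    rng = np.random.default_rng(1)
    mags = np.round(5.7 - np.log10(1.0 - rng.random(5000)) / 1.05, 1)
    print(f"b = {b_value_mle(mags, 5.7):.2f} +/- {bootstrap_b_std(mags, 5.7):.2f}")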

  16. Generation of earthquake signals

    International Nuclear Information System (INIS)

    Kjell, G.

    1994-01-01

    Seismic verification can be performed either as a full scale test on a shaker table or as numerical calculations. In both cases it is necessary to have an earthquake acceleration time history. This report describes generation of such time histories by filtering white noise. Analogue and digital filtering methods are compared. Different methods of predicting the response spectrum of a white noise signal filtered by a band-pass filter are discussed. Prediction of both the average response level and the statistical variation around this level are considered. Examples with both the IEEE 301 standard response spectrum and a ground spectrum suggested for Swedish nuclear power stations are included in the report
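
    A minimal sketch of the approach described, assuming Gaussian white noise shaped by a digital band-pass filter and a simple build-up/decay envelope; the corner frequencies, duration, and sampling rate are illustrative and not taken from the report.

    import numpy as np
    from scipy import signal

    fs = 100.0                                   # sampling rate (Hz)
    t = np.arange(0.0, 20.0, 1.0 / fs)           # 20 s record
    rng = np.random.default_rng(0)
    white = rng.standard_normal(t.size)          # Gaussian white noise

    # 4th-order Butterworth band-pass, 1-10 Hz (digital filtering)
    sos = signal.butter(4, [1.0, 10.0], btype="bandpass", fs=fs, output="sos")
    accel = signal.sosfilt(sos, white)

    # Shape the motion with a build-up/decay envelope so it resembles a record
    envelope = np.minimum(1.0, t / 2.0) * np.exp(-np.maximum(0.0, t - 10.0) / 4.0)
    accel *= envelope

    The response spectrum of the result would then be checked against the target spectrum (e.g., the IEEE standard or ground spectrum mentioned above) and the filter adjusted iteratively.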

  17. Earthquakes Threaten Many American Schools

    Science.gov (United States)

    Bailey, Nancy E.

    2010-01-01

    Millions of U.S. children attend schools that are not safe from earthquakes, even though they are in earthquake-prone zones. Several cities and states have worked to identify and repair unsafe buildings, but many others have done little or nothing to fix the problem. The reasons for ignoring the problem include political and financial ones, but…

  18. Make an Earthquake: Ground Shaking!

    Science.gov (United States)

    Savasci, Funda

    2011-01-01

    The main purposes of this activity are to help students explore possible factors affecting the extent of the damage of earthquakes and learn the ways to reduce earthquake damages. In these inquiry-based activities, students have opportunities to develop science process skills and to build an understanding of the relationship among science,…

  19. Earthquake Catalogue of the Caucasus

    Science.gov (United States)

    Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.

    2016-12-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms˜7.0, Io=9); the Lechkhumi-Svaneti earthquake of 1350 (Ms˜7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms˜6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms˜6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms˜6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia ˜25 stations, Azerbaijan ˜35 stations, Armenia ˜14 stations). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus". The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records. We improved locations of the events and recalculated moment magnitudes in order to obtain unified magnitude

  20. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  1. The CATDAT damaging earthquakes database

    Directory of Open Access Journals (Sweden)

    J. E. Daniell

    2011-08-01

    Full Text Available The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.

    Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon.

    Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).

    Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto ($214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>$300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.

    This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  2. The CATDAT damaging earthquakes database

    Science.gov (United States)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto (214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  3. Cumulative radiation dose of multiple trauma patients during their hospitalization

    International Nuclear Information System (INIS)

    Wang Zhikang; Sun Jianzhong; Zhao Zudan

    2012-01-01

    Objective: To study the cumulative radiation dose received by multiple trauma patients during their hospitalization and to analyze the factors influencing dose. Methods: The DLP values for CT and DR were retrospectively collected for patients treated between June 2009 and April 2011 at a university-affiliated hospital. The cumulative radiation doses were calculated by summing typical effective doses of the anatomic regions scanned. Results: The cumulative radiation doses of 113 patients were collected. The maximum, minimum and mean cumulative effective doses were 153.3 mSv, 16.48 mSv and (52.3 ± 26.6) mSv, respectively. Conclusions: Multiple trauma patients have high cumulative radiation exposure; therefore, the management of cumulative radiation doses should be enhanced. Establishing individualized radiation exposure archives will help clinicians and technicians decide whether to image again and how to select imaging parameters. (authors)
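
    As a sketch of the dose-summation step described above (the study's exact conversion coefficients are not given in the record), effective dose can be estimated by multiplying each CT series' dose-length product (DLP) by a region-specific conversion factor and summing over the stay. The k-factors below are commonly tabulated adult coefficients and may differ from those the authors used; the example stay is hypothetical.

    # Commonly tabulated adult k-factors (mSv per mGy*cm); illustrative only.
    K_FACTOR = {"head": 0.0021, "neck": 0.0059, "chest": 0.014, "abdomen_pelvis": 0.015}

    def cumulative_effective_dose(exams):
        """exams: iterable of (body_region, DLP in mGy*cm) -> total effective dose in mSv."""
        return sum(K_FACTOR[region] * dlp for region, dlp in exams)

    # Hypothetical hospital stay of a multiple-trauma patient
    stay = [("head", 900.0), ("chest", 400.0), ("abdomen_pelvis", 600.0)]
    print(f"cumulative effective dose: {cumulative_effective_dose(stay):.1f} mSv")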

  4. 7 CFR 42.132 - Determining cumulative sum values.

    Science.gov (United States)

    2010-01-01

    7 CFR 42.132 - Determining cumulative sum values. (a) The parameters for the on-line cumulative sum sampling plans for AQLs ... (b) At the beginning of the basic inspection period, the CuSum value is set equal to...
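
    The regulation's own parameters are elided above; purely as a generic illustration of how a one-sided CuSum of this kind is maintained (the allowance k, action limit h, and defect counts below are invented, not the 7 CFR 42.132 values):

    def cusum_chart(defect_counts, k=2.0, h=6.0, start=0.0):
        """Generic one-sided CuSum: accumulate (observed defects - allowance k),
        never dropping below zero, and flag when the action limit h is exceeded."""
        s = start
        history = []
        for d in defect_counts:
            s = max(0.0, s + d - k)
            history.append((s, s > h))
        return history

    for s, exceeded in cusum_chart([1, 3, 0, 5, 4, 6]):
        print(f"CuSum = {s:4.1f}   limit exceeded: {exceeded}")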

  5. Improving cumulative effects assessment in Alberta: Regional strategic assessment

    International Nuclear Information System (INIS)

    Johnson, Dallas; Lalonde, Kim; McEachern, Menzie; Kenney, John; Mendoza, Gustavo; Buffin, Andrew; Rich, Kate

    2011-01-01

    The Government of Alberta, Canada is developing a regulatory framework to better manage cumulative environmental effects from development in the province. A key component of this effort is regional planning, which will lay the primary foundation for cumulative effects management into the future. Alberta Environment has considered the information needs of regional planning and has concluded that Regional Strategic Assessment may offer significant advantages if integrated into the planning process, including the overall improvement of cumulative environmental effects assessment in the province.

  6. Children neglected: Where cumulative risk theory fails.

    Science.gov (United States)

    O'Hara, Mandy; Legano, Lori; Homel, Peter; Walker-Descartes, Ingrid; Rojas, Mary; Laraque, Danielle

    2015-07-01

    Neglected children, by far the majority of children maltreated, experience an environment most deficient in cognitive stimulation and language exchange. When physical abuse co-occurs with neglect, there is more stimulation through negative parent-child interaction, which may lead to better cognitive outcomes, contrary to Cumulative Risk Theory. The purpose of the current study was to assess whether children only neglected perform worse on cognitive tasks than children neglected and physically abused. Utilizing LONGSCAN archived data, 271 children only neglected and 101 children neglected and physically abused in the first four years of life were compared. The two groups were assessed at age 6 on the WPPSI-R vocabulary and block design subtests, correlates of cognitive intelligence. Regression analyses were performed, controlling for additional predictors of poor cognitive outcome, including socioeconomic variables and caregiver depression. Children only neglected scored significantly worse than children neglected and abused on the WPPSI-R vocabulary subtest (p=0.03). The groups did not differ on the block design subtest (p=0.4). This study shows that for neglected children, additional abuse may not additively accumulate risk when considering intelligence outcomes. Children experiencing only neglect may need to be referred for services that address cognitive development, with emphasis on the linguistic environment, in order to best support the developmental challenges of neglected children.

  7. Analysis of Memory Codes and Cumulative Rehearsal in Observational Learning

    Science.gov (United States)

    Bandura, Albert; And Others

    1974-01-01

    The present study examined the influence of memory codes varying in meaningfulness and retrievability and cumulative rehearsal on retention of observationally learned responses over increasing temporal intervals. (Editor)

  8. Landslides triggered by the 1946 Ancash earthquake, Peru

    Science.gov (United States)

    Kampherm, T. S.; Evans, S. G.; Valderrama Murillo, P.

    2009-04-01

    The 1946 M7.3 Ancash Earthquake triggered a large number of landslides in an epicentral area that straddled the Continental Divide of South America in the Andes of Peru. A small number of landslides were described in reconnaissance reports by E. Silgado and Arnold Heim published shortly after the earthquake, but further details of the landslides triggered by the earthquake have not been reported since. Utilising field traverses, aerial photograph interpretation and GIS, our study mapped 45 landslides inferred to have been triggered by the event. 83% were rock avalanches involving Cretaceous limestones interbedded with shales. The five largest rock/debris avalanches occurred at Rio Llama (est. vol. 37 Mm3), Suytucocha (est. vol. 13.5 Mm3), Quiches (est. vol. 10.5 Mm3), Pelagatos (est. vol. 8 Mm3), and Shundoy (est. vol. 8 Mm3). The Suytucocha, Quiches, and Pelagatos landslides were reported by Silgado and Heim. Rock slope failure was most common on slopes with a southwest aspect, an orientation corresponding to the regional dip direction of major planar structures in the Andean foreland belt (bedding planes and thrust faults). In valleys oriented transverse to the NW-SE structural grain of the epicentral area, south-westerly dipping bedding planes combined with orthogonal joint sets to form numerous wedge failures. Many initial rock slope failures were transformed into rock/debris avalanches by the entrainment of colluvium in their path. At Acobamba, a rock avalanche that transformed into a debris avalanche (est. vol. 4.3 Mm3) overwhelmed a village resulting in the deaths of 217 people. The cumulative volume-frequency plot shows a strong power law relation below a marked rollover, similar in form to that derived for landslides triggered by the 1994 Northridge Earthquake. The total volume of the 45 landslides is approximately 93 Mm3. The data point for the Ancash Earthquake plots near the regression line calculated by Keefer (1994), and modified by Malamud et al

  9. Stochastic evaluation of the dynamic response and the cumulative damage of nuclear power plant piping

    International Nuclear Information System (INIS)

    Suzuki, Kohei; Aoki, Shigeru; Hanaoka, Masaaki

    1981-01-01

    This report deals with a fundamental study evaluating uncertainties in nuclear piping response and cumulative damage under excess-earthquake loadings. The main purposes of this study cover the following problems: (1) experimental estimation of the uncertainties in the dynamic response and cumulative failure, using a piping test model; (2) numerical simulation by the Monte Carlo method under the assumption that the relation between restoring force and deformation is perfectly elasto-plastic (checking the mathematical model); (3) development of a conventional uncertainty estimation method by introducing a perturbation technique based on an appropriate equivalent linearization (checking the estimation technique); and (4) application of this method to more realistic cases. Through the above procedures some important results are obtained. First, fundamental statistical properties of the natural frequencies and of the number of cycles to failure-crack initiation are evaluated. Second, the effects of the frequency fluctuation and the yielding fluctuation are estimated and examined through the Monte Carlo simulation technique. It has become clear that the yielding fluctuation has a significant effect on the piping response up to failure initiation. Finally, some results obtained with the proposed perturbation technique are discussed; the statistical properties estimated coincide fairly well with those from numerical simulation. (author)

  10. Earthquake Emergency Education in Dushanbe, Tajikistan

    Science.gov (United States)

    Mohadjer, Solmaz; Bendick, Rebecca; Halvorson, Sarah J.; Saydullaev, Umed; Hojiboev, Orifjon; Stickler, Christine; Adam, Zachary R.

    2010-01-01

    We developed a middle school earthquake science and hazards curriculum to promote earthquake awareness to students in the Central Asian country of Tajikistan. These materials include pre- and post-assessment activities, six science activities describing physical processes related to earthquakes, five activities on earthquake hazards and mitigation…

  11. Determination of Design Basis Earthquake ground motion

    International Nuclear Information System (INIS)

    Kato, Muneaki

    1997-01-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including earthquake sources to be considered, earthquake response spectra and simulated seismic waves. In the appendix of this paper, furthermore, seismic safety reviews for nuclear power plants designed before publication of the Examination Guide are summarized with the Check Basis Earthquake. (J.P.N.)

  12. Determination of Design Basis Earthquake ground motion

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Muneaki [Japan Atomic Power Co., Tokyo (Japan)

    1997-03-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including earthquake sources to be considered, earthquake response spectra and simulated seismic waves. In the appendix of this paper, furthermore, seismic safety reviews for nuclear power plants designed before publication of the Examination Guide are summarized with the Check Basis Earthquake. (J.P.N.)

  13. Physics of Earthquake Rupture Propagation

    Science.gov (United States)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  14. Radon observation for earthquake prediction

    Energy Technology Data Exchange (ETDEWEB)

    Wakita, Hiroshi [Tokyo Univ. (Japan)

    1998-12-31

    Systematic observation of groundwater radon for the purpose of earthquake prediction began in Japan in late 1973. Continuous observations are conducted at fixed stations using deep wells and springs. During the observation period, significant precursory changes including the 1978 Izu-Oshima-kinkai (M7.0) earthquake as well as numerous coseismic changes were observed. At the time of the 1995 Kobe (M7.2) earthquake, significant changes in chemical components, including radon dissolved in groundwater, were observed near the epicentral region. Precursory changes are presumably caused by permeability changes due to micro-fracturing in basement rock or migration of water from different sources during the preparation stage of earthquakes. Coseismic changes may be caused by seismic shaking and by changes in regional stress. Significant drops of radon concentration in groundwater have been observed after earthquakes at the KSM site. The occurrence of such drops appears to be time-dependent, and possibly reflects changes in the regional stress state of the observation area. The absence of radon drops seems to be correlated with periods of reduced regional seismic activity. Experience accumulated over the past two decades allows us to reach some conclusions: 1) changes in groundwater radon do occur prior to large earthquakes; 2) some sites are particularly sensitive to earthquake occurrence; and 3) the sensitivity changes over time. (author)

  15. Earthquake prediction by Kina Method

    International Nuclear Information System (INIS)

    Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.

    2005-01-01

    Earthquake prediction has been one of mankind's earliest desires, and scientists have worked hard to predict earthquakes for a long time. The results of these efforts can generally be divided into two methods of prediction: 1) the statistical method, and 2) the empirical method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time consuming and more costly. However, the results of neither method have been fully satisfactory up to now. In this paper a new method entitled the 'Kiana Method' is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then, the time and the magnitude of a future earthquake are calculated using electrical formulas, in particular those for electrical capacitors. In this method, by daily measurement of electrical resistance in an area, we establish whether or not the area is capable of earthquake occurrence in the future. If the result shows a positive sign, then the occurrence time and the magnitude can be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)

  16. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the time period 1900-2015 with magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distributions in this region.
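
    A minimal sketch of this kind of model comparison using scipy, on synthetic inter-event times; the study's catalogue, its Easyfit/Matlab workflow, and the fitted parameters are not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    waits = rng.weibull(1.5, size=200) * 20.0    # synthetic M >= 6.0 inter-event times (yr)

    # Two- and three-parameter Weibull (location fixed at 0 vs free) and Frechet
    fits = {
        "Weibull (2-par)": (stats.weibull_min, stats.weibull_min.fit(waits, floc=0.0)),
        "Weibull (3-par)": (stats.weibull_min, stats.weibull_min.fit(waits)),
        "Frechet":         (stats.invweibull,  stats.invweibull.fit(waits, floc=0.0)),
    }
    for name, (dist, params) in fits.items():
        d_stat, p_val = stats.kstest(waits, dist.cdf, args=params)
        print(f"{name:15s}  K-S D = {d_stat:.3f}   p = {p_val:.3f}")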

  17. Foreshocks and aftershocks of strong earthquakes in the light of catastrophe theory

    Science.gov (United States)

    Guglielmi, A. V.

    2015-04-01

    In this review, general ideas and specific results from catastrophe theory and the theory of critical phenomena are applied to the analysis of strong earthquakes. Aspects given particular attention are the sharp rise in the fluctuation level, the increased reactivity of dynamical systems in the near-threshold region, and other anomalous phenomena similar to critical opalescence. Given the lack of a sufficiently complete theory of earthquakes, this appears to be a valid approach to the analysis of observations. The study performed brought out some nontrivial properties of a strong-earthquake source that manifest themselves both before and after the main rupture discontinuity forms at the mainshock. In the course of the analysis of the foreshocks and aftershocks, such concepts as the round-the-world seismic echo, the cumulative effect of converging surface waves on the epicentral zone, and global seismicity modulation by Earth's free oscillations are introduced. Further research in this field is likely to be interesting and promising.

  18. Precisely locating the Klamath Falls, Oregon, earthquakes

    Science.gov (United States)

    Qamar, A.; Meagher, K.L.

    1993-01-01

    The Klamath Falls earthquakes on September 20, 1993, were the largest earthquakes centered in Oregon in more than 50 yrs. Only the magnitude 5.75 Milton-Freewater earthquake in 1936, which was centered near the Oregon-Washington border and felt in an area of about 190,000 sq km, compares in size with the recent Klamath Falls earthquakes. Although the 1993 earthquakes surprised many local residents, geologists have long recognized that strong earthquakes may occur along potentially active faults that pass through the Klamath Falls area. These faults are geologically related to similar faults in Oregon, Idaho, and Nevada that occasionally spawn strong earthquakes

  19. Ionospheric phenomena before strong earthquakes

    Directory of Open Access Journals (Sweden)

    A. S. Silina

    2001-01-01

    Full Text Available A statistical analysis of several ionospheric parameters before earthquakes with magnitude M > 5.5 located less than 500 km from an ionospheric vertical sounding station is performed. Ionospheric effects preceding "deep" (depth h > 33 km) and "crust" (h ≤ 33 km) earthquakes were analysed separately. Data of nighttime measurements of the critical frequencies foF2 and foEs, the frequency fbEs and Es-spread at the middle-latitude station Dushanbe were used. The frequencies foF2 and fbEs are proportional to the square root of the ionization density at heights of 300 km and 100 km, respectively. It is shown that two days before the earthquakes the values of foF2 averaged over the morning hours (00:00 LT–06:00 LT) and of fbEs averaged over the nighttime hours (18:00 LT–06:00 LT) decrease; the effect is stronger for the "deep" earthquakes. Analysing the coefficient of semitransparency, which characterizes the degree of small-scale turbulence, it was shown that this value increases 1–4 days before "crust" earthquakes, and it does not change before "deep" earthquakes. Studying Es-spread, which manifests itself as a diffuse Es track on ionograms and characterizes the degree of large-scale turbulence, it was found that the number of Es-spread observations increases 1–3 days before the earthquakes; for "deep" earthquakes the effect is more intensive. Thus it may be concluded that different mechanisms of energy transfer from the region of earthquake preparation to the ionosphere occur for "deep" and "crust" events.

  20. The Pocatello Valley, Idaho, earthquake

    Science.gov (United States)

    Rogers, A. M.; Langer, C.J.; Bucknam, R.C.

    1975-01-01

    A Richter magnitude 6.3 earthquake occurred at 8:31 p.m. Mountain Daylight Time on March 27, 1975, near the Utah-Idaho border in Pocatello Valley. The epicenter of the main shock was located at 42.094° N, 112.478° W, at a focal depth of 5.5 km. This earthquake was the largest in the continental United States since the destructive San Fernando earthquake of February 1971. The main shock was preceded by a magnitude 4.5 foreshock on March 26.

  1. The threat of silent earthquakes

    Science.gov (United States)

    Cervelli, Peter

    2004-01-01

    Not all earthquakes shake the ground. The so-called silent types are forcing scientists to rethink their understanding of the way quake-prone faults behave. In rare instances, silent earthquakes that occur along the flanks of seaside volcanoes may cascade into monstrous landslides that crash into the sea and trigger towering tsunamis. Silent earthquakes that take place within fault zones created by one tectonic plate diving under another may increase the chance of ground-shaking shocks. In other locations, however, silent slip may decrease the likelihood of destructive quakes, because they release stress along faults that might otherwise seem ready to snap.

  2. USGS Earthquake Program GPS Use Case : Earthquake Early Warning

    Science.gov (United States)

    2015-03-12

    USGS GPS receiver use case. Item 1 - High Precision User (federal agency with Stafford Act hazard alert responsibilities for earthquakes, volcanoes and landslides nationwide). Item 2 - Description of Associated GPS Application(s): The USGS Eart...

  3. EARTHQUAKE-INDUCED DEFORMATION STRUCTURES AND THEIR RELATION TO EARTHQUAKE MAGNITUDES

    Directory of Open Access Journals (Sweden)

    Savaş TOPAL

    2003-02-01

    Full Text Available Earthquake-induced deformation structures, which are called seismites, may be helpful in classifying the paleoseismic history of a location and in estimating the magnitudes of potential future earthquakes. In this paper, seismites were investigated according to the types formed in deep and shallow lake sediments. Seismites are observed in the form of sand dikes, introduced and fractured gravels and pillow structures in shallow lakes, and as pseudonodules, mushroom-like silts protruding into laminites, mixed layers, disturbed varved lamination and loop bedding in deep lake sediments. Drawing on previous studies, earthquake-induced deformation structures were ordered according to their formation and the corresponding earthquake magnitudes. In this ordering, the structure recording the lowest earthquake magnitudes is loop bedding, and the highest is introduced and fractured gravels in lacustrine deposits.

  4. Cumulative Effect of Depression on Dementia Risk

    Directory of Open Access Journals (Sweden)

    J. Olazarán

    2013-01-01

    Full Text Available Objective. To analyze a potential cumulative effect of life-time depression on dementia and Alzheimer's disease (AD), with control of vascular factors (VFs). Methods. This study was a subanalysis of the Neurological Disorders in Central Spain (NEDICES) study. Past and present depression, VFs, dementia status, and dementia due to AD were documented at study inception. Dementia status was also documented after three years. Four groups were created according to baseline data: never depression (nD), past depression (pD), present depression (prD), and present and past depression (prpD). Logistic regression was used. Results. Data of 1,807 subjects were investigated at baseline (mean age 74.3, 59.3% women), and 1,376 (81.6%) subjects were evaluated after three years. The prevalence of dementia at baseline was 6.7%, and dementia incidence was 6.3%. An effect of depression was observed on dementia prevalence (OR [95% CI] 1.84 [1.01–3.35] for prD and 2.73 [1.08–6.87] for prpD), and on dementia due to AD (OR 1.98 [0.98–3.99] for prD and OR 3.98 [1.48–10.71] for prpD) (fully adjusted models, nD as reference). Depression did not influence dementia incidence. Conclusions. Present depression and, particularly, present and past depression are associated with dementia at old age. Multiple mechanisms, including a toxic effect of depression on hippocampal neurons, plausibly explain these associations.

  5. Quantitative cumulative biodistribution of antibodies in mice

    Science.gov (United States)

    Yip, Victor; Palma, Enzo; Tesar, Devin B; Mundo, Eduardo E; Bumbaca, Daniela; Torres, Elizabeth K; Reyes, Noe A; Shen, Ben Q; Fielder, Paul J; Prabhu, Saileta; Khawli, Leslie A; Boswell, C Andrew

    2014-01-01

    The neonatal Fc receptor (FcRn) plays an important and well-known role in antibody recycling in endothelial and hematopoietic cells and thus it influences the systemic pharmacokinetics (PK) of immunoglobulin G (IgG). However, considerably less is known about FcRn's role in the metabolism of IgG within individual tissues after intravenous administration. To elucidate the organ distribution and gain insight into the metabolism of humanized IgG1 antibodies with different binding affinities for FcRn, comparative biodistribution studies in normal CD-1 mice were conducted. Here, we generated variants of a herpes simplex virus glycoprotein D-specific antibody (humanized anti-gD) with increased and decreased FcRn binding affinity by genetic engineering without affecting antigen specificity. These antibodies were expressed in Chinese hamster ovary cell lines, purified, and pair-radiolabeled with iodine-125 and indium-111. Equal amounts of I-125-labeled and In-111-labeled antibodies were mixed and intravenously administered into mice at 5 mg/kg. This approach allowed us to measure both the real-time IgG uptake (I-125) and the cumulative uptake of IgG and catabolites (In-111) in individual tissues up to 1 week post-injection. The PK and distribution of the wild-type IgG and the variant with enhanced binding for FcRn were largely similar to each other, but vastly different for the rapidly cleared low-FcRn-binding variant. Uptake in individual tissues varied across time, FcRn binding affinity, and radiolabeling method. The liver and spleen emerged as the most concentrated sites of IgG catabolism in the absence of FcRn protection. These data provide an increased understanding of FcRn's role in antibody PK and catabolism at the tissue level. PMID:24572100

  6. Twitter earthquake detection: Earthquake monitoring in a social world

    Science.gov (United States)

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
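
    A minimal sketch of the detector described, assuming a per-minute tweet-count series; the window lengths, trigger threshold, and synthetic data below are illustrative, not the USGS-tuned values.

    import numpy as np

    def sta_lta(counts, n_sta=2, n_lta=60, eps=1e-9):
        """Short-term-average / long-term-average ratio of a count series.
        (A causal detector would lag the LTA window; 'same' keeps this short.)"""
        counts = np.asarray(counts, dtype=float)
        sta = np.convolve(counts, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(counts, np.ones(n_lta) / n_lta, mode="same")
        return sta / (lta + eps)

    rng = np.random.default_rng(0)
    tweets_per_min = rng.poisson(2, 180).astype(float)   # background chatter
    tweets_per_min[120:123] = [80.0, 120.0, 60.0]        # burst after shaking
    triggers = np.flatnonzero(sta_lta(tweets_per_min) > 10.0)
    print("possible detections at minutes:", triggers)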

  7. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into non-scaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
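
    As a sketch of the block-maxima fitting described (synthetic heavy-tailed data; the paper's Chinese catalogue is not reproduced), scipy's generalized extreme value family covers the Frechet, Gumbel and Weibull cases through the sign of its shape parameter:

    import numpy as np
    from scipy import stats

    block_energies = stats.pareto.rvs(b=1.5, size=(100, 50), random_state=0)
    block_maxima = block_energies.max(axis=1)            # heavy-tailed block maxima

    c, loc, scale = stats.genextreme.fit(block_maxima)
    xi = -c        # scipy's shape c is the negative of the usual GEV index xi
    family = "Frechet" if xi > 0.05 else ("Weibull" if xi < -0.05 else "Gumbel")
    print(f"xi = {xi:.2f} -> {family} domain of attraction")

    # The energy->magnitude (log) transformation tames the tail, pushing the
    # maxima toward the Gumbel case, as the abstract notes.
    c_log, _, _ = stats.genextreme.fit(np.log10(block_maxima))
    print(f"xi after log transform = {-c_log:.2f}")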

  8. A Framework for Treating Cumulative Trauma with Art Therapy

    Science.gov (United States)

    Naff, Kristina

    2014-01-01

    Cumulative trauma is relatively undocumented in art therapy practice, although there is growing evidence that art therapy provides distinct benefits for resolving various traumas. This qualitative study proposes an art therapy treatment framework for cumulative trauma derived from semi-structured interviews with three art therapists and artistic…

  9. Cumulative effects of forest management activities: how might they occur?

    Science.gov (United States)

    R. M. Rice; R. B. Thomas

    1985-01-01

    Concerns are often voiced about possible environmental damage as the result of the cumulative sedimentation effects of logging and forest road construction. In response to these concerns, National Forests are developing procedures to reduce the possibility that their activities may lead to unacceptable cumulative effects

  10. Cumulative effect in multiple production processes on nuclei

    International Nuclear Information System (INIS)

    Golubyatnikova, E.S.; Shmonin, V.L.; Kalinkin, B.N.

    1989-01-01

    It is shown that the cumulative effect is a natural result of the process of hadron multiple production in nuclear reactions. Interpretation is made of the universality of slopes of inclusive spectra and other characteristics of cumulative hadrons. The character of information from such reactions is discussed, which could be helpful in studying the mechanism of multiparticle production. 27 refs.; 4 figs

  11. Cumulative particle production in the quark recombination model

    International Nuclear Information System (INIS)

    Gavrilov, V.B.; Leksin, G.A.

    1987-01-01

    Production of cumulative particles in hadron-nucleus interactions at high energies is considered within the framework of the recombination quark model. Predictions are made for the inclusive cross sections of production of cumulative particles and of different resonances containing quarks in the s state.

  12. High cumulants of conserved charges and their statistical uncertainties

    Science.gov (United States)

    Li-Zhu, Chen; Ye-Yin, Zhao; Xue, Pan; Zhi-Ming, Li; Yuan-Fang, Wu

    2017-10-01

    We study the influence of measured high cumulants of conserved charges on their associated statistical uncertainties in relativistic heavy-ion collisions. With a given number of events, the measured cumulants randomly fluctuate with an approximately normal distribution, while the estimated statistical uncertainties are found to be correlated with corresponding values of the obtained cumulants. Generally, with a given number of events, the larger the cumulants we measure, the larger the statistical uncertainties that are estimated. The error-weighted averaged cumulants are dependent on statistics. Despite this effect, however, it is found that the three sigma rule of thumb is still applicable when the statistics are above one million. Supported by NSFC (11405088, 11521064, 11647093), Major State Basic Research Development Program of China (2014CB845402) and Ministry of Science and Technology (MoST) (2016YFE0104800)
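
    A minimal sketch of the effect discussed, assuming synthetic Poisson multiplicities (for which all cumulants equal the mean): measured cumulants fluctuate from sample to sample, and bootstrap errors can be estimated alongside them.

    import numpy as np

    def cumulants(x):
        """First four cumulants C1..C4 from central moments of a sample."""
        x = np.asarray(x, dtype=float)
        mu = x.mean()
        d = x - mu
        m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
        return np.array([mu, m2, m3, m4 - 3.0 * m2**2])

    rng = np.random.default_rng(0)
    sample = rng.poisson(5.0, size=50_000)      # for a Poisson, C1=C2=C3=C4=mean

    # Bootstrap resampling: each replica gives a fluctuating cumulant estimate
    boot = np.array([cumulants(rng.choice(sample, sample.size))
                     for _ in range(200)])
    for i, name in enumerate(["C1", "C2", "C3", "C4"]):
        print(f"{name} = {boot[:, i].mean():7.3f} +/- {boot[:, i].std():.3f}")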

  13. Centrality in earthquake multiplex networks

    Science.gov (United States)

    Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.

    2018-06-01

    Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
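
    A minimal sketch of the centrality computation described, assuming an invented toy sequence of epicentral cells; the Iranian and Californian catalogues and the multiplex layering are not reproduced here.

    import numpy as np

    def eigenvector_centrality(adj, n_iter=200, tol=1e-10):
        """Leading-eigenvector (Perron) centrality by power iteration."""
        v = np.ones(adj.shape[0]) / adj.shape[0]
        for _ in range(n_iter):
            w = adj @ v
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:
                break
            v = w
        return v

    cells = [0, 2, 1, 2, 3, 2]                  # successive epicentral cells (toy data)
    n = max(cells) + 1
    adj = np.zeros((n, n))
    for a, b in zip(cells[:-1], cells[1:]):     # link consecutive events' cells
        adj[a, b] += 1.0
        adj[b, a] += 1.0                        # treat links as undirected
    print(eigenvector_centrality(adj))          # cell 2 should dominate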

  14. Earthquake Triggering in the September 2017 Mexican Earthquake Sequence

    Science.gov (United States)

    Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.

    2017-12-01

    Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extended > 150 km NW from the hypocenter, longer than slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress

  15. Towards Greenland Glaciation: cumulative or abrupt transition?

    Science.gov (United States)

    Ramstein, Gilles; Tan, Ning; Ladant, Jean-baptiste; Dumas, Christophe; Contoux, Camille

    2017-04-01

    During the mid-Pliocene warm period (3-3.3 Ma BP), global annual mean temperatures inferred from data and model studies were 2-3 °C warmer than pre-industrial values. Accordingly, the Greenland ice sheet is thought to have reached, at most, only half of its present-day volume [Haywood et al. 2010]. Around 2.7-2.6 Ma BP, just ˜500 kyr after the mid-Pliocene warming peak, the Greenland ice sheet had reached its full size [Lunt et al. 2008]. A crucial question concerns the evolution of the Greenland ice sheet from half to full size during the 3-2.5 Ma period. Data show a decreasing trend of atmospheric CO2 concentration from 3 Ma to 2.5 Ma [Seki et al. 2010; Bartoli et al. 2011; Martinez et al. 2015]. However, a recent study [Contoux et al. 2015] suggests that a lowering of CO2 is not sufficient to initiate a perennial glaciation on Greenland and must be combined with low summer insolation to preserve the ice sheet during insolation maxima. This suggests a cumulative process rather than an abrupt event. In order to diagnose the evolution of the ice-sheet build-up, we carry out, for the first time, a transient simulation of climate and ice-sheet evolution from 3 Ma to 2.5 Ma. This strategy enables us to investigate the waxing and waning of the ice sheet over several orbital cycles. We use a three-dimensional interpolation method designed by Ladant et al. (2014), which allows the evolution of CO2 concentration, of orbital parameters, and of the Greenland ice sheet size to be taken into account. By interpolating climatic snapshot simulations run with various possible combinations of CO2, orbits and ice sheet sizes, we can build a continuous climatic forcing that is then used to drive 500 kyr-long ice-sheet simulations. With such a tool, we may offer a physically based answer to the different CO2 reconstruction scenarios and analyse which one is most consistent with Greenland ice-sheet buildup.

  16. The GIS and analysis of earthquake damage distribution of the 1303 Hongtong M=8 earthquake

    Science.gov (United States)

    Gao, Meng-Tan; Jin, Xue-Shen; An, Wei-Ping; Lü, Xiao-Jian

    2004-07-01

    A geographic information system (GIS) for the 1303 Hongtong M=8 earthquake has been established. Using the spatial analysis functions of GIS, the spatial distribution characteristics of damage and the isoseismals of the earthquake are studied. By comparison with a standard earthquake intensity attenuation relationship, anomalies in the damage distribution of the earthquake are found, and the relationships of these anomalies with tectonics, site conditions and basins are analyzed. The influence on ground motion of the earthquake source and of the underground structures near the source is also studied. The implications of the anomalous damage distribution for seismic zonation, earthquake-resistant design, earthquake prediction and earthquake emergency response are discussed.

  17. Earthquake data base for Romania

    International Nuclear Information System (INIS)

    Rizescu, M.; Ghica, D.; Grecu, B.; Popa, M.; Borcia, I. S.

    2002-01-01

    A new earthquake database for Romania is being constructed, comprising complete earthquake information and being up-to-date, user-friendly and rapidly accessible. One main component of the database consists of the catalog of earthquakes that have occurred in Romania from 984 up to the present. The catalog contains information related to locations and other source parameters, when available, and links to waveforms of important earthquakes. The other very important component is the 'strong motion database', developed for strong intermediate-depth Vrancea earthquakes for which instrumental data were recorded. Different parameters characterizing strong motion properties, such as effective peak acceleration, effective peak velocity, corner periods Tc and Td, and global response spectrum based intensities, were computed and stored in this database. Also included is information on the recording seismic stations: maps giving their positions, photographs of the instruments, and site conditions ('free-field' or on buildings). By the huge volume and quality of the gathered data, and by its friendly user interface, the Romanian earthquake database provides a very useful tool for the geosciences and civil engineering in their effort towards reducing seismic risk in Romania. (authors)

  18. Mapping Tectonic Stress Using Earthquakes

    International Nuclear Information System (INIS)

    Arnold, Richard; Townend, John; Vignaux, Tony

    2005-01-01

    An earthquake occurs when the forces acting on a fault overcome its intrinsic strength and cause it to slip abruptly. Understanding more specifically why earthquakes occur at particular locations and times is complicated because in many cases we do not know what these forces actually are, or indeed what processes ultimately trigger slip. The goal of this study is to develop, test, and implement a Bayesian method of reliably determining tectonic stresses using the most abundant stress gauges available - earthquakes themselves. Existing algorithms produce reasonable estimates of the principal stress directions, but yield unreliable error bounds as a consequence of the generally weak constraint on stress imposed by any single earthquake, observational errors, and an unavoidable ambiguity between the fault normal and the slip vector. A statistical treatment of the problem can take into account observational errors, combine data from multiple earthquakes in a consistent manner, and provide realistic error bounds on the estimated principal stress directions. We have developed a realistic physical framework for modelling multiple earthquakes and show how the strong physical and geometrical constraints present in this problem allow inference to be made about the orientation of the principal axes of stress in the earth's crust.

  19. Swedish earthquakes and acceleration probabilities

    International Nuclear Information System (INIS)

    Slunga, R.

    1979-03-01

    A method to assign probabilities to ground accelerations for Swedish sites is described. As hardly any near-field instrumental data are available, we are left with the problem of interpreting macroseismic data in terms of acceleration. By theoretical wave propagation computations, the relation between the seismic strength of the earthquake, focal depth, distance and ground acceleration is calculated. We found that most Swedish earthquakes are shallow; the 1904 earthquake 100 km south of Oslo, the largest of the area, is an exception and probably had a focal depth exceeding 25 km. For the nuclear power plant sites an annual probability of 10^-5 has been proposed as interesting. This probability gives ground accelerations in the range 5-20 % g for the sites. This acceleration is for a free bedrock site. For consistency, all acceleration results in this study are given for bedrock sites. When applying our model to the 1904 earthquake and assuming the focal zone to be in the lower crust, we get an epicentral acceleration for this earthquake of 5-15 % g. The results above are based on an analysis of macroseismic data, as relevant instrumental data are lacking. However, the macroseismic acceleration model deduced in this study gives epicentral ground accelerations of small Swedish earthquakes in agreement with the existing distant instrumental data. (author)

  20. Building with Earthquakes in Mind

    Science.gov (United States)

    Mangieri, Nicholas

    2016-04-01

    Earthquakes are some of the most elusive and destructive disasters humans interact with on this planet. Engineering structures to withstand earthquake shaking is critical to ensure minimal loss of life and property. However, the majority of buildings today in non-traditional earthquake prone areas are not built to withstand this devastating force. Understanding basic earthquake engineering principles and the effect of limited resources helps students grasp the challenge that lies ahead. The solution can be found in retrofitting existing buildings with proper reinforcements and designs to deal with this deadly disaster. The students were challenged in this project to construct a basic structure, using limited resources, that could withstand a simulated tremor through the use of an earthquake shake table. Groups of students had to work together to creatively manage their resources and ideas to design the most feasible and realistic type of building. This activity provided a wealth of opportunities for the students to learn more about a type of disaster they do not experience in this part of the country. Due to the fact that most buildings in New York City were not designed to withstand earthquake shaking, the students were able to gain an appreciation for how difficult it would be to prepare every structure in the city for this type of event.

  1. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  2. Earthquake damage to underground facilities

    International Nuclear Information System (INIS)

    Pratt, H.R.; Hustrulid, W.A.; Stephenson, D.E.

    1978-11-01

    The potential seismic risk for an underground nuclear waste repository will be one of the considerations in evaluating its ultimate location. However, the risk to subsurface facilities cannot be judged by applying intensity ratings derived from the surface effects of an earthquake. A literature review and analysis were performed to document the damage and non-damage due to earthquakes to underground facilities. Damage from earthquakes to tunnels, shafts, and wells, and damage (rock bursts) from mining operations were investigated. Damage from documented nuclear events was also included in the study where applicable. There are very few data on damage in the subsurface due to earthquakes. This fact itself attests to the lessened effect of earthquakes in the subsurface, because mines exist in areas where strong earthquakes have done extensive surface damage. More damage is reported in shallow tunnels near the surface than in deep mines. In mines and tunnels, large displacements occur primarily along pre-existing faults and fractures or at the surface entrances to these facilities. Data indicate vertical structures such as wells and shafts are less susceptible to damage than surface facilities. More analysis is required before seismic criteria can be formulated for the siting of a nuclear waste repository.

  3. Global earthquake fatalities and population

    Science.gov (United States)

    Holzer, Thomas L.; Savage, James C.

    2013-01-01

    Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
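
    The projection described above can be reproduced approximately: with the catastrophic-event rate proportional to world population, the proportionality constant is calibrated on the 20th century (4 events) and integrated over a 21st-century population trajectory. A rough sketch with round population numbers, so the output is only in the neighborhood of the paper's 8.7±3.3 rather than matching it.

        # Nonstationary Poisson with rate proportional to world population:
        # calibrate on 20th-century counts, project the 21st century.
        # Population trajectory values are rough illustrative numbers.
        import numpy as np

        def integral(y, x):  # simple trapezoid rule
            return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

        years20 = np.linspace(1900.0, 2000.0, 101)
        pop20 = np.interp(years20, [1900.0, 1950.0, 2000.0], [1.6, 2.5, 6.1])  # billions
        years21 = np.linspace(2000.0, 2100.0, 101)
        pop21 = np.interp(years21, [2000.0, 2050.0, 2100.0], [6.1, 9.0, 10.1])

        k = 4.0 / integral(pop20, years20)   # events per (billion person-years)
        print(f"expected 21st-century catastrophic events: {k * integral(pop21, years21):.1f}")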

  4. An Analysis of Cumulative Risks Indicated by Biomonitoring Data of Six Phthalates Using the Maximum Cumulative Ratio

    Science.gov (United States)

    The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single component of a chemical mixture drives the cumulative risk of a receptor [1]. This study used the MCR, the Hazard Index (HI) and Hazard Quotient (HQ) to evaluate co-exposures to six phthalates using biomonito...

  5. An analysis of cumulative risks based on biomonitoring data for six phthalates using the Maximum Cumulative Ratio

    Science.gov (United States)

    The Maximum Cumulative Ratio (MCR) quantifies the degree to which a single chemical drives the cumulative risk of an individual exposed to multiple chemicals. Phthalates are a class of chemicals with ubiquitous exposures in the general population that have the potential to cause ...
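
    Both phthalate abstracts rest on the same arithmetic: each chemical's Hazard Quotient (HQ) is its exposure divided by a reference level, the Hazard Index (HI) is the sum of the HQs, and the MCR is the HI divided by the largest single HQ. A minimal sketch, with made-up exposure and reference values for illustration only:

        # Maximum Cumulative Ratio: MCR = HI / max(HQ). MCR near 1 means one
        # chemical dominates the cumulative risk; MCR near the number of
        # chemicals means risk is spread evenly. Numbers are illustrative.
        exposure = {"DEHP": 0.8, "DBP": 0.2, "DEP": 0.1}   # hypothetical doses
        reference = {"DEHP": 2.0, "DBP": 1.0, "DEP": 5.0}  # hypothetical RfDs

        hq = {c: exposure[c] / reference[c] for c in exposure}
        hi = sum(hq.values())
        mcr = hi / max(hq.values())
        print(f"HI={hi:.2f}, MCR={mcr:.2f}")  # HI=0.62, MCR=1.55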

  6. Twitter earthquake detection: earthquake monitoring in a social world

    Directory of Open Access Journals (Sweden)

    Daniel C. Bowden

    2011-06-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average/long-term-average (STA/LTA) algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
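
    The STA/LTA trigger named above is standard in seismology and carries over directly to a tweet-rate series. A minimal sketch, assuming a per-minute tweet count series; the window lengths and threshold are illustrative, not the paper's tuning values.

        # STA/LTA detector on a tweet-frequency time series: trigger when the
        # short-term average rate jumps well above the long-term average.
        import numpy as np

        def sta_lta_triggers(counts, sta_win=2, lta_win=60, threshold=5.0):
            """counts: tweets per minute. Returns indices where STA/LTA > threshold."""
            triggers = []
            for i in range(lta_win, len(counts)):
                sta = counts[i - sta_win:i].mean()
                lta = counts[i - lta_win:i].mean() + 1e-9  # avoid divide-by-zero
                if sta / lta > threshold:
                    triggers.append(i)
            return triggers

        rng = np.random.default_rng(1)
        counts = rng.poisson(3.0, 600).astype(float)  # background chatter
        counts[300:305] += 80.0                       # burst after a felt event
        print(sta_lta_triggers(counts))               # indices just after 300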

  7. The effects of spatially varying earthquake impacts on mood and anxiety symptom treatments among long-term Christchurch residents following the 2010/11 Canterbury earthquakes, New Zealand.

    Science.gov (United States)

    Hogg, Daniel; Kingham, Simon; Wilson, Thomas M; Ardagh, Michael

    2016-09-01

    This study investigates the effects of disruptions to different community environments, community resilience and cumulated felt earthquake intensities on yearly mood and anxiety symptom treatments from the New Zealand Ministry of Health's administrative databases between September 2009 and August 2012. The sample includes 172,284 long-term residents from different Christchurch communities. Living in a better physical environment was associated with lower mood and anxiety treatment rates after the beginning of the Canterbury earthquake sequence whereas an inverse effect could be found for social community environment and community resilience. These results may be confounded by pre-existing patterns, as well as intensified treatment-seeking behaviour and intervention programmes in severely affected areas. Nevertheless, the findings indicate that adverse mental health outcomes can be found in communities with worse physical but stronger social environments or community resilience post-disaster. Also, they do not necessarily follow felt intensities since cumulative earthquake intensity did not show a significant effect. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Evidence for Ancient Mesoamerican Earthquakes

    Science.gov (United States)

    Kovach, R. L.; Garcia, B.

    2001-12-01

    Evidence for past earthquake damage at Mesoamerican ruins is often overlooked because of the invasive effects of tropical vegetation and is usually not considered as a causal factor when restoration and reconstruction of many archaeological sites are undertaken. Yet the proximity of many ruins to zones of seismic activity would argue otherwise. Clues as to the types of damage which should be sought were offered in September 1999 when the M = 7.5 Oaxaca earthquake struck the ruins of Monte Alban, Mexico, where archaeological renovations were underway. More than 20 structures were damaged, 5 of them seriously. Damage features noted were walls out of plumb, fractures in walls, floors, basal platforms and tableros, toppling of columns, and deformation, settling and tumbling of walls. A Modified Mercalli Intensity of VII (ground accelerations 18-34 % g) occurred at the site. Within the diffuse landward extension of the Caribbean plate boundary zone, M = 7+ earthquakes occur with repeat times of hundreds of years, arguing that many Maya sites were subjected to earthquakes. Damage to re-erected and reinforced stelae, walls, and buildings was witnessed at Quirigua, Guatemala, during an expedition underway when the 1976 M = 7.5 Guatemala earthquake on the Motagua fault struck. Excavations also revealed evidence (domestic pottery vessels and the skeleton of a child crushed under fallen walls) of an ancient earthquake occurring about the time of the demise and abandonment of Quirigua in the late 9th century. Striking evidence for sudden earthquake building collapse at the end of the Mayan Classic Period ~A.D. 889 was found at Benque Viejo (Xunantunich), Belize, located 210 km north of Quirigua. It is argued that a M = 7.5 to 7.9 earthquake at the end of the Maya Classic period centered in the vicinity of the Chixoy-Polochic and Motagua fault zones could have produced the contemporaneous earthquake damage to the above sites. As a consequence this earthquake may have accelerated the

  9. Cumulative stress and autonomic dysregulation in a community sample.

    Science.gov (United States)

    Lampert, Rachel; Tuit, Keri; Hong, Kwang-Ik; Donovan, Theresa; Lee, Forrester; Sinha, Rajita

    2016-05-01

    Whether cumulative stress, including both chronic stress and adverse life events, is associated with decreased heart rate variability (HRV), a non-invasive measure of autonomic status which predicts poor cardiovascular outcomes, is unknown. Healthy community-dwelling volunteers (N = 157, mean age 29 years) participated in the Cumulative Stress/Adversity Interview (CAI), a 140-item event interview measuring cumulative adversity including major life events, life trauma, recent life events and chronic stressors, and underwent 24-h ambulatory ECG monitoring. HRV was analyzed in the frequency domain and the standard deviation of NN intervals (SDNN) calculated. Initial simple regression analyses revealed that total cumulative stress score, chronic stressors and cumulative adverse life events (CALE) were all inversely associated with ultra low-frequency (ULF), very low-frequency (VLF) and low-frequency (LF) power and SDNN (all p < .05), with CALE accounting for additional appreciable variance. For VLF and LF, both total cumulative stress and chronic stress significantly contributed to the variance alone but were no longer significant after adjusting for race and health behaviors. In summary, total cumulative stress and its components of adverse life events and chronic stress were associated with decreased cardiac autonomic function as measured by HRV. Findings suggest one potential mechanism by which stress may exert adverse effects on mortality in healthy individuals. Primary preventive strategies including stress management may prove beneficial.
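
    SDNN, the time-domain HRV measure used above, is simply the standard deviation of the normal-to-normal interbeat intervals; the band powers come from spectral analysis of the same series. A minimal sketch on synthetic NN intervals, with an assumed 4 Hz resampling rate (real HRV pipelines are more careful about artifact rejection and interpolation):

        # SDNN and a rough low-frequency (0.04-0.15 Hz) band power from
        # synthetic NN (normal-to-normal) interbeat intervals.
        import numpy as np

        rng = np.random.default_rng(2)
        nn_ms = 800 + 50 * rng.standard_normal(5000)    # synthetic NN intervals (ms)

        sdnn = nn_ms.std(ddof=1)
        print(f"SDNN = {sdnn:.1f} ms")                  # ~50 ms for this series

        fs = 4.0                                        # Hz, assumed resampling rate
        t = np.cumsum(nn_ms) / 1000.0                   # beat times in seconds
        grid = np.arange(t[0], t[-1], 1.0 / fs)
        nn_even = np.interp(grid, t, nn_ms)             # evenly resampled series
        freqs = np.fft.rfftfreq(len(nn_even), 1.0 / fs)
        psd = np.abs(np.fft.rfft(nn_even - nn_even.mean())) ** 2 / (fs * len(nn_even))
        lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * (freqs[1] - freqs[0])
        print(f"LF power ~ {lf:.0f} ms^2")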

  10. Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.

    Science.gov (United States)

    Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi

    2012-01-01

    Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding the emergency to the general public and health professionals on a normal basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.

  11. Case Study of Local Damage Indicators for a 2-Bay, 6-Storey RC-Frame subject to Earthquakes

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Nielsen, Søren R. K.; Kirkegaard, Poul Henning

    1997-01-01

    A simulation study of a 2-bay, 6-storey model test RC-frame (scale 1:5) subject to earthquakes is considered in this paper. Based on measured (simulated) storey accelerations and ground surface accelerations, several indices for the storey damage, including interstorey drift, flexural damage ratios, normalized cumulative dissipated energy, Park and Ang's indicator, a low-cycle fatigue damage index and a recently proposed local softening damage index estimated from time-varying eigenfrequencies, are used to evaluate the damage state of the structure after the earthquake. Storey displacements are obtained...
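
    Several of the indices listed are simple functionals of the response history. As one example, the Park-Ang indicator combines peak deformation with cumulative hysteretic energy; a minimal sketch, with illustrative member capacities rather than the study's calibration values:

        # Park-Ang damage index: D = d_max/d_u + beta * E_h / (F_y * d_u),
        # combining peak deformation with cumulative hysteretic energy.
        # D >= 1 is conventionally taken as collapse-level damage.
        def park_ang(d_max, d_u, e_hyst, f_y, beta=0.1):
            return d_max / d_u + beta * e_hyst / (f_y * d_u)

        # Illustrative storey values: 60 mm peak drift, 100 mm ultimate
        # capacity, 8 kNm dissipated energy, 50 kN yield strength.
        print(f"D = {park_ang(0.060, 0.100, 8.0, 50.0):.2f}")  # D = 0.76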

  13. Do earthquakes exhibit self-organized criticality?

    International Nuclear Information System (INIS)

    Yang Xiaosong; Ma Jin; Du Shuming

    2004-01-01

    If earthquakes are phenomena of self-organized criticality (SOC), the statistical characteristics of the earthquake time series should be invariant after the sequence of events in an earthquake catalog is randomly rearranged. In this Letter we argue that earthquakes are unlikely to be phenomena of SOC, because our analysis of the Southern California Earthquake Catalog shows that the first-return-time probability P_M(T) is appreciably changed after the time series is rearranged. This suggests that the SOC theory should not be used to oppose the efforts of earthquake prediction.
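
    The invariance test described above is straightforward to reproduce: compare the waiting times between successive events above a magnitude threshold with the same statistic after randomly rearranging the catalog. A minimal sketch on a synthetic, memoryless catalog (where the two should agree, unlike the Southern California result reported in the Letter):

        # Waiting times between successive events with magnitude >= M, for the
        # catalog order and for a randomly rearranged magnitude sequence.
        import numpy as np

        rng = np.random.default_rng(3)
        times = np.sort(rng.uniform(0.0, 1000.0, 5000))          # origin times
        mags = 2.0 + rng.exponential(1.0 / np.log(10.0), 5000)   # G-R with b ~ 1

        def return_times(times, mags, m_thresh):
            return np.diff(times[mags >= m_thresh])

        orig = return_times(times, mags, 4.0)
        perm = return_times(times, rng.permutation(mags), 4.0)
        # Similar here by construction; Yang et al. found a clear difference
        # for the real Southern California catalog.
        print(orig.mean(), perm.mean())
        print(np.median(orig), np.median(perm))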

  14. Earthquake, GIS and multimedia. The 1883 Casamicciola earthquake

    Directory of Open Access Journals (Sweden)

    M. Rebuffat

    1995-06-01

    A series of multimedia monographs concerning the main seismic events that have affected the Italian territory is in the process of being produced for the Documental Integrated Multimedia Project (DIMP) started by the Italian National Seismic Survey (NSS). The purpose of the project is to reconstruct the historical record of earthquakes and promote earthquake public education. Producing the monographs, developed in ARC INFO and working in UNIX, involved designing a special filing and management methodology to integrate heterogeneous information (images, papers, cartographies, etc.). This paper describes the possibilities of a GIS (Geographic Information System) in the filing and management of documental information. As an example we present the first monograph, on the 1883 Casamicciola earthquake on the island of Ischia (Campania, Italy). This earthquake is particularly interesting for the following reasons: 1) its historical-cultural context (the first destructive seismic event after the unification of Italy); 2) its features (a volcanic earthquake); 3) the socioeconomic consequences caused at such an important seaside resort.

  15. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
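
    The energy-magnitude step here is a standard change of variables: if the maximum of energy is Fréchet distributed and magnitude is proportional to the logarithm of energy, the maximum magnitude is Gumbel. A sketch of the algebra, using a generic energy-magnitude relation log10 E = a + bM, where a and b are placeholders rather than the paper's constants:

        % Fréchet maximum for energy, generic energy-magnitude relation:
        F_E(x) = \exp\!\left[-(x/\sigma)^{-\alpha}\right], \qquad \log_{10} E = a + bM .
        % Substituting E = 10^{a+bm} into F_E gives
        F_M(m) = \Pr(M \le m)
               = \exp\!\left[-\left(10^{a+bm}/\sigma\right)^{-\alpha}\right]
               = \exp\!\left[-e^{-(m-\mu)/\beta}\right],
        % i.e. a Gumbel law with
        \beta = \frac{1}{\alpha b \ln 10}, \qquad
        \mu = \frac{\log_{10}\sigma - a}{b}.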

  16. Cumulants in perturbation expansions for non-equilibrium field theory

    International Nuclear Information System (INIS)

    Fauser, R.

    1995-11-01

    The formulation of perturbation expansions for a quantum field theory of strongly interacting systems in a general non-equilibrium state is discussed. Non-vanishing initial correlations are included in the formulation of the perturbation expansion in terms of cumulants. The cumulants are shown to be the suitable candidate for summing up the perturbation expansion. Also a linked-cluster theorem for the perturbation series with cumulants is presented. Finally a generating functional of the perturbation series with initial correlations is studied. We apply the methods to a simple model of a fermion-boson system. (orig.)

  17. Estimating a population cumulative incidence under calendar time trends

    DEFF Research Database (Denmark)

    Hansen, Stefan N; Overgaard, Morten; Andersen, Per K

    2017-01-01

    BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. It is common practice to apply the Kaplan-Meier or Aalen-Johansen estimator to the total sample and report either the estimated cumulative incidence curve or just a single point on the curve as a description of the disease risk. METHODS: We argue that, whenever the disease or disorder of interest is influenced
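
    The Kaplan-Meier estimator mentioned above has a compact implementation; a minimal sketch for right-censored onset ages with a single event type and synthetic data (the Aalen-Johansen estimator generalizes this to competing risks):

        # Kaplan-Meier estimate of S(t); with no competing risks, the
        # cumulative incidence is 1 - S(t).
        import numpy as np

        def kaplan_meier(time, event):
            """time: follow-up times; event: 1 = onset observed, 0 = censored.
            Returns event times and the survival curve S(t) at those times."""
            time, event = np.asarray(time, float), np.asarray(event, int)
            t_unique = np.unique(time[event == 1])
            surv, curve = 1.0, []
            for t in t_unique:
                n_risk = np.sum(time >= t)                 # still under observation
                d = np.sum((time == t) & (event == 1))     # onsets at time t
                surv *= 1.0 - d / n_risk
                curve.append(surv)
            return t_unique, np.array(curve)

        rng = np.random.default_rng(4)
        onset = rng.exponential(40.0, 200)                 # true onset ages
        censor = rng.uniform(10.0, 60.0, 200)              # administrative censoring
        time = np.minimum(onset, censor)
        event = (onset <= censor).astype(int)
        t, s = kaplan_meier(time, event)
        print(f"estimated cumulative incidence by age 30: {1.0 - s[t <= 30.0][-1]:.2f}")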

  18. Monitoring of the future strong Vrancea events by using the CN formal earthquake prediction algorithm

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Novikova, O.V.; Panza, G.F.; Radulian, M.

    2003-06-01

    The preparation process of the strong subcrustal events originating in the Vrancea region, Romania, is monitored using an intermediate-term medium-range earthquake prediction method - the CN algorithm (Keilis-Borok and Rotwain, 1990). We present the results of the monitoring of the preparation of future strong earthquakes for the time interval from January 1, 1994 (1994.1.1), to January 1, 2003 (2003.1.1), using the updated catalogue of the Romanian local network. The database considered for the CN monitoring of the preparation of future strong earthquakes in Vrancea covers the period from 1966.3.1 to 2003.1.1 and the geographical rectangle 44.8 deg - 48.4 deg N, 25.0 deg - 28.0 deg E. The algorithm correctly identifies, by retrospective prediction, the times of increased probability (TIPs) for all three strong earthquakes (M0=6.4) that occurred in Vrancea during this period. The cumulated duration of the TIPs represents 26.5% of the total period of time considered (1966.3.1-2003.1.1). The monitoring of current seismicity using the algorithm CN has been carried out since 1994. No strong earthquakes occurred from 1994.1.1 to 2003.1.1, but the CN declared an extended false alarm from 1999.5.1 to 2000.11.1. No alarm is currently declared in the region (as of January 1, 2003), as can be seen from the TIPs diagram shown. (author)

  19. Laboratory generated M -6 earthquakes

    Science.gov (United States)

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.

  20. The music of earthquakes and Earthquake Quartet #1

    Science.gov (United States)

    Michael, Andrew J.

    2013-01-01

    Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.

  1. Toward real-time regional earthquake simulation of Taiwan earthquakes

    Science.gov (United States)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  2. Book review: Earthquakes and water

    Science.gov (United States)

    Bekins, Barbara A.

    2012-01-01

    It is really nice to see assembled in one place a discussion of the documented and hypothesized hydrologic effects of earthquakes. The book is divided into chapters focusing on particular hydrologic phenomena including liquefaction, mud volcanism, stream discharge increases, groundwater level, temperature and chemical changes, and geyser period changes. These hydrologic effects are inherently fascinating, and the large number of relevant publications in the past decade makes this summary a useful milepost. The book also covers hydrologic precursors and earthquake triggering by pore pressure. A natural need to limit the topics covered resulted in the omission of tsunamis and the vast literature on the role of fluids and pore pressure in frictional strength of faults. Regardless of whether research on earthquake-triggered hydrologic effects ultimately provides insight into the physics of earthquakes, the text provides welcome common ground for interdisciplinary collaborations between hydrologists and seismologists. Such collaborations continue to be crucial for investigating hypotheses about the role of fluids in earthquakes and slow slip. 

  3. Cumulative Environmental Impacts: Science and Policy to Protect Communities.

    Science.gov (United States)

    Solomon, Gina M; Morello-Frosch, Rachel; Zeise, Lauren; Faust, John B

    2016-01-01

    Many communities are located near multiple sources of pollution, including current and former industrial sites, major roadways, and agricultural operations. Populations in such locations are predominantly low-income, with a large percentage of minorities and non-English speakers. These communities face challenges that can affect the health of their residents, including limited access to health care, a shortage of grocery stores, poor housing quality, and a lack of parks and open spaces. Environmental exposures may interact with social stressors, thereby worsening health outcomes. Age, genetic characteristics, and preexisting health conditions increase the risk of adverse health effects from exposure to pollutants. There are existing approaches for characterizing cumulative exposures, cumulative risks, and cumulative health impacts. Although such approaches have merit, they also have significant constraints. New developments in exposure monitoring, mapping, toxicology, and epidemiology, especially when informed by community participation, have the potential to advance the science on cumulative impacts and to improve decision making.

  4. Pesticide Cumulative Risk Assessment: Framework for Screening Analysis

    Science.gov (United States)

    This document provides guidance on how to screen groups of pesticides for cumulative evaluation using a two-step approach: begin with evaluation of available toxicological information and, if necessary, follow up with a risk-based screening approach.

  5. Online Scheduling in Manufacturing A Cumulative Delay Approach

    CERN Document Server

    Suwa, Haruhiko

    2013-01-01

    Online scheduling is recognized as the crucial decision-making process of production control at the phase of "being in production", according to the released shop floor schedule. Online scheduling can also be considered one of the key enablers of prompt capable-to-promise as well as available-to-promise to customers, along with reduced production lead times under recent globalized competitive markets. Online Scheduling in Manufacturing introduces new approaches to online scheduling based on a concept of cumulative delay. The cumulative delay is regarded as consolidated information about uncertainties under a dynamic manufacturing environment and can be collected constantly, without much effort, at any point in time during schedule execution. In this approach, the cumulative delay of the schedule serves as the criterion for deciding whether or not a schedule revision is carried out. The cumulative delay approach to triggering schedule revisions has the following capabilities for the ...

  6. Considering Environmental and Occupational Stressors in Cumulative Risk Assessments

    Science.gov (United States)

    While definitions vary across the global scientific community, cumulative risk assessments (CRAs) typically are described as exhibiting a population focus and analyzing the combined risks posed by multiple stressors. CRAs also may consider risk management alternatives as an anal...

  7. Peer tutors as learning and teaching partners: a cumulative ...

    African Journals Online (AJOL)

    ... paper explores the kinds of development in tutors' thinking and action that are possible when training and development is theoretically informed, coherent, and oriented towards improving practice. Keywords: academic development, academic literacies, cumulative learning, higher education, peer tutoring, writing centres.

  8. CTD Information Guide. Preventing Cumulative Trauma Disorders in the Workplace

    National Research Council Canada - National Science Library

    1992-01-01

    The purpose of this report is to provide Army occupational safety and health (OSH) professionals with a primer that explains the basic principles of ergonomic-hazard recognition for common cumulative trauma disorders...

  9. Cumulative radiation exposure in children with cystic fibrosis.

    LENUS (Irish Health Repository)

    O'Reilly, R

    2010-02-01

    This retrospective study calculated the cumulative radiation dose for children with cystic fibrosis (CF) attending a tertiary CF centre. Information on 77 children, with a mean age of 9.5 years, a follow-up time of 658 person-years and 1757 studies including 1485 chest radiographs, 215 abdominal radiographs and 57 computed tomography (CT) scans, of which 51 were thoracic CT scans, was analysed. The average cumulative radiation dose was 6.2 (0.04-25) mSv per CF patient. Cumulative radiation dose increased with increasing age and number of CT scans, and was greater in children who presented with meconium ileus. No correlation was identified between cumulative radiation dose and either lung function or patient microbiology cultures. Radiation carries a risk of malignancy, and children are particularly susceptible. Every effort must be made to avoid unnecessary radiation exposure in these patients, whose life expectancy is increasing.

  10. Cumulative query method for influenza surveillance using search engine data.

    Science.gov (United States)

    Seo, Dong-Woo; Jo, Min-Woo; Sohn, Chang Hwan; Shin, Soo-Yong; Lee, JaeHo; Yu, Maengsoo; Kim, Won Young; Lim, Kyoung Soo; Lee, Sang-Il

    2014-12-16

    Internet search queries have become an important data source in syndromic surveillance systems. However, there is currently no syndromic surveillance system using Internet search query data in South Korea. The objective of this study was to examine correlations between our cumulative query method and national influenza surveillance data. Our study was based on the local search engine Daum (approximately 25% market share) and influenza-like illness (ILI) data from the Korea Centers for Disease Control and Prevention. A quota sampling survey was conducted with 200 participants to obtain popular queries. We divided the study period into two sets: Set 1 (the 2009/10 epidemiological year for development set 1 and 2010/11 for validation set 1) and Set 2 (2010/11 for development set 2 and 2011/12 for validation set 2). Pearson's correlation coefficients were calculated between the Daum data and the ILI data for the development sets. We selected the combined queries for which the correlation coefficients were .7 or higher and listed them in descending order. Then, we created cumulative query methods, with n representing the number of cumulative combined queries in descending order of the correlation coefficient. In validation set 1, 13 cumulative query methods were applied, and 8 had higher correlation coefficients (min=.916, max=.943) than that of the highest single combined query. Further, 11 of 13 cumulative query methods had an r value of ≥.7, but only 4 of 13 combined queries had an r value of ≥.7. In validation set 2, 8 of 15 cumulative query methods showed higher correlation coefficients (min=.975, max=.987) than that of the highest single combined query. All 15 cumulative query methods had an r value of ≥.7, but only 6 of 15 combined queries had an r value of ≥.7. The cumulative query method showed relatively higher correlation with national influenza surveillance data than combined queries in the development and validation sets.
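
    The construction is simple to express in code: rank the candidate query series by their correlation with the ILI series, then correlate the running sum of the top-n series with ILI for each n. A minimal sketch with synthetic series standing in for the Daum query counts:

        # Cumulative query method: sum the n best-correlated query series
        # (ranked by Pearson r with ILI) and correlate each cumulative sum
        # with the ILI series. Synthetic data for illustration only.
        import numpy as np

        rng = np.random.default_rng(5)
        weeks = 104
        ili = np.sin(np.linspace(0, 4 * np.pi, weeks)) + 0.3 * rng.standard_normal(weeks)
        queries = np.array([ili * rng.uniform(0.5, 1.5)
                            + rng.uniform(0.2, 1.0) * rng.standard_normal(weeks)
                            for _ in range(20)])

        def pearson(a, b):
            return np.corrcoef(a, b)[0, 1]

        rank = sorted(range(len(queries)), key=lambda i: -pearson(queries[i], ili))
        for n in (1, 3, 5, 10):
            cum = queries[rank[:n]].sum(axis=0)
            print(f"n={n:2d}  r={pearson(cum, ili):.3f}")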

  11. Steps and pips in the history of the cumulative recorder.

    OpenAIRE

    Lattal, Kennon A

    2004-01-01

    From its inception in the 1930s until very recent times, the cumulative recorder was the most widely used measurement instrument in the experimental analysis of behavior. It was an essential instrument in the discovery and analysis of schedules of reinforcement, providing the first real-time analysis of operant response rates and patterns. This review traces the evolution of the cumulative recorder from Skinner's early modified kymographs through various models developed by Skinner and his co...

  12. Mapping Cumulative Impacts of Human Activities on Marine Ecosystems

    OpenAIRE

    , Seaplan

    2018-01-01

    Given the diversity of human uses and natural resources that converge in coastal waters, the potential independent and cumulative impacts of those uses on marine ecosystems are important to consider during ocean planning. This study was designed to support the development and implementation of the 2009 Massachusetts Ocean Management Plan. Its goal was to estimate and visualize the cumulative impacts of human activities on coastal and marine ecosystems in the state and federal waters off of Ma...

  13. Global Earthquake Hazard Frequency and Distribution

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Earthquake Hazard Frequency and Distribution is a 2.5 minute grid utilizing Advanced National Seismic System (ANSS) Earthquake Catalog data of actual...

  14. Unbonded Prestressed Columns for Earthquake Resistance

    Science.gov (United States)

    2012-05-01

    Modern structures are able to survive significant shaking caused by earthquakes. By implementing unbonded post-tensioned tendons in bridge columns, the damage caused by an earthquake can be significantly lower than that of a standard reinforced concr...

  15. Extreme value distribution of earthquake magnitude

    Science.gov (United States)

    Zi, Jun Gan; Tung, C. C.

    1983-07-01

    Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
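
    As a toy version of the fitting step described above, Gumbel parameters for annual maximum magnitudes can be estimated by least squares on the empirical distribution function. A minimal sketch on synthetic maxima; the paper's actual distribution has a special form with its own iterative scheme, which is not reproduced here.

        # Least-squares fit of a Gumbel law to annual maximum magnitudes:
        # F(m) = exp(-exp(-(m - mu)/beta)); linearize via
        # -ln(-ln F) = (m - mu)/beta and regress on plotting positions.
        import numpy as np

        rng = np.random.default_rng(6)
        m_max = rng.gumbel(loc=6.0, scale=0.4, size=77)     # synthetic annual maxima

        m_sorted = np.sort(m_max)
        n = len(m_sorted)
        F = (np.arange(1, n + 1) - 0.44) / (n + 0.12)       # Gringorten positions
        y = -np.log(-np.log(F))

        beta, mu = np.polyfit(y, m_sorted, 1)               # m = beta*y + mu
        print(f"mu = {mu:.2f}, beta = {beta:.2f}")          # ~6.0 and ~0.4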

  16. Calculation of displacements on fractures intersecting canisters induced by earthquakes: Aberg, Beberg and Ceberg examples

    Energy Technology Data Exchange (ETDEWEB)

    LaPointe, P.R.; Cladouhos, T. [Golder Associates Inc. (Sweden)]; Follin, S. [Golder Grundteknik KB (Sweden)]

    1999-01-01

    The world-wide earthquake source parameter database, upon which the relations between surface rupture length, subsurface fault displacement and fault width (depth for vertical faults) are based, is assumed to be representative of Swedish earthquakes. Results of the calculations are presented in several ways. A canister is considered to be damaged or to have failed if a fracture intersecting the canister has an instantaneous or cumulative slip greater than 0.1 m. Canisters may fail during a single earthquake, or due to the cumulative effects of multiple smaller earthquakes. Failure percentages for single earthquakes for a 100,000-year period range from a high of 0.59% for Aberg to a low of 0.03% for Ceberg. Failure percentages for cumulative effects alone vary from 0.056% for Aberg to 0.004% for Ceberg. Additional investigation of the single earthquakes that cause unacceptable slippage suggests that their probability of occurrence over a 100,000-year time period is very low, but that their consequences are more severe in that they tend to damage multiple canisters. When a damaging earthquake occurs, an average of 0.4% to 1.8% of the canisters experience induced slips greater than 0.1 m, the higher number representative of Aberg and the lower value representative of Ceberg. Although earthquakes were simulated at distances over 100 km from the canister positions, single earthquakes that produced displacements greater than 0.1 m were confined to the immediate vicinity of the repository. A plot for the Ceberg simulations shows that over 95% of the single, damaging earthquakes are within 1 km of the canister that they damage, and 99% are within 2.5 km. The maximum distance for the simulations was approximately 31 km. This suggests that the vast majority of faults that might potentially produce damaging earthquakes lie within a few kilometers of the repository. The simulations suggest that faults tens or hundreds of kilometers distant from the canisters are very unlikely to produce damage due to single earthquake events. 39 refs, 36
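
    The failure criterion used in these simulations is easy to state in code: a canister fails once any intersecting fracture slips more than 0.1 m, either in one earthquake or cumulatively over many. A toy bookkeeping sketch with made-up per-event slips:

        # Canister failure bookkeeping: fail when any intersecting fracture's
        # instantaneous or cumulative slip exceeds 0.1 m. Slips are invented.
        SLIP_LIMIT_M = 0.1

        events = [0.02, 0.015, 0.04, 0.03, 0.012]   # slip per earthquake (m)
        cumulative = 0.0
        for i, slip in enumerate(events, 1):
            cumulative += slip
            if slip > SLIP_LIMIT_M or cumulative > SLIP_LIMIT_M:
                print(f"failed at event {i} (cumulative {cumulative:.3f} m)")
                break
        else:
            print(f"intact; cumulative slip {cumulative:.3f} m")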

  17. PRECURSORS OF EARTHQUAKES: VLF SIGNALS-IONOSPHERE RELATION

    Directory of Open Access Journals (Sweden)

    Mustafa ULAS

    2013-01-01

    Many people die because of earthquakes every year. Therefore, it is crucial to predict the time of an earthquake a reasonable time before it happens. This paper presents recent information published in the literature about precursors of earthquakes. The relationships between earthquakes and the ionosphere are examined in order to guide new research towards novel prediction methods.

  18. EARTHQUAKE RESEARCH PROBLEMS OF NUCLEAR POWER GENERATORS

    Energy Technology Data Exchange (ETDEWEB)

    Housner, G. W.; Hudson, D. E.

    1963-10-15

    Earthquake problems associated with the construction of nuclear power generators require a more extensive and a more precise knowledge of earthquake characteristics and the dynamic behavior of structures than was considered necessary for ordinary buildings. Economic considerations indicate the desirability of additional research on the problems of earthquakes and nuclear reactors. The nature of these earthquake-resistant design problems is discussed and programs of research are recommended. (auth)

  19. Fault geometry and earthquake mechanics

    Directory of Open Access Journals (Sweden)

    D. J. Andrews

    1994-06-01

    Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: 1) regeneration of stress irregularities in an earthquake; 2) the concentration of stress drop in an earthquake into asperities; 3) starting and stopping of earthquake slip at fault junctions; and 4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
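
    The closing estimate can be checked with round numbers; a quick back-of-the-envelope, assuming a crustal density of about 2700 kg/m^3 and a shear modulus of about 30 GPa (typical textbook values, not taken from the paper):

        P \approx \rho g h \approx 2700 \times 9.8 \times 10^{4}\ \mathrm{Pa}
          \approx 0.26\ \mathrm{GPa},
        \qquad
        \frac{P}{\mu} \approx \frac{0.26\ \mathrm{GPa}}{30\ \mathrm{GPa}} \approx 0.01 .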

  20. Historical earthquake investigations in Greece

    Directory of Open Access Journals (Sweden)

    K. Makropoulos

    2004-06-01

    The active tectonics of the area of Greece and its seismic activity have always been present in the country's history. Many researchers, tempted to work on Greek historical earthquakes, have realized that this is a task not easily fulfilled. The existing catalogues of strong historical earthquakes are useful tools for performing general seismic hazard assessment (SHA) studies. However, a variety of supporting datasets, non-uniformly distributed in space and time, need to be further investigated. In the present paper, a review of historical earthquake studies in Greece is attempted. The seismic history of the country is divided into four main periods. In each one of them, characteristic examples, studies and approaches are presented.

  1. Preliminary Results on Earthquake Recurrence Intervals, Rupture Segmentation, and Potential Earthquake Moment Magnitudes along the Tahoe-Sierra Frontal Fault Zone, Lake Tahoe, California

    Science.gov (United States)

    Howle, J.; Bawden, G. W.; Schweickert, R. A.; Hunter, L. E.; Rose, R.

    2012-12-01

    Utilizing high-resolution bare-earth LiDAR topography, field observations, and earlier results of Howle et al. (2012), we estimate latest Pleistocene/Holocene earthquake-recurrence intervals, propose scenarios for earthquake-rupture segmentation, and estimate potential earthquake moment magnitudes for the Tahoe-Sierra frontal fault zone (TSFFZ), west of Lake Tahoe, California. We have developed a new technique to estimate the vertical separation for the most recent and the previous ground-rupturing earthquakes at five sites along the Echo Peak and Mt. Tallac segments of the TSFFZ. At these sites are fault scarps with two bevels separated by an inflection point (compound fault scarps), indicating that the cumulative vertical separation (VS) across the scarp resulted from two events. This technique, modified from the modeling methods of Howle et al. (2012), uses the far-field plunge of the best-fit footwall vector and the fault-scarp morphology from high-resolution LiDAR profiles to estimate the per-event VS. From these data, we conclude that the adjacent and overlapping Echo Peak and Mt. Tallac segments have ruptured coseismically twice during the Holocene. The right-stepping, en echelon range-front segments of the TSFFZ show progressively greater VS rates and shorter earthquake-recurrence intervals from southeast to northwest. Our preliminary estimates suggest latest Pleistocene/Holocene earthquake-recurrence intervals of 4.8±0.9×10^3 years for a coseismic rupture of the Echo Peak and Mt. Tallac segments, located at the southeastern end of the TSFFZ. For the Rubicon Peak segment, northwest of the Echo Peak and Mt. Tallac segments, our preliminary estimate of the maximum earthquake-recurrence interval is 2.8±1.0×10^3 years, based on data from two sites. The correspondence between high VS rates and short recurrence intervals suggests that earthquake sequences along the TSFFZ may initiate in the northwest part of the zone and then occur to the southeast with a lower

  2. Fault failure with moderate earthquakes

    Science.gov (United States)

    Johnston, M. J. S.; Linde, A. T.; Gladwin, M. T.; Borcherdt, R. D.

    1987-12-01

    High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10^-8), with borehole dilatometers (resolution 10^-10) and a 3-component borehole strainmeter (resolution 10^-9). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure.

  3. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).

  4. 13 CFR 120.174 - Earthquake hazards.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Earthquake hazards. 120.174... Applying to All Business Loans Requirements Imposed Under Other Laws and Orders § 120.174 Earthquake..., the construction must conform with the “National Earthquake Hazards Reduction Program (“NEHRP...

  5. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  6. Earthquake Education in Prime Time

    Science.gov (United States)

    de Groot, R.; Abbott, P.; Benthien, M.

    2004-12-01

    Since 2001, the Southern California Earthquake Center (SCEC) has collaborated on several video production projects that feature important topics related to earthquake science, engineering, and preparedness. These projects have also fostered many fruitful and sustained partnerships with a variety of organizations that have a stake in hazard education and preparedness. The Seismic Sleuths educational video first appeared in the spring season 2001 on Discovery Channel's Assignment Discovery. Seismic Sleuths is based on a highly successful curriculum package developed jointly by the American Geophysical Union and The Department of Homeland Security Federal Emergency Management Agency. The California Earthquake Authority (CEA) and the Institute for Business and Home Safety supported the video project. Summer Productions, a company with a reputation for quality science programming, produced the Seismic Sleuths program in close partnership with scientists, engineers, and preparedness experts. The program has aired on the National Geographic Channel as recently as Fall 2004. Currently, SCEC is collaborating with Pat Abbott, a geology professor at San Diego State University (SDSU) on the video project Written In Stone: Earthquake Country - Los Angeles. Partners on this project include the California Seismic Safety Commission, SDSU, SCEC, CEA, and the Insurance Information Network of California. This video incorporates live-action demonstrations, vivid animations, and a compelling host (Abbott) to tell the story about earthquakes in the Los Angeles region. The Written in Stone team has also developed a comprehensive educator package that includes the video, maps, lesson plans, and other supporting materials. We will present the process that facilitates the creation of visually effective, factually accurate, and entertaining video programs. We acknowledge the need to have a broad understanding of the literature related to communication, media studies, science education, and

  7. Radon as an earthquake precursor

    International Nuclear Information System (INIS)

    Planinic, J.; Radolic, V.; Vukovic, B.

    2004-01-01

    Radon concentrations in soil gas were continuously measured by LR-115 nuclear track detectors during a four-year period. Seismic activities, as well as barometric pressure, rainfall and air temperature, were also observed. The influence of meteorological parameters on temporal radon variations was investigated, and a respective multiple-regression equation was derived. Earthquakes with magnitude ≥3 at epicentral distances ≤200 km were recognized by means of radon anomalies. Empirical equations between earthquake magnitude, epicentral distance and precursor time were examined, and the respective constants were determined.

  8. Radon as an earthquake precursor

    Energy Technology Data Exchange (ETDEWEB)

    Planinic, J. E-mail: planinic@pedos.hr; Radolic, V.; Vukovic, B

    2004-09-11

    Radon concentrations in soil gas were continuously measured by LR-115 nuclear track detectors during a four-year period. Seismic activities, as well as barometric pressure, rainfall and air temperature, were also observed. The influence of meteorological parameters on temporal radon variations was investigated, and a respective multiple-regression equation was derived. Earthquakes with magnitude ≥3 at epicentral distances ≤200 km were recognized by means of radon anomalies. Empirical equations between earthquake magnitude, epicentral distance and precursor time were examined, and the respective constants were determined.

  9. Earthquake location in island arcs

    Science.gov (United States)

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  10. Dancing Earthquake Science Assists Recovery from the Christchurch Earthquakes

    Science.gov (United States)

    Egan, Candice J.; Quigley, Mark C.

    2015-01-01

    The 2010-2012 Christchurch (Canterbury) earthquakes in New Zealand caused loss of life and psychological distress in residents throughout the region. In 2011, student dancers of the Hagley Dance Company and dance professionals choreographed the performance "Move: A Seismic Journey" for the Christchurch Body Festival that explored…

  11. A Poisson method application to the assessment of the earthquake hazard in the North Anatolian Fault Zone, Turkey

    Energy Technology Data Exchange (ETDEWEB)

    Türker, Tuğba, E-mail: tturker@ktu.edu.tr [Karadeniz Technical University, Department of Geophysics, Trabzon/Turkey (Turkey); Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr [Ağrı İbrahim Çeçen University, Ağrı/Turkey (Turkey)

    2016-04-18

    The North Anatolian Fault Zone (NAFZ) is one of the most important strike-slip fault zones in the world and lies in a region of very high seismic activity; it has produced very large earthquakes from the past to the present. The aim of this study is to estimate the key parameters of the Gutenberg-Richter relationship (the a and b values) and, taking these parameters into account, to examine the earthquakes of 1900-2015 in 10 different seismic source regions of the NAFZ. Occurrence probabilities and return periods of earthquakes in the fault zone are then estimated for the coming years, and the earthquake hazard of the NAFZ is assessed with the Poisson method. Region 2 (the Marmara Region), where a large earthquake is expected in the coming years, experienced its largest earthquakes only in the historical period; no large earthquake has been observed there in the instrumental period. Two historical earthquakes (1766, M_S = 7.3 and 1897, M_S = 7.0) are included for Region 2. For the 10 seismic source regions, the cumulative number-magnitude relationships are determined and the a and b parameters are estimated from the Gutenberg-Richter equation log N = a - bM. A homogeneous earthquake catalog of M_S ≥ 4.0 events is used for the time period between 1900 and 2015. The catalog was compiled from the International Seismological Centre (ISC) and the Boğaziçi University Kandilli Observatory and Earthquake Research Institute (KOERI): data from 1900 to 1974 were obtained from KOERI and ISC, and data from 1974 to 2015 from KOERI. The probabilities of earthquake occurrence are estimated for the next 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 years in each of the 10 seismic source regions. The highest occurrence probability among the regions is estimated for Tokat-Erzincan (Region 9), at 99%
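
    The workflow sketched in this abstract (fit log N = a - bM, then convert the rate of M ≥ M0 events into Poisson occurrence probabilities) can be illustrated in a few lines. A minimal sketch on a synthetic catalog; every number below is invented, not the NAFZ data:

        import numpy as np

        # Synthetic catalog: magnitudes >= 4.0 with a Gutenberg-Richter b ~ 1.
        rng = np.random.default_rng(0)
        mags = 4.0 + rng.exponential(scale=1.0 / np.log(10), size=2000)

        # Fit log10 N(>=M) = a - b*M on the cumulative counts.
        bins = np.arange(4.0, 7.0, 0.1)
        cum_n = np.array([(mags >= m).sum() for m in bins])
        mask = cum_n > 0
        slope, intercept = np.polyfit(bins[mask], np.log10(cum_n[mask]), 1)
        a, b = intercept, -slope
        print(f"a = {a:.2f}, b = {b:.2f}")

        # Poisson hazard: P(at least one M>=7 event in t years) = 1 - exp(-t/T),
        # where T is the return period implied by the fitted annual rate
        # (a 115-yr catalog span, 1900-2015, is assumed).
        rate_per_yr = 10 ** (a - b * 7.0) / 115.0
        T = 1.0 / rate_per_yr
        for t in (10, 20, 50, 100):
            print(f"P(M>=7 within {t:3d} yr) = {1 - np.exp(-t / T):.2f}")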

  12. A Poisson method application to the assessment of the earthquake hazard in the North Anatolian Fault Zone, Turkey

    International Nuclear Information System (INIS)

    Türker, Tuğba; Bayrak, Yusuf

    2016-01-01

    The North Anatolian Fault Zone (NAFZ) is one of the most important strike-slip fault zones in the world and lies in a region of very high seismic activity; it has produced very large earthquakes from the past to the present. The aim of this study is to estimate the key parameters of the Gutenberg-Richter relationship (the a and b values) and, taking these parameters into account, to examine the earthquakes of 1900-2015 in 10 different seismic source regions of the NAFZ. Occurrence probabilities and return periods of earthquakes in the fault zone are then estimated for the coming years, and the earthquake hazard of the NAFZ is assessed with the Poisson method. Region 2 (the Marmara Region), where a large earthquake is expected in the coming years, experienced its largest earthquakes only in the historical period; no large earthquake has been observed there in the instrumental period. Two historical earthquakes (1766, M_S=7.3 and 1897, M_S=7.0) are included for Region 2. For the 10 seismic source regions, the cumulative number-magnitude relationships are determined and the a and b parameters are estimated from the Gutenberg-Richter equation log N = a - bM. A homogeneous earthquake catalog of M_S ≥ 4.0 events is used for the time period between 1900 and 2015. The catalog was compiled from the International Seismological Centre (ISC) and the Boğaziçi University Kandilli Observatory and Earthquake Research Institute (KOERI): data from 1900 to 1974 were obtained from KOERI and ISC, and data from 1974 to 2015 from KOERI. The probabilities of earthquake occurrence are estimated for the next 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 years in each of the 10 seismic source regions. The highest occurrence probability among the regions is estimated for Tokat-Erzincan (Region 9), at 99%, with an earthquake

  13. Earthquake Warning Performance in Vallejo for the South Napa Earthquake

    Science.gov (United States)

    Wurman, G.; Price, M.

    2014-12-01

    In 2002 and 2003, Seismic Warning Systems, Inc. installed first-generation QuakeGuard™ earthquake warning devices at all eight fire stations in Vallejo, CA. These devices are designed to detect the P-wave of an earthquake and initiate predetermined protective actions if the impending shaking is estimated at approximately Modified Mercalli Intensity V or greater. At the Vallejo fire stations the devices were set up to sound an audio alert over the public address system and to command the equipment bay doors to open. In August 2014, after more than 11 years of operating in the fire stations with no false alarms, the five units that were still in use triggered correctly on the MW 6.0 South Napa earthquake, less than 16 km away. The audio alert sounded in all five stations, providing fire fighters with 1.5 to 2.5 seconds of warning before the arrival of the S-wave, and the equipment bay doors opened in three of the stations. In one station the doors were disconnected from the QuakeGuard device, and another station lost power before the doors opened completely. These problems highlight just a small portion of the complexity associated with realizing actionable earthquake warnings. The issues experienced in this earthquake have already been addressed in subsequent QuakeGuard product generations, with downstream connection monitoring and backup power for critical systems. The fact that the fire fighters in Vallejo were afforded even two seconds of warning at these epicentral distances results from the design of the QuakeGuard devices, which focuses on rapid false positive rejection and ground motion estimates. We discuss the performance of the ground motion estimation algorithms, with an emphasis on the accuracy and timeliness of the estimates at close epicentral distances.
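
    The 1.5-2.5 s of warning quoted above is consistent with a simple S-minus-P travel-time estimate. A minimal sketch assuming generic crustal velocities (Vp ≈ 6.0 km/s, Vs ≈ 3.5 km/s) and a hypothetical 10 km focal depth, and ignoring detection and processing latency:

        # S-minus-P lag at a site; all velocities and depths are generic assumptions.
        def s_minus_p_seconds(epicentral_km, depth_km=10.0, vp=6.0, vs=3.5):
            d = (epicentral_km ** 2 + depth_km ** 2) ** 0.5  # hypocentral distance (km)
            return d / vs - d / vp

        print(f"{s_minus_p_seconds(16.0):.1f} s")  # ~2.2 s at 16 km epicentral distance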

  14. Earthquake predictions using seismic velocity ratios

    Science.gov (United States)

    Sherburne, R. W.

    1979-01-01

    Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions in reducing human and economic losses, and the value of long-range earthquake prediction to planning, are obvious. Not as clear are the long-range economic and social impacts of an earthquake prediction on a specific area. The general consensus of opinion among scientists and government officials, however, is that the quest for earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.

  15. Measuring the size of an earthquake

    Science.gov (United States)

    Spence, W.; Sipkin, S.A.; Choy, G.L.

    1989-01-01

    Earthquakes range broadly in size. A rock-burst in an Idaho silver mine may involve the fracture of 1 meter of rock; the 1965 Rat Island earthquake in the Aleutian arc involved a 650-kilometer length of the Earth's crust. Earthquakes can be even smaller and even larger. If an earthquake is felt or causes perceptible surface damage, then its intensity of shaking can be subjectively estimated. But many large earthquakes occur in oceanic areas or at great focal depths and are either simply not felt or their felt pattern does not really indicate their true size.

  16. Earthquakes-Rattling the Earth's Plumbing System

    Science.gov (United States)

    Sneed, Michelle; Galloway, Devin L.; Cunningham, William L.

    2003-01-01

    Hydrogeologic responses to earthquakes have been known for decades, and have occurred both close to, and thousands of miles from earthquake epicenters. Water wells have become turbid, dry or begun flowing, discharge of springs and ground water to streams has increased and new springs have formed, and well and surface-water quality have become degraded as a result of earthquakes. Earthquakes affect our Earth’s intricate plumbing system—whether you live near the notoriously active San Andreas Fault in California, or far from active faults in Florida, an earthquake near or far can affect you and the water resources you depend on.

  17. The impact of soil suction variation on earthquake intensity indices

    Directory of Open Access Journals (Sweden)

    Biglari Mahnoosh

    2016-01-01

    Soil properties can completely change ground motion characteristics as seismic waves travel from the bedrock to the surface because soil, acting as a low-pass filter, may amplify or de-amplify seismic motions at some frequencies along the wave travel path. Recent studies in advanced unsaturated soil mechanics clearly show that the dynamic properties of soils, including the small-strain shear modulus (Gmax), shear modulus reduction (G/Gmax), and damping ratio (D) curves, are affected by changes in the soil suction level. The current study presents nonlinear time-dependent analyses of three different unsaturated soils available in the literature, with different ranges of nonlinear behaviour, that have previously been studied with unsaturated dynamic models. Since earthquake intensity parameters can be used to describe the damage potential of an earthquake, the focus of this paper is to evaluate the impact of suction variation on engineering ground motion parameters, including peak values of strong motion, Vmax/Amax, root-mean-square acceleration, Arias intensity, characteristic intensity, cumulative absolute velocity, acceleration spectrum intensity, effective design acceleration, the A95 parameter, and the predominant period, separately under near-field and far-field seismicity categories.
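
    Two of the intensity measures listed above reduce to simple integrals of the accelerogram: cumulative absolute velocity, CAV = ∫|a(t)| dt, and Arias intensity, Ia = (π/2g) ∫ a(t)² dt. A minimal sketch on a synthetic record (the sampling and amplitude below are assumptions, not the paper's data):

        import numpy as np

        g = 9.81
        dt = 0.01                                  # sample interval (s), assumed
        t = np.arange(0.0, 20.0, dt)
        acc = 0.3 * g * np.exp(-0.2 * t) * np.sin(2 * np.pi * 2.0 * t)  # toy record (m/s^2)

        cav = np.sum(np.abs(acc)) * dt             # cumulative absolute velocity (m/s)
        arias = np.pi / (2 * g) * np.sum(acc ** 2) * dt  # Arias intensity (m/s)
        print(f"CAV = {cav:.2f} m/s, Ia = {arias:.3f} m/s")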

  18. Summary of earthquake experience database

    International Nuclear Information System (INIS)

    1999-01-01

    Strong-motion earthquakes frequently occur throughout the Pacific Basin, where power plants or industrial facilities are included in the affected areas. By studying the performance of these earthquake-affected (or database) facilities, a large inventory of various types of equipment installations can be compiled that have experienced substantial seismic motion. The primary purposes of the seismic experience database are summarized as follows: to determine the most common sources of seismic damage, or adverse effects, on equipment installations typical of industrial facilities; to determine the thresholds of seismic motion corresponding to various types of seismic damage; to determine the general performance of equipment during earthquakes, regardless of the levels of seismic motion; to determine minimum standards in equipment construction and installation, based on past experience, to assure the ability to withstand anticipated seismic loads. To summarize, the primary assumption in compiling an experience database is that the actual seismic hazard to industrial installations is best demonstrated by the performance of similar installations in past earthquakes

  19. Earthquake design for controlled structures

    Directory of Open Access Journals (Sweden)

    Nikos G. Pnevmatikos

    2017-04-01

    An alternative design philosophy is described for structures equipped with control devices, capable of resisting an expected earthquake while remaining in the elastic range. The idea is that a portion of the earthquake loading is undertaken by the control system and the remainder by the structure, which is designed to respond elastically. The earthquake forces assuming elastic behavior (elastic forces) and elastoplastic behavior (design forces) are first calculated according to the codes. The required control forces are calculated as the difference between the elastic and design forces. The maximum force capacity of the control devices is then compared to the required control force. If the capacity of the control devices is larger than the required control force, the control devices are accepted and installed in the structure, and the structure is designed according to the design forces. If the capacity is smaller than the required control force, a scale factor, α, reducing the elastic forces to new design forces is calculated. The structure is redesigned and the devices are installed. The proposed procedure ensures that the structure behaves elastically (without damage) for the expected earthquake at no additional cost, excluding that of buying and installing the control devices.
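
    A minimal sketch of the decision procedure described above. Reading the scale factor α as the fraction of the elastic forces the structure must carry once the devices are at full capacity is our interpretation of the abstract, and all numbers are placeholders:

        # Forces in consistent units (e.g., kN); values are placeholders.
        def design_with_control(elastic_force, design_force, device_capacity):
            required = elastic_force - design_force      # force the control system must carry
            if device_capacity >= required:
                return design_force, 1.0                 # devices accepted; design as planned
            # Devices too weak: scale the elastic forces down to what the
            # structure must resist when the devices carry their full capacity.
            alpha = (elastic_force - device_capacity) / elastic_force
            return alpha * elastic_force, alpha          # new design force, scale factor

        print(design_with_control(elastic_force=1000.0, design_force=400.0,
                                  device_capacity=450.0))  # -> (550.0, 0.55)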

  20. Using Smartphones to Detect Earthquakes

    Science.gov (United States)

    Kong, Q.; Allen, R. M.

    2012-12-01

    We are using the accelerometers in smartphones to record earthquakes. In the future, these smartphones may work as a supplemental network to the current traditional networks for scientific research and real-time applications. Given the potential number of smartphones and the small separation between sensors, this new type of seismic dataset has significant potential, provided that the signal can be separated from the noise. We developed an application for Android phones to record the acceleration in real time. These records can be saved on the local phone or transmitted back to a server in real time. The accelerometers in the phones were evaluated by comparing their performance with a high-quality accelerometer while located on controlled shake tables for a variety of tests. The results show that the accelerometer in the smartphone can reproduce the characteristics of the shaking very well, even when the phone is left freely on the shake table. The nature of these datasets is also quite different from traditional networks due to the fact that smartphones move around with their owners. Therefore, we must distinguish earthquake signals from those of everyday use. In addition to the shake-table tests that accumulated earthquake records, we also recorded different human activities such as running, walking, driving, etc. An artificial neural network based approach was developed to distinguish these different records. It shows a 99.7% success rate in distinguishing earthquakes from the other typical human activities in our database. We are now ready to develop the basic infrastructure for a smartphone seismic network.
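
    The classification step can be illustrated with a toy stand-in: two hand-picked features and a small scikit-learn network. None of this reproduces the authors' actual features or architecture; it only shows the shape of the approach:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)

        def features(win):
            # Crude amplitude and roughness features of a 1-s window (assumed).
            return [win.std(), np.abs(np.diff(win)).mean()]

        quake = [features(rng.normal(0, 0.05, 100) * np.hanning(100)) for _ in range(200)]
        human = [features(rng.normal(0, 0.3, 100)) for _ in range(200)]
        X = np.vstack([quake, human])
        y = np.array([0] * 200 + [1] * 200)   # 0 = earthquake-like, 1 = human activity

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                            random_state=0).fit(X, y)
        print(f"training accuracy: {clf.score(X, y):.2f}")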

  1. Explanation of earthquake response spectra

    OpenAIRE

    Douglas, John

    2017-01-01

    This is a set of five slides explaining how earthquake response spectra are derived from strong-motion records and simple models of structures and their purpose within seismic design and assessment. It dates from about 2002 and I have used it in various introductory lectures on engineering seismology.

  2. Maintenance hemodialysis patients have high cumulative radiation exposure.

    LENUS (Irish Health Repository)

    Kinsella, Sinead M

    2010-10-01

    Hemodialysis is associated with an increased risk of neoplasms which may result, at least in part, from exposure to ionizing radiation associated with frequent radiographic procedures. In order to estimate the average radiation exposure of those on hemodialysis, we conducted a retrospective study of 100 patients in a university-based dialysis unit followed for a median of 3.4 years. The number and type of radiological procedures were obtained from a central radiology database, and the cumulative effective radiation dose was calculated using standardized, procedure-specific radiation levels. The median annual radiation dose was 6.9 millisieverts (mSv) per patient-year. However, 14 patients had an annual cumulative effective radiation dose over 20 mSv, the upper averaged annual limit for occupational exposure. The median total cumulative effective radiation dose per patient over the study period was 21.7 mSv, and 13 patients had a total cumulative effective radiation dose over 75 mSv, a value reported to be associated with a 7% increased risk of cancer-related mortality. Two-thirds of the total cumulative effective radiation dose was due to CT scanning. The average radiation exposure was significantly associated with the cause of end-stage renal disease, history of ischemic heart disease, transplant waitlist status, number of in-patient hospital days over follow-up, and death during the study period. These results highlight the substantial exposure to ionizing radiation in hemodialysis patients.
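
    The dose tally itself is simple arithmetic: counts of each procedure times a standardized effective dose per procedure. A minimal sketch with typical textbook dose values (assumed here, not the study's table):

        # Procedure-specific effective doses in mSv (typical published values; assumed).
        TYPICAL_DOSE_MSV = {"chest_xray": 0.02, "abdominal_ct": 8.0, "fistulogram": 1.5}

        # Hypothetical procedure counts for one patient over follow-up.
        procedures = {"chest_xray": 12, "abdominal_ct": 2, "fistulogram": 3}

        total = sum(TYPICAL_DOSE_MSV[p] * n for p, n in procedures.items())
        print(f"cumulative effective dose: {total:.1f} mSv")  # 20.7 mSv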

  3. Solar eruptions - soil radon - earthquakes

    International Nuclear Information System (INIS)

    Saghatelyan, E.; Petrosyan, L.; Aghbalyan, Yu.; Baburyan, M.; Araratyan, L.

    2004-01-01

    For the first time a new natural phenomenon was established: a contrasting increase in the soil radon level under the influence of solar flares. Such an increase is one of the geochemical indicators of earthquakes. Most researchers consider this a phenomenon of exclusively terrestrial processes. Investigations of the link between earthquakes and solar activity carried out during the last decade in different countries are based on the analysis of the statistical data ΣΕ(t) and W(t). It has been established that the overall seismicity of the Earth and of its separate regions depends on the 11-year cycle of solar activity. The data provided in the paper, based on experimental studies, serve as a first step toward experimental evidence of cause-and-effect solar-terrestrial bonds in the chain solar eruption - lithospheric radon - earthquakes; they require further collection of experimental data. For the first time, through the radon constituent of terrestrial radiation, an objectification has been made of the elementary lattice of the Hartmann network contoured by the biolocation method. It was found that radon concentration variations in the nodes of the Hartmann network determine the dynamics of solar-terrestrial relationships. Of the three types of rapidly running processes conditioned by solar-terrestrial bonds, earthquakes are attributed to rapidly running destructive processes that occur most intensely at the junctures of tectonic massifs, along transform and deep faults. The basic factors provoking earthquakes are both magnetic-structural effects and the long-term (over 5 months) bombardment of the lithosphere surface by high-energy particles of corpuscular solar flows, as confirmed by photometry. As a result of the solar flares that occurred from 29 October to 4 November 2003, a sharply contrasting increase in soil radon was established, which is an earthquake indicator on the territory of Yerevan City. A month and a half later, earthquakes occurred in San Francisco, Iran, Turkey

  4. Time-scale invariant changes in atmospheric radon concentration and crustal strain prior to a large earthquake

    Directory of Open Access Journals (Sweden)

    Y. Kawada

    2007-01-01

    Prior to large earthquakes (e.g. the 1995 Kobe earthquake, Japan), an increase in the atmospheric radon concentration is observed, and the rate of this increase follows a power law of the time-to-earthquake (time-to-failure). This phenomenon corresponds to increased radon migration in the crust and exhalation into the atmosphere. An irreversible thermodynamic model including time-scale invariance clarifies that the increases in the pressure of the advecting radon and in the permeability (hydraulic conductivity) of the crustal rocks are caused by the temporal power-law changes in the crustal strain (or cumulative Benioff strain), which are associated with damage evolution such as microcracking or changing porosity. As a result, the radon flux and the atmospheric radon concentration can show a temporal power-law increase. The concentration of atmospheric radon can therefore be used as a proxy for the seismic precursory processes associated with crustal dynamics.
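
    The power-law time-to-failure behaviour described above can be written as rate(t) = C (t_f − t)^(−α). A minimal synthetic sketch of recovering α by a log-log fit; note that in real data t_f is unknown in advance, which is what makes such precursors hard to use prospectively:

        import numpy as np

        t_f, alpha, C = 100.0, 0.7, 1.0            # assumed "true" values
        t = np.linspace(0.0, 99.0, 500)
        rate = C * (t_f - t) ** (-alpha)           # synthetic precursor rate

        # With t_f known, alpha is the (negated) slope in log-log coordinates.
        slope, _ = np.polyfit(np.log(t_f - t), np.log(rate), 1)
        print(f"recovered alpha = {-slope:.2f}")   # ~0.70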

  5. Cumulative Trauma Among Mayas Living in Southeast Florida.

    Science.gov (United States)

    Millender, Eugenia I; Lowe, John

    2017-06-01

    Mayas, having experienced genocide, exile, and severe poverty, are at high risk for the consequences of cumulative trauma that continually resurfaces through current fear of an uncertain future. Little is known about the mental health and alcohol use status of this population. This correlational study explored t/he relationship of cumulative trauma as it relates to social determinants of health (years in the United States, education, health insurance status, marital status, and employment), psychological health (depression symptoms), and health behaviors (alcohol use) of 102 Guatemalan Mayas living in Southeast Florida. The results of this study indicated that, as specific social determinants of health and cumulative trauma increased, depression symptoms (particularly among women) and the risk for harmful alcohol use (particularly among men) increased. Identifying risk factors at an early stage before serious disease or problems are manifest provides room for early screening leading to early identification, early treatment, and better outcomes.

  6. Session: What do we know about cumulative or population impacts

    Energy Technology Data Exchange (ETDEWEB)

    Kerlinger, Paul; Manville, Al; Kendall, Bill

    2004-09-01

    This session at the Wind Energy and Birds/Bats workshop consisted of a panel discussion followed by a discussion/question and answer period. The panelists were Paul Kerlinger, Curry and Kerlinger, LLC, Al Manville, U.S. Fish and Wildlife Service, and Bill Kendall, US Geological Service. The panel addressed the potential cumulative impacts of wind turbines on bird and bat populations over time. Panel members gave brief presentations that touched on what is currently known, what laws apply, and the usefulness of population modeling. Topics addressed included which sources of modeling should be included in cumulative impacts, comparison of impacts from different modes of energy generation, as well as what research is still needed regarding cumulative impacts of wind energy development on bird and bat populations.

  7. Estimating a population cumulative incidence under calendar time trends

    DEFF Research Database (Denmark)

    Hansen, Stefan N; Overgaard, Morten; Andersen, Per K

    2017-01-01

    BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. When the disease risk is affected by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis. RESULTS: We show how a proportional hazards model may be used to extrapolate disease risk estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry or perhaps just the end-of-follow-up estimates across all strata...
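
    In the standard setting (no competing risks, no calendar-time trend), the age-specific cumulative incidence is 1 − S(age) from a Kaplan-Meier fit of age at onset with right-censoring at the end of follow-up. A minimal sketch with synthetic data using the lifelines library (the Aalen-Johansen estimator mentioned above replaces this when competing risks matter):

        import numpy as np
        from lifelines import KaplanMeierFitter

        rng = np.random.default_rng(2)
        onset_age = rng.weibull(2.0, 1000) * 40.0        # hypothetical age at onset
        censor_age = rng.uniform(10.0, 60.0, 1000)       # end of follow-up per subject
        observed = onset_age <= censor_age
        age = np.minimum(onset_age, censor_age)

        kmf = KaplanMeierFitter().fit(age, event_observed=observed)
        cumulative_incidence = 1.0 - kmf.survival_function_
        print(cumulative_incidence.tail())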

  8. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches

  9. Baltic Sea biodiversity status vs. cumulative human pressures

    DEFF Research Database (Denmark)

    Andersen, Jesper H.; Halpern, Benjamin S.; Korpinen, Samuli

    2015-01-01

    Many studies have tried to explain spatial and temporal variations in biodiversity status of marine areas from a single-issue perspective, such as fishing pressure or coastal pollution, yet most continental seas experience a wide range of human pressures. Cumulative impact assessments have been developed to capture the consequences of multiple stressors for biodiversity, but the ability of these assessments to accurately predict biodiversity status has never been tested or ground-truthed. This relationship has similarly been assumed for the Baltic Sea, especially in areas with impaired status, but has also never been documented. Here we provide a first tentative indication that cumulative human impacts relate to ecosystem condition, i.e. biodiversity status, in the Baltic Sea. Thus, cumulative impact assessments offer a promising tool for informed marine spatial planning, designation...

  10. Cumulative carbon as a policy framework for achieving climate stabilization

    Science.gov (United States)

    Matthews, H. Damon; Solomon, Susan; Pierrehumbert, Raymond

    2012-01-01

    The primary objective of the United Nations Framework Convention on Climate Change is to stabilize greenhouse gas concentrations at a level that will avoid dangerous climate impacts. However, greenhouse gas concentration stabilization is an awkward framework within which to assess dangerous climate change on account of the significant lag between a given concentration level and the eventual equilibrium temperature change. By contrast, recent research has shown that global temperature change can be well described by a given cumulative carbon emissions budget. Here, we propose that cumulative carbon emissions represent an alternative framework that is applicable both as a tool for climate mitigation as well as for the assessment of potential climate impacts. We show first that both atmospheric CO2 concentration at a given year and the associated temperature change are generally associated with a unique cumulative carbon emissions budget that is largely independent of the emissions scenario. The rate of global temperature change can therefore be related to first order to the rate of increase of cumulative carbon emissions. However, transient warming over the next century will also be strongly affected by emissions of shorter lived forcing agents such as aerosols and methane. Non-CO2 emissions therefore contribute to uncertainty in the cumulative carbon budget associated with near-term temperature targets, and may suggest the need for a mitigation approach that considers separately short- and long-lived gas emissions. By contrast, long-term temperature change remains primarily associated with total cumulative carbon emissions owing to the much longer atmospheric residence time of CO2 relative to other major climate forcing agents.
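
    The framing reduces to an almost one-line calculation: expected warming ≈ TCRE × cumulative carbon emissions, where TCRE is the transient climate response to cumulative emissions. A sketch with a round illustrative TCRE (about 1.5 °C per 1000 PgC, within commonly cited ranges; not a value from this paper):

        TCRE = 1.5 / 1000.0        # degC per PgC, assumed illustrative value
        cumulative_PgC = 1000.0    # hypothetical cumulative emissions since preindustrial
        print(f"implied warming: {TCRE * cumulative_PgC:.1f} degC")

        # Inverted, a 2 degC target implies a total budget of about 2 / TCRE PgC.
        print(f"budget for 2 degC: {2.0 / TCRE:.0f} PgC")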

  11. The role of factorial cumulants in reactor neutron noise theory

    International Nuclear Information System (INIS)

    Colombino, A.; Pacilio, N.; Sena, G.

    1979-01-01

    The physical meaning and the combinatorial implications of the factorial cumulant of a state variable, such as the number of neutrons or the number of neutron counts, are specified. Features of the presentation are: (a) the fission process is treated in its entirety without the customary binary emission restriction, (b) the introduction of the factorial cumulants helps in reducing the complexity of the mathematical problems, (c) all the solutions can be obtained analytically. Only the ergodic hypothesis for the neutron population evolution is dealt with. (author)
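
    For reference, the standard definition implied here: the factorial cumulants of a discrete variable N (neutron number or counts) are generated by the logarithm of the probability generating function,

        G(z) = \mathbb{E}\left[ z^{N} \right], \qquad
        \ln G(1+s) = \sum_{r \ge 1} \tilde{\kappa}_{r} \, \frac{s^{r}}{r!},

    so that the first factorial cumulant equals ⟨N⟩ and the second equals ⟨N(N−1)⟩ − ⟨N⟩², which vanishes for a Poisson distribution; this is what makes factorial cumulants convenient for quantifying departures from Poisson statistics in neutron counting.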

  12. Super-Resolution Algorithm in Cumulative Virtual Blanking

    Science.gov (United States)

    Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.

    2008-11-01

    The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base Station hearability is solved using Cumulative Virtual Blanking. A simple simulator is presented using DS-SS signals. The results show that the MUSIC algorithm improves the time delay estimation in both cases, whether or not Cumulative Virtual Blanking was carried out.
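
    A toy frequency-domain MUSIC delay scan, to illustrate the super-resolution idea only; a real UMTS receiver (and the Cumulative Virtual Blanking step) is far more involved, and every constant below is an assumption:

        import numpy as np

        rng = np.random.default_rng(4)
        freqs = np.arange(64) * 15e3                  # 64 bins, 15 kHz spacing (assumed)
        tau_true = 3.2e-6                             # true delay: 3.2 microseconds

        # Snapshots x_k = s_k * a(tau) + noise, with steering vector
        # a(tau) = exp(-j*2*pi*f*tau) across frequency bins.
        a_true = np.exp(-2j * np.pi * freqs * tau_true)
        X = np.array([rng.normal() * a_true +
                      0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64))
                      for _ in range(200)]).T         # shape (bins, snapshots)

        R = X @ X.conj().T / X.shape[1]               # sample covariance
        _, V = np.linalg.eigh(R)                      # eigenvectors, ascending order
        En = V[:, :-1]                                # noise subspace (1 path assumed)

        taus = np.linspace(0.0, 10e-6, 2001)
        A = np.exp(-2j * np.pi * np.outer(freqs, taus))
        pseudo = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
        print(f"estimated delay: {taus[pseudo.argmax()] * 1e6:.2f} us")  # ~3.20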

  13. Development of damage probability matrices based on Greek earthquake damage data

    Science.gov (United States)

    Eleftheriadou, Anastasia K.; Karabinis, Athanasios I.

    2011-03-01

    A comprehensive study is presented for the empirical seismic vulnerability assessment of typical structural types, representative of the building stock of Southern Europe, based on a large set of damage statistics. The observational database was obtained from post-earthquake surveys carried out in the area struck by the September 7, 1999 Athens earthquake. After analysis of the collected observational data, a unified damage database was created which comprises 180,945 damaged buildings from the near-field area of the earthquake. The damaged buildings are classified into specific structural types, according to the materials, seismic codes and construction techniques in Southern Europe. The seismic demand is described in terms of both the regional macroseismic intensity and the ratio α_g/a_o, where α_g is the maximum peak ground acceleration (PGA) of the earthquake event and a_o is the unique PGA value that characterizes each municipality shown on the Greek hazard map. The relative and cumulative frequencies of the different damage states for each structural type and each intensity level are computed in terms of damage ratio. Damage probability matrices (DPMs) and vulnerability curves are obtained for specific structural types. A comparative analysis is carried out between the produced and the existing vulnerability models.
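
    The core bookkeeping behind a damage probability matrix is just normalized damage-state counts per structural type and intensity level. A minimal sketch with invented counts:

        import numpy as np

        # Invented counts of buildings in damage states DS0..DS4 for one
        # structural type at one intensity level (not the Athens data).
        counts = np.array([812, 304, 121, 38, 9])

        dpm_row = counts / counts.sum()              # P(damage state | type, intensity)
        exceedance = np.cumsum(dpm_row[::-1])[::-1]  # P(damage >= DS_i)
        print(np.round(dpm_row, 3))
        print(np.round(exceedance, 3))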

  14. Modeling earthquake sequences along the Manila subduction zone: Effects of three-dimensional fault geometry

    Science.gov (United States)

    Yu, Hongyu; Liu, Yajing; Yang, Hongfeng; Ning, Jieyuan

    2018-05-01

    To assess the potential of catastrophic megathrust earthquakes (MW > 8) along the Manila Trench, the eastern boundary of the South China Sea, we incorporate a 3D non-planar fault geometry in the framework of rate-state friction to simulate earthquake rupture sequences along the fault segment between 15°N-19°N of northern Luzon. Our simulation results demonstrate that the first-order fault geometry heterogeneity, the transitional-segment (possibly related to the subducting Scarborough seamount chain) connecting the steeper south segment and the flatter north segment, controls earthquake rupture behaviors. The strong along-strike curvature at the transitional-segment typically leads to partial ruptures of MW 8.3 and MW 7.8 along the southern and northern segments respectively. The entire fault occasionally ruptures in MW 8.8 events when the cumulative stress in the transitional-segment is sufficiently high to overcome the geometrical inhibition. Fault shear stress evolution, represented by the S-ratio, is clearly modulated by the width of seismogenic zone (W). At a constant plate convergence rate, a larger W indicates on average lower interseismic stress loading rate and longer rupture recurrence period, and could slow down or sometimes stop ruptures that initiated from a narrower portion. Moreover, the modeled interseismic slip rate before whole-fault rupture events is comparable with the coupling state that was inferred from the interplate seismicity distribution, suggesting the Manila trench could potentially rupture in a M8+ earthquake.

  15. Contribution of Satellite Gravimetry to Understanding Seismic Source Processes of the 2011 Tohoku-Oki Earthquake

    Science.gov (United States)

    Han, Shin-Chan; Sauber, Jeanne; Riva, Riccardo

    2011-01-01

    The 2011 great Tohoku-Oki earthquake, apart from shaking the ground, perturbed the motions of satellites orbiting some hundreds of km above the ground, such as GRACE, due to the coseismic change in the gravity field. Significant changes in inter-satellite distance were observed after the earthquake. These unconventional satellite measurements were inverted to examine the earthquake source processes from a radically different perspective that complements the analyses of seismic and geodetic ground recordings. We found the average slip located up-dip of the hypocenter but within the lower crust, as characterized by a limited range of bulk and shear moduli. The GRACE data constrained a group of earthquake source parameters that yield increasing dip (7°-16°, ±2°) and, simultaneously, decreasing moment magnitude (9.17-9.02, ±0.04) with increasing source depth (15-24 km). The GRACE solution includes the cumulative moment released over a month and demonstrates a unique view of the long-wavelength gravimetric response to all mass redistribution processes associated with the dynamic rupture and short-term postseismic mechanisms, improving our understanding of the physics of megathrusts.

  16. Revisiting Slow Slip Events Occurrence in Boso Peninsula, Japan, Combining GPS Data and Repeating Earthquakes Analysis

    Science.gov (United States)

    Gardonio, B.; Marsan, D.; Socquet, A.; Bouchon, M.; Jara, J.; Sun, Q.; Cotte, N.; Campillo, M.

    2018-02-01

    Slow slip events (SSEs) regularly occur near the Boso Peninsula, central Japan. Their recurrence time decreased from 6.4 to 2.2 years between 1996 and 2014. It is important to better constrain the slip history of this area, especially as models show that the recurrence intervals could become shorter prior to the occurrence of a large interplate earthquake nearby. We analyze the seismic waveforms of more than 2,900 events (M≥1.0) taking place in the Boso Peninsula, Japan, from 1 April 2004 to 4 November 2015, calculating the correlation and the coherence between each pair of events in order to define groups of repeating earthquakes. The cumulative number of repeating earthquakes suggests the existence of two slow slip events that have escaped detection so far. Small transient displacements observed in the time series of nearby GPS stations confirm these results. The detection scheme coupling repeating earthquakes and GPS analysis allows the detection of small SSEs that were not seen before by classical methods. This work brings new information on the diversity of SSEs and demonstrates that the SSEs in the Boso area have a more complex history than previously considered.
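
    The repeater-grouping criterion can be illustrated with normalized waveform cross-correlation: pairs whose correlation coefficient exceeds a threshold are assigned to the same group. A minimal sketch (the 0.95 threshold and the synthetic waveforms are assumptions, not the authors' settings):

        import numpy as np

        def max_norm_xcorr(a, b):
            # Maximum normalized cross-correlation over all lags.
            a = (a - a.mean()) / (a.std() * len(a))
            b = (b - b.mean()) / b.std()
            return np.abs(np.correlate(a, b, mode="full")).max()

        rng = np.random.default_rng(3)
        template = rng.normal(size=500)                # reference event waveform
        ev1 = template + 0.05 * rng.normal(size=500)   # near-identical repeater
        ev2 = rng.normal(size=500)                     # unrelated event

        for name, ev in [("repeater", ev1), ("unrelated", ev2)]:
            cc = max_norm_xcorr(template, ev)
            verdict = "same group" if cc > 0.95 else "distinct"
            print(f"{name}: CC = {cc:.2f} -> {verdict}")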

  17. Napa earthquake: An earthquake in a highly connected world

    Science.gov (United States)

    Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.

    2014-12-01

    The Napa earthquake recently occurred close to Silicon Valley. This makes it a good candidate to study what social networks, wearable objects and website traffic analysis (flashsourcing) can tell us about the way eyewitnesses react to ground shaking. In the first part, we compare the ratio of people publishing tweets and the ratio of people visiting the EMSC (European Mediterranean Seismological Centre) real-time information website in the first minutes following the earthquake with the results published by Jawbone, which show that the proportion of people waking up depends (naturally) on the epicentral distance. The key question to evaluate is whether the proportions of inhabitants tweeting or visiting the EMSC website are similar to the proportion of people waking up as shown by the Jawbone data. If so, this supports the premise that all methods provide a reliable image of the relative ratio of people waking up. The second part of the study focuses on the reaction time for both Twitter and EMSC website access. We show, similarly to what was demonstrated for the Mineral, Virginia, earthquake (Bossu et al., 2014), that hit times on the EMSC website follow the propagation of the P waves and that 2 minutes of website traffic is sufficient to determine the epicentral location of an earthquake on the other side of the Atlantic. We also compare these with the publication times of messages on Twitter. Finally, we check whether the number of tweets and the number of visitors relative to the number of inhabitants are correlated with the local level of shaking. Together these results will tell us whether the reaction of eyewitnesses to ground shaking as observed through Twitter and the EMSC website analysis is tool-specific (i.e. specific to Twitter or the EMSC website) or whether it reflects people's actual reactions.

  18. Countermeasures to earthquakes in nuclear plants

    International Nuclear Information System (INIS)

    Sato, Kazuhide

    1979-01-01

    The contribution of atomic energy to mankind is immeasurable, but the danger of radioactivity is a matter apart. Therefore, in the design of nuclear power plants safety has been regarded as paramount, and in Japan, where earthquakes occur frequently, countermeasures to earthquakes have naturally been incorporated in the examination of safety. The radioactive substances handled in nuclear power stations and spent fuel reprocessing plants are briefly explained. The occurrence of earthquakes cannot be predicted effectively, and the disaster due to earthquakes is apt to be remarkably large. In nuclear plants, the prevention of damage to the facilities and the maintenance of their functions are required at the time of earthquakes. Regarding the siting of nuclear plants, the history of earthquakes, the possible magnitude of earthquakes, the properties of the ground and the position of the plants should be examined. After the place of installation has been decided, the earthquake used for design is selected by evaluating active faults and determining the standard earthquakes. As the fundamentals of aseismatic design, the classification according to importance, the design earthquakes corresponding to the classes of importance, the combination of loads and the allowable stress are explained. (Kako, I.)

  19. One Basin, One Stress Regime, One Orientation of Seismogenic Basement Faults, Variable Spatio-Temporal Slip Histories: Lessons from Fort Worth Basin Induced Earthquake Sequences

    Science.gov (United States)

    DeShon, H. R.; Brudzinski, M.; Frohlich, C.; Hayward, C.; Jeong, S.; Hornbach, M. J.; Magnani, M. B.; Ogwari, P.; Quinones, L.; Scales, M. M.; Stump, B. W.; Sufri, O.; Walter, J. I.

    2017-12-01

    Since October 2008, the Fort Worth basin in north Texas has experienced over 30 magnitude (M) 3.0+ earthquakes, including one M4.0. Five named earthquake sequences have been recorded by local seismic networks: DFW Airport, Cleburne-Johnson County, Azle, Irving-Dallas, and Venus-Johnson County. Earthquakes have occurred on northeast (NE)-southwest (SW) trending Precambrian basement faults and within the overlying Ellenburger limestone unit used for wastewater disposal. Focal mechanisms indicate primarily normal faulting, and stress inversions indicate maximum regional horizontal stress strikes 20-30° NE. The seismogenic sections of the faults in either the basement or within the Ellenburger appear optimally oriented for failure within the modern stress regime. Stress drop estimates range from 10 to 75 bars, with little variability between and within the named sequences, and the values are consistent with intraplate earthquake stress drops in natural tectonic settings. However, the spatio-temporal history of each sequence relative to wastewater injection data varies. The May 2015 M4.0 Venus earthquake, for example, is only the largest of what is nearly 10 years of earthquake activity on a single fault structure. Here, maximum earthquake size has increased with time and exhibits a log-linear relationship to cumulative injected volume from 5 nearby wells. At the DFW airport, where the causative well was shut-in within a few months of the initial earthquakes and soon after the well began operation, we document migration away from the injector on the same fault for nearly 6 km sporadically over 5 years. The Irving-Dallas and Azle sequences, like DFW airport, appear to have started rather abruptly with just a few small magnitude earthquakes in the weeks or months preceding the significant set of magnitude 3.5+ earthquakes associated with each sequence. There are no nearby (<10 km) injection operations to the Irving-Dallas sequence and the Azle linked wells operated for
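
    The log-linear relationship noted for the Venus sequence has the form M_max ≈ p + q·log10(V_cum). A minimal fitting sketch on invented (volume, magnitude) pairs, not the actual well and catalog data:

        import numpy as np

        v_cum = np.array([0.5e6, 1e6, 2e6, 4e6, 8e6])   # cumulative injected m^3 (assumed)
        m_max = np.array([2.6, 2.9, 3.2, 3.6, 4.0])     # running max magnitude (assumed)

        q, p = np.polyfit(np.log10(v_cum), m_max, 1)
        print(f"M_max = {p:.2f} + {q:.2f} * log10(V_cum)")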

  20. Update earthquake risk assessment in Cairo, Egypt

    Science.gov (United States)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2017-07-01

    The Cairo earthquake (12 October 1992; mb = 5.8) remains, even after 25 years, one of the most painful events etched into Egyptians' memory. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead, 10,000 injured, and 3,000 families who lost their homes). Nowadays, the most frequent and important question that arises is "what if this earthquake were repeated today?" In this study, we simulate the ground motion shaking of an earthquake of the same size (12 October 1992) and the consequent socio-economic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Generally, the earthquake risk assessment clearly indicates that the losses and damage may be two or three times greater in Cairo compared to the 1992 earthquake. The earthquake risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) lie at high seismic risk, and three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at a low seismic risk level. Moreover, the building damage estimates show that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk. Deteriorating buildings and infrastructure make the city particularly vulnerable to earthquake risks. For instance, more than 90% of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb). Moreover, about 75% of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management. Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  1. Evaluation of earthquake vibration on aseismic design of nuclear power plant judging from recent earthquakes

    International Nuclear Information System (INIS)

    Dan, Kazuo

    2006-01-01

    The Regulatory Guide for Aseismic Design of Nuclear Reactor Facilities was revised on 19 September 2006. Six factors for the evaluation of earthquake vibration are considered on the basis of recent earthquakes: 1) evaluation of earthquake vibration by methods using fault models, 2) investigation and approval of active faults, 3) direct-hit earthquakes, 4) assumption of a short active fault as the hypocentral fault, 5) locality of the earthquake and the earthquake vibration, and 6) residual risk. A guiding principle of the revision required a new evaluation method of earthquake vibration using fault models and an evaluation of the probability of earthquake vibration. Residual risk means that facilities and people may be endangered if an earthquake stronger than the design basis occurs; accordingly, this scatter has to be considered in the evaluation of earthquake vibration. The earthquake belt of the 1995 Hyogo-Nanbu earthquake and its strong vibration pulse, the relation between the length of the surface earthquake fault and the hypocentral fault, and the distribution of seismic intensity of the 1993 off-Kushiro earthquake are shown. (S.Y.)

  2. A smartphone application for earthquakes that matter!

    Science.gov (United States)

    Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert

    2014-05-01

    Smartphone applications have swiftly become one of the most popular tools for the rapid reception of earthquake information by the public, some of them having been downloaded more than 1 million times! The advantages are obvious: wherever their location, users can be automatically informed when an earthquake has struck. Just by setting a magnitude threshold and an area of interest, there is no longer any need to browse the internet, as the information reaches you automatically and instantaneously! One question remains: are the provided earthquake notifications always relevant for the public? What are the earthquakes that really matter to laypeople? One clue may be derived from newspaper reports showing that, a while after damaging earthquakes, many eyewitnesses scrap the application they installed just after the mainshock. Why? Because either the magnitude threshold is set too high and many felt earthquakes are missed, or it is set too low and the majority of the notifications are related to unfelt earthquakes, thereby only increasing anxiety among the population at each new update. Felt and damaging earthquakes are the ones that matter the most for the public (and authorities). They are the ones of societal importance, even when of small magnitude. A smartphone application developed by EMSC (Euro-Med Seismological Centre) with the financial support of the Fondation MAIF aims at providing suitable notifications for earthquakes by collating different information threads covering tsunamigenic, potentially damaging and felt earthquakes. Tsunamigenic earthquakes are considered here to be those that are the subject of alert or information messages from the PTWC (Pacific Tsunami Warning Centre), while potentially damaging earthquakes are identified through an automated system called EQIA (Earthquake Qualitative Impact Assessment) developed and operated at EMSC. This rapidly assesses earthquake impact by comparing the population exposed to each expected

  3. Cumulative effects of wind turbines. Volume 3: Report on results of consultations on cumulative effects of wind turbines on birds

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    This report gives details of the consultations held in developing the consensus approach taken in assessing the cumulative effects of wind turbines. Contributions on bird issues, and views of stakeholders, the Countryside Council for Wales, electric utilities, Scottish Natural Heritage, and the National Wind Power Association are reported. The scoping of key species groups, where cumulative effects might be expected, consideration of other developments, the significance of any adverse effects, mitigation, regional capacity assessments, and predictive models are discussed. Topics considered at two stakeholder workshops are outlined in the appendices.

  4. Cumulative impacts: current research and current opinions at PSW

    Science.gov (United States)

    R. M. Rice

    1987-01-01

    Consideration of cumulative watershed effects (CWEs) has both political and physical aspects. Regardless of the practical usefulness of present methods of dealing with CWEs, the legal requirement to address them remains. Management of federal land is regulated by the National Environmental Policy Act (NEPA) and the Federal Water Pollution Control Act of 1972. The...

  5. Cumulative Risks of Foster Care Placement for Danish Children

    DEFF Research Database (Denmark)

    Fallesen, Peter; Emanuel, Natalia; Wildeman, Christopher

    2014-01-01

    children. Our results also show some variations by parental ethnicity and sex, but these differences are small. Indeed, they appear quite muted relative to racial/ethnic differences in these risks in the United States. Last, though cumulative risks are similar between Danish and American children...

  6. Disintegration of a profiled shock wave at the cumulation point

    International Nuclear Information System (INIS)

    Kaliski, S.

    1978-01-01

    The disintegration at the cumulation point of a shock wave generated with the aid of a profiled pressure is analyzed. The quantitative relations are analyzed for the disintegration waves for typical compression parameters in systems of thermonuclear microfusion. Quantitative conclusions are drawn for the application of simplifying approximate calculations in problems of microfusion. (author)

  7. Cumulative Prospect Theory, Option Returns, and the Variance Premium

    NARCIS (Netherlands)

    Baele, Lieven; Driessen, Joost; Ebert, Sebastian; Londono Yarce, J.M.; Spalt, Oliver

    The variance premium and the pricing of out-of-the-money (OTM) equity index options are major challenges to standard asset pricing models. We develop a tractable equilibrium model with Cumulative Prospect Theory (CPT) preferences that can overcome both challenges. The key insight is that the

  8. Steps and Pips in the History of the Cumulative Recorder

    Science.gov (United States)

    Lattal, Kennon A.

    2004-01-01

    From its inception in the 1930s until very recent times, the cumulative recorder was the most widely used measurement instrument in the experimental analysis of behavior. It was an essential instrument in the discovery and analysis of schedules of reinforcement, providing the first real-time analysis of operant response rates and patterns. This…

  9. The effects of cumulative practice on mathematics problem solving.

    Science.gov (United States)

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving.

  10. Anti-irritants II: Efficacy against cumulative irritation

    DEFF Research Database (Denmark)

    Andersen, Flemming; Hedegaard, Kathryn; Petersen, Thomas Kongstad

    2006-01-01

    window of opportunity in which to demonstrate efficacy. Therefore, the effect of AI was studied in a cumulative irritation model by inducing irritant dermatitis with 10 min daily exposures for 5+4 days (no irritation on weekend) to 1% sodium lauryl sulfate on the right and 20% nonanoic acid on the left...

  11. Cumulative Beam Breakup with Time-Dependent Parameters

    CERN Document Server

    Delayen, J R

    2004-01-01

    A general analytical formalism developed recently for cumulative beam breakup (BBU) in linear accelerators with arbitrary beam current profile and misalignments [1] is extended to include time-dependent parameters such as energy chirp or rf focusing in order to reduce BBU-induced instabilities and emittance growth. Analytical results are presented and applied to practical accelerator configurations.

  12. On the mechanism of hadron cumulative production on nucleus

    International Nuclear Information System (INIS)

    Efremov, A.V.

    1976-01-01

    A mechanism of cumulative production of hadrons on nuclei is proposed which is similar to that of high-perpendicular-momentum hadron production. The cross section obtained describes the main qualitative features of such processes, e.g., the initial energy dependence, the atomic number behaviour, and the dependence on the rest mass of the produced particle and its production angle

  13. Hyperscaling breakdown and Ising spin glasses: The Binder cumulant

    Science.gov (United States)

    Lundow, P. H.; Campbell, I. A.

    2018-02-01

    Among the Renormalization Group Theory scaling rules relating critical exponents, there are hyperscaling rules involving the dimension of the system. It is well known that in Ising models hyperscaling breaks down above the upper critical dimension. It was shown by Schwartz (1991) that the standard Josephson hyperscaling rule can also break down in Ising systems with quenched random interactions. A related Renormalization Group Theory hyperscaling rule links the critical exponents for the normalized Binder cumulant and the correlation length in the thermodynamic limit. An appropriate scaling approach for analyzing measurements from criticality to infinite temperature is first outlined. Numerical data on the scaling of the normalized correlation length and the normalized Binder cumulant are shown for the canonical Ising ferromagnet model in dimension three where hyperscaling holds, for the Ising ferromagnet in dimension five (so above the upper critical dimension) where hyperscaling breaks down, and then for Ising spin glass models in dimension three where the quenched interactions are random. For the Ising spin glasses there is a breakdown of the normalized Binder cumulant hyperscaling relation in the thermodynamic limit regime, with a return to size independent Binder cumulant values in the finite-size scaling regime around the critical region.
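
    As a quick reference for the quantity studied above, here is a minimal sketch of the standard fourth-order Binder cumulant, U = 1 - <m^4> / (3 <m^2>^2), computed from per-configuration magnetization samples. The paper's normalization conventions may differ, and the sample data below are purely illustrative.

        import numpy as np

        def binder_cumulant(m):
            """Fourth-order Binder cumulant U = 1 - <m^4> / (3 <m^2>^2)."""
            m = np.asarray(m, dtype=float)
            m2 = np.mean(m**2)
            m4 = np.mean(m**4)
            return 1.0 - m4 / (3.0 * m2**2)

        # Illustrative per-configuration magnetizations from two phases
        rng = np.random.default_rng(0)
        ordered = rng.normal(0.8, 0.05, size=10_000)    # ordered phase: U -> 2/3
        disordered = rng.normal(0.0, 0.3, size=10_000)  # disordered phase: U -> 0
        print(binder_cumulant(ordered), binder_cumulant(disordered))

    For a symmetric Gaussian distribution (disordered phase) U tends to 0, while a sharply peaked ordered-phase distribution drives U toward 2/3, which is why the cumulant is a convenient crossing diagnostic at criticality.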

  14. How to manage the cumulative flood safety of catchment dams ...

    African Journals Online (AJOL)

    Dam safety is a significant issue being taken seriously worldwide. However, in Australia, although much attention is being devoted to the medium- to large-scale dams, minimal attention is being paid to the serious potential problems associated with smaller dams, particularly the potential cumulative safety threats they pose ...

  15. Cumulative Beam Breakup due to Resistive-Wall Wake

    International Nuclear Information System (INIS)

    Wang, J.-M.

    2004-01-01

    The cumulative beam breakup problem excited by the resistive-wall wake is formulated. An approximate analytic method of finding the asymptotic behavior for the transverse bunch displacement is developed and solved. Comparison between the asymptotic analytical expression and the direct numerical solution is presented. Good agreement is found. The criterion of using the asymptotic analytical expression is discussed

  16. Analysis of sensory ratings data with cumulative link models

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Brockhoff, Per B.

    2013-01-01

    Examples of categorical rating scales include discrete preference, liking and hedonic rating scales. Data obtained on these scales are often analyzed with normal linear regression methods or with omnibus Pearson chi2 tests. In this paper we propose to use cumulative link models that allow for reg...
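
    For readers unfamiliar with this model class, a minimal sketch of cumulative-logit (proportional odds) category probabilities, P(Y <= j | x) = logistic(theta_j - x.beta); the threshold and coefficient values below are invented for illustration and are not from the paper.

        import numpy as np

        def cumulative_logit_probs(theta, beta, x):
            """Category probabilities under a cumulative logit model.

            P(Y <= j | x) = logistic(theta_j - x @ beta), theta strictly increasing.
            """
            eta = np.asarray(theta, float) - np.dot(x, beta)
            cdf = 1.0 / (1.0 + np.exp(-eta))           # P(Y <= j), j = 1..J-1
            cdf = np.concatenate(([0.0], cdf, [1.0]))  # P(Y <= 0) = 0, P(Y <= J) = 1
            return np.diff(cdf)                        # P(Y = j) for each category

        # Illustrative 5-point hedonic scale with two covariates
        theta = np.array([-2.0, -0.5, 0.5, 2.0])  # J - 1 = 4 thresholds
        beta = np.array([0.8, -0.3])
        print(cumulative_logit_probs(theta, beta, x=np.array([1.0, 2.0])))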

  17. Tests of Cumulative Prospect Theory with graphical displays of probability

    Directory of Open Access Journals (Sweden)

    Michael H. Birnbaum

    2008-10-01

    Full Text Available Recent research reported evidence that contradicts cumulative prospect theory and the priority heuristic. The same body of research also violates two editing principles of original prospect theory: cancellation (the principle that people delete any attribute that is the same in both alternatives before deciding between them) and combination (the principle that people combine branches leading to the same consequence by adding their probabilities). This study was designed to replicate previous results and to test whether the violations of cumulative prospect theory might be eliminated or reduced by using formats for presentation of risky gambles in which cancellation and combination could be facilitated visually. Contrary to the idea that decision behavior contradicting cumulative prospect theory and the priority heuristic would be altered by use of these formats, however, data with two new graphical formats as well as fresh replication data continued to show the patterns of evidence that violate cumulative prospect theory, the priority heuristic, and the editing principles of combination and cancellation. Systematic violations of restricted branch independence also contradicted predictions of "stripped" prospect theory (subjectively weighted additive utility without the editing rules).

  18. Implications of applying cumulative risk assessment to the workplace.

    Science.gov (United States)

    Fox, Mary A; Spicer, Kristen; Chosewood, L Casey; Susi, Pam; Johns, Douglas O; Dotson, G Scott

    2018-06-01

    Multiple changes are influencing work, workplaces and workers in the US including shifts in the main types of work and the rise of the 'gig' economy. Work and workplace changes have coincided with a decline in unions and associated advocacy for improved safety and health conditions. Risk assessment has been the primary method to inform occupational and environmental health policy and management for many types of hazards. Although often focused on one hazard at a time, risk assessment frameworks and methods have advanced toward cumulative risk assessment recognizing that exposure to a single chemical or non-chemical stressor rarely occurs in isolation. We explore how applying cumulative risk approaches may change the roles of workers and employers as they pursue improved health and safety and elucidate some of the challenges and opportunities that might arise. Application of cumulative risk assessment should result in better understanding of complex exposures and health risks with the potential to inform more effective controls and improved safety and health risk management overall. Roles and responsibilities of both employers and workers are anticipated to change with potential for a greater burden of responsibility on workers to address risk factors both inside and outside the workplace that affect health at work. A range of policies, guidance and training have helped develop cumulative risk assessment for the environmental health field and similar approaches are available to foster the practice in occupational safety and health.

  19. Hierarchical Bayesian parameter estimation for cumulative prospect theory

    NARCIS (Netherlands)

    Nilsson, H.; Rieskamp, J.; Wagenmakers, E.-J.

    2011-01-01

    Cumulative prospect theory (CPT Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and
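
    A hedged sketch of how a CPT value is computed for a finite gamble, using the widely cited Tversky-Kahneman (1992) functional forms; the hierarchical Bayesian estimation of the paper is not reproduced here, and the parameter defaults below are the conventional TK-1992 point estimates, not the paper's results.

        import numpy as np

        def w(p, gamma):
            """Tversky-Kahneman (1992) probability weighting function."""
            return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

        def cpt_value(outcomes, probs, alpha=0.88, beta=0.88, lam=2.25,
                      gamma_gain=0.61, gamma_loss=0.69):
            """CPT value of a finite gamble with rank-dependent decision weights."""
            outcomes = np.asarray(outcomes, dtype=float)
            probs = np.asarray(probs, dtype=float)
            total = 0.0
            for sign in (+1, -1):  # treat gains and losses separately
                mask = (sign * outcomes) > 0
                if not mask.any():
                    continue
                x, p = outcomes[mask], probs[mask]
                order = np.argsort(-sign * x)  # most extreme outcome first
                x, p = x[order], p[order]
                g = gamma_gain if sign == +1 else gamma_loss
                cum = np.cumsum(p)  # prob. of an outcome at least this extreme
                pi = w(cum, g) - w(np.concatenate(([0.0], cum[:-1])), g)
                v = x**alpha if sign == +1 else -lam * (-x)**beta
                total += np.sum(pi * v)
            return total

        # A 50/50 gamble to win or lose 100: loss aversion makes the value negative
        print(cpt_value([100.0, -100.0], [0.5, 0.5]))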

  20. An Axiomatization of Cumulative Prospect Theory for Decision under Risk

    NARCIS (Netherlands)

    Wakker, P.P.; Chateauneuf, A.

    1999-01-01

    Cumulative prospect theory was introduced by Tversky and Kahneman so as to combine the empirical realism of their original prospect theory with the theoretical advantages of Quiggin's rank-dependent utility. Preference axiomatizations were provided in several papers. All those axiomatizations,

  1. Cumulative assessment: does it improve students’ knowledge acquisition and retention?

    NARCIS (Netherlands)

    Cecilio Fernandes, Dario; Nagtegaal, Manouk; Noordzij, Gera; Tio, Rene

    2017-01-01

    Introduction Assessment for learning means changing students’ behaviour regarding their learning. Cumulative assessment has been shown to increase students’ self-study time and spread their study time throughout a course. However, there was no difference regarding students’ knowledge at the end of

  2. Renormalization group theory of earthquakes

    Directory of Open Access Journals (Sweden)

    H. Saleur

    1996-01-01

    Full Text Available We study theoretically the physical origin of the proposed discrete scale invariance of earthquake processes, at the origin of the universal log-periodic corrections to scaling recently discovered in regional seismic activity (Sornette and Sammis, 1995). The discrete scaling symmetries which may be present at smaller scales are shown to be robust on a global scale with respect to disorder. Furthermore, a single complex exponent is sufficient in practice to capture the essential properties of the leading correction to scaling, whose real part may be renormalized by disorder and thus be specific to the system. We then propose a new mechanism for discrete scale invariance, based on the interplay between dynamics and disorder. The existence of non-linear corrections to the renormalization group flow implies that an earthquake is not an isolated 'critical point', but is accompanied by an embedded set of 'critical points', its foreshocks and any subsequent shocks for which it may be a foreshock.

  3. The 2016 Kumamoto earthquake sequence.

    Science.gov (United States)

    Kato, Aitaro; Nakamura, Kouji; Hiyama, Yohei

    2016-01-01

    Beginning in April 2016, a series of shallow, moderate to large earthquakes with associated strong aftershocks struck the Kumamoto area of Kyushu, SW Japan. An Mj 7.3 mainshock occurred on 16 April 2016, close to the epicenter of an Mj 6.5 foreshock that occurred about 28 hours earlier. The intense seismicity released the accumulated elastic energy by right-lateral strike slip, mainly along two known, active faults. The mainshock rupture propagated along multiple fault segments with different geometries. The faulting style is reasonably consistent with regional deformation observed on geologic timescales and with the stress field estimated from seismic observations. One striking feature of this sequence is intense seismic activity, including a dynamically triggered earthquake in the Oita region. Following the mainshock rupture, postseismic deformation has been observed, as well as expansion of the seismicity front toward the southwest and northwest.

  4. Earthquake lights and rupture processes

    Directory of Open Access Journals (Sweden)

    T. V. Losseva

    2005-01-01

    Full Text Available A physical model of earthquake lights is proposed. It is suggested that magnetic diffusion from the source region of the electric and magnetic fields is the dominant process, explaining the rather high localization of the light flashes. A 3D numerical code has been developed that takes into account an arbitrary distribution of currents caused by ground motion, the conductivity in the ground and at its surface, and the existence of sea water above the epicenter or (and) near the ruptured segments of the fault. Simulations for the 1995 Kobe earthquake were conducted taking into account the existence of sea water with realistic geometry of the shores. The results do not contradict the eyewitness reports and the scarce measurements of the electric and magnetic fields at large distances from the epicenter.

  5. The 2016 Kumamoto earthquake sequence

    Science.gov (United States)

    KATO, Aitaro; NAKAMURA, Kouji; HIYAMA, Yohei

    2016-01-01

    Beginning in April 2016, a series of shallow, moderate to large earthquakes with associated strong aftershocks struck the Kumamoto area of Kyushu, SW Japan. An Mj 7.3 mainshock occurred on 16 April 2016, close to the epicenter of an Mj 6.5 foreshock that occurred about 28 hours earlier. The intense seismicity released the accumulated elastic energy by right-lateral strike slip, mainly along two known, active faults. The mainshock rupture propagated along multiple fault segments with different geometries. The faulting style is reasonably consistent with regional deformation observed on geologic timescales and with the stress field estimated from seismic observations. One striking feature of this sequence is intense seismic activity, including a dynamically triggered earthquake in the Oita region. Following the mainshock rupture, postseismic deformation has been observed, as well as expansion of the seismicity front toward the southwest and northwest. PMID:27725474

  6. Dim prospects for earthquake prediction

    Science.gov (United States)

    Geller, Robert J.

    I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].

  7. On the plant operators performance during earthquake

    International Nuclear Information System (INIS)

    Kitada, Y.; Yoshimura, S.; Abe, M.; Niwa, H.; Yoneda, T.; Matsunaga, M.; Suzuki, T.

    1994-01-01

    There is little data on which to judge the performance of plant operators during and after strong earthquakes. In order to obtain such data and enhance the reliability of plant operation, a Japanese utility and a power plant manufacturer carried out a vibration test using a shaking table. The purpose of the test was to investigate operator performance, i.e., the quickness and correctness of switch handling and panel meter read-out. The movement of chairs during an earthquake was also of interest, because if the chairs moved significantly or turned over during a strong earthquake, some arresting mechanism would be required for them. Although there were differences between the simulated earthquake motions used and actual earthquakes, mainly due to the specifications of the shaking table, the earthquake motions had almost no influence on the operators' capability (performance) in operating the simulated console and the personal computers

  8. Earthquake evaluation of a substation network

    International Nuclear Information System (INIS)

    Matsuda, E.N.; Savage, W.U.; Williams, K.K.; Laguens, G.C.

    1991-01-01

    The impact of the occurrence of a large, damaging earthquake on a regional electric power system is a function of the geographical distribution of strong shaking, the vulnerability of various types of electric equipment located within the affected region, and operational resources available to maintain or restore electric system functionality. Experience from numerous worldwide earthquake occurrences has shown that seismic damage to high-voltage substation equipment is typically the reason for post-earthquake loss of electric service. In this paper, the authors develop and apply a methodology to analyze earthquake impacts on Pacific Gas and Electric Company's (PG&E's) high-voltage electric substation network in central and northern California. The authors' objectives are to identify and prioritize ways to reduce the potential impact of future earthquakes on our electric system, refine PG&E's earthquake preparedness and response plans to be more realistic, and optimize seismic criteria for future equipment purchases for the electric system

  9. Earthquake forewarning in the Cascadia region

    Science.gov (United States)

    Gomberg, Joan S.; Atwater, Brian F.; Beeler, Nicholas M.; Bodin, Paul; Davis, Earl; Frankel, Arthur; Hayes, Gavin P.; McConnell, Laura; Melbourne, Tim; Oppenheimer, David H.; Parrish, John G.; Roeloffs, Evelyn A.; Rogers, Gary D.; Sherrod, Brian; Vidale, John; Walsh, Timothy J.; Weaver, Craig S.; Whitmore, Paul M.

    2015-08-10

    This report, prepared for the National Earthquake Prediction Evaluation Council (NEPEC), is intended as a step toward improving communications about earthquake hazards between information providers and users who coordinate emergency-response activities in the Cascadia region of the Pacific Northwest. NEPEC charged a subcommittee of scientists with writing this report about forewarnings of increased probabilities of a damaging earthquake. We begin by clarifying some terminology; a "prediction" refers to a deterministic statement that a particular future earthquake will or will not occur. In contrast to the 0- or 100-percent likelihood of a deterministic prediction, a "forecast" describes the probability of an earthquake occurring, which may range from just above 0 to just below 100 percent. The report goes on to summarize potentially precursory processes or conditions, which may include increased rates of M>4 earthquakes on the plate interface north of the Mendocino region

  10. Data base pertinent to earthquake design basis

    International Nuclear Information System (INIS)

    Sharma, R.D.

    1988-01-01

    Mitigation of earthquake risk from impending strong earthquakes is possible provided the hazard can be assessed and translated into appropriate design inputs. This requires defining the seismic risk problem, isolating the risk factors and quantifying risk in terms of physical parameters which are suitable for application in design. Like all other geological phenomena, past earthquakes hold the key to the understanding of future ones. Quantification of seismic risk at a site calls for investigating the earthquake aspects of the site region and building a data base. The scope of such investigations is illustrated in Figures 1 and 2. A more detailed definition of the earthquake problem in engineering design is given elsewhere (Sharma, 1987). The present document discusses the earthquake data base which is required to support a seismic risk evaluation programme in the context of the existing state of the art. (author). 8 tables, 10 figs., 54 refs

  11. The challenges and opportunities in cumulative effects assessment

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Melissa M., E-mail: mfoley@usgs.gov [U.S. Geological Survey, Pacific Coastal and Marine Science Center, 400 Natural Bridges, Dr., Santa Cruz, CA 95060 (United States); Center for Ocean Solutions, Stanford University, 99 Pacific St., Monterey, CA 93940 (United States); Mease, Lindley A., E-mail: lamease@stanford.edu [Center for Ocean Solutions, Stanford University, 473 Via Ortega, Stanford, CA 94305 (United States); Martone, Rebecca G., E-mail: rmartone@stanford.edu [Center for Ocean Solutions, Stanford University, 99 Pacific St., Monterey, CA 93940 (United States); Prahler, Erin E. [Center for Ocean Solutions, Stanford University, 473 Via Ortega, Stanford, CA 94305 (United States); Morrison, Tiffany H., E-mail: tiffany.morrison@jcu.edu.au [ARC Centre of Excellence for Coral Reef Studies, James Cook University, Townsville, QLD, 4811 (Australia); Murray, Cathryn Clarke, E-mail: cmurray@pices.int [WWF-Canada, 409 Granville Street, Suite 1588, Vancouver, BC V6C 1T2 (Canada); Wojcik, Deborah, E-mail: deb.wojcik@duke.edu [Nicholas School for the Environment, Duke University, 9 Circuit Dr., Durham, NC 27708 (United States)

    2017-01-15

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  12. The challenges and opportunities in cumulative effects assessment

    International Nuclear Information System (INIS)

    Foley, Melissa M.; Mease, Lindley A.; Martone, Rebecca G.; Prahler, Erin E.; Morrison, Tiffany H.; Murray, Cathryn Clarke; Wojcik, Deborah

    2017-01-01

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  13. The challenges and opportunities in cumulative effects assessment

    Science.gov (United States)

    Foley, Melissa M.; Mease, Lindley A; Martone, Rebecca G; Prahler, Erin E; Morrison, Tiffany H; Clarke Murray, Cathryn; Wojcik, Deborah

    2016-01-01

    The cumulative effects of increasing human use of the ocean and coastal zone have contributed to a rapid decline in ocean and coastal resources. As a result, scientists are investigating how multiple, overlapping stressors accumulate in the environment and impact ecosystems. These investigations are the foundation for the development of new tools that account for and predict cumulative effects in order to more adequately prevent or mitigate negative effects. Despite scientific advances, legal requirements, and management guidance, those who conduct assessments—including resource managers, agency staff, and consultants—continue to struggle to thoroughly evaluate cumulative effects, particularly as part of the environmental assessment process. Even though 45 years have passed since the United States National Environmental Policy Act was enacted, which set a precedent for environmental assessment around the world, defining impacts, baseline, scale, and significance are still major challenges associated with assessing cumulative effects. In addition, we know little about how practitioners tackle these challenges or how assessment aligns with current scientific recommendations. To shed more light on these challenges and gaps, we undertook a comparative study on how cumulative effects assessment (CEA) is conducted by practitioners operating under some of the most well-developed environmental laws around the globe: California, USA; British Columbia, Canada; Queensland, Australia; and New Zealand. We found that practitioners used a broad and varied definition of impact for CEA, which led to differences in how baseline, scale, and significance were determined. We also found that practice and science are not closely aligned and, as such, we highlight opportunities for managers, policy makers, practitioners, and scientists to improve environmental assessment.

  14. Understanding Great Earthquakes in Japan's Kanto Region

    Science.gov (United States)

    Kobayashi, Reiji; Curewitz, Daniel

    2008-10-01

    Third International Workshop on the Kanto Asperity Project; Chiba, Japan, 16-19 February 2008; The 1703 (Genroku) and 1923 (Taisho) earthquakes in Japan's Kanto region (M 8.2 and M 7.9, respectively) caused severe damage in the Tokyo metropolitan area. These great earthquakes occurred along the Sagami Trough, where the Philippine Sea slab is subducting beneath Japan. Historical records, paleoseismological research, and geophysical/geodetic monitoring in the region indicate that such great earthquakes will repeat in the future.

  15. Earthquake-triggered landslides in southwest China

    OpenAIRE

    X. L. Chen; Q. Zhou; H. Ran; R. Dong

    2012-01-01

    Southwest China is located in the southeastern margin of the Tibetan Plateau and is a region of high seismic activity. Historically, strong earthquakes that occurred here usually generated numerous landslides and caused destructive damage. This paper introduces several earthquake-triggered landslide events in this region and describes their characteristics. Also, the historical data of earthquakes with a magnitude of 7.0 or greater, having occurred in this region, is col...

  16. Measurement of four-particle cumulants and symmetric cumulants with subevent methods in small collision systems with the ATLAS detector

    CERN Document Server

    Derendarz, Dominik; The ATLAS collaboration

    2018-01-01

    Measurements of symmetric cumulants SC(n,m) = ⟨v_n^2 v_m^2⟩ − ⟨v_n^2⟩⟨v_m^2⟩ for (n,m) = (2,3) and (2,4) and asymmetric cumulants AC(n) are presented in pp, p+Pb and peripheral Pb+Pb collisions at various collision energies, aiming to probe the long-range collective nature of multi-particle production in small systems. Results are obtained using the standard cumulant method, as well as the two-subevent and three-subevent cumulant methods. Results from the standard method are found to be strongly biased by non-flow correlations, as indicated by strong sensitivity to the chosen event class definition. A systematic reduction of non-flow effects is observed when using the two-subevent method, and the results become independent of event class definition when the three-subevent method is used. The measured SC(n,m) shows an anti-correlation between v_2 and v_3, and a positive correlation between v_2 and v_4. The magnitude of SC(n,m) is constant with N_ch in pp collisions, but increases with N_ch in p+Pb and Pb+Pb collisions. ...
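
    Directly from the definition quoted above, a toy sketch of the event-averaged symmetric cumulant. Real measurements build ⟨v_n^2⟩ and ⟨v_n^2 v_m^2⟩ from multi-particle correlators (with subevents to suppress non-flow); here per-event v_n^2 values are simply assumed to be available, and the synthetic data are illustrative only.

        import numpy as np

        def symmetric_cumulant(vn2, vm2):
            """SC(n,m) = <v_n^2 v_m^2> - <v_n^2><v_m^2>, averaged over events."""
            vn2 = np.asarray(vn2, dtype=float)
            vm2 = np.asarray(vm2, dtype=float)
            return np.mean(vn2 * vm2) - np.mean(vn2) * np.mean(vm2)

        # Toy events with anti-correlated v2 and v3, as reported for SC(2,3)
        rng = np.random.default_rng(1)
        v2sq = rng.uniform(0.002, 0.010, size=5_000)
        v3sq = 0.008 - 0.5 * v2sq + rng.normal(0, 1e-4, size=5_000)
        print(symmetric_cumulant(v2sq, v3sq))  # negative -> anti-correlation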

  17. Retrospective analysis of the Spitak earthquake

    Directory of Open Access Journals (Sweden)

    A. K. Tovmassian

    1995-06-01

    Full Text Available Based on the retrospective analysis of numerous data and studies of the Spitak earthquake, the present work attempts to shed light on different aspects of that catastrophic seismic event which occurred in Northern Armenia on December 7, 1988. The authors follow a chronological order of presentation, namely: changes in geosphere, atmosphere and biosphere during the preparation of the Spitak earthquake; foreshocks; main shock; aftershocks; focal mechanisms; historical seismicity; seismotectonic position of the source; strong motion records; site effects; the macroseismic effect; collapse of buildings and structures; rescue activities; earthquake consequences; and the lessons of the Spitak earthquake.

  18. Smoking prevalence increases following Canterbury earthquakes.

    Science.gov (United States)

    Erskine, Nick; Daley, Vivien; Stevenson, Sue; Rhodes, Bronwen; Beckert, Lutz

    2013-01-01

    A magnitude 7.1 earthquake hit Canterbury in September 2010. This earthquake and associated aftershocks took the lives of 185 people and drastically changed residents' living, working, and social conditions. To explore the impact of the earthquakes on smoking status and levels of tobacco consumption in the residents of Christchurch, semistructured interviews were carried out in two city malls and the central bus exchange 15 months after the first earthquake. A total of 1001 people were interviewed. In August 2010, prior to any earthquake, 409 (41%) participants had never smoked, 273 (27%) were currently smoking, and 316 (32%) were ex-smokers. Since the September 2010 earthquake, 76 (24%) of the 316 ex-smokers had smoked at least one cigarette and 29 (38.2%) had smoked more than 100 cigarettes. Of the 273 participants who were current smokers in August 2010, 93 (34.1%) had increased consumption following the earthquake, 94 (34.4%) had not changed, and 86 (31.5%) had decreased their consumption. 53 (57%) of the 93 people whose consumption increased reported the earthquake and subsequent lifestyle changes as a reason for smoking more. In all, 24% of ex-smokers resumed smoking following the earthquake, resulting in increased smoking prevalence. Tobacco consumption levels increased in around one-third of current smokers.

  19. Thermal infrared anomalies of several strong earthquakes.

    Science.gov (United States)

    Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying

    2013-01-01

    In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes up to Ms7.0 by using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after earthquakes in all cases. The overall behaviour of the anomalies includes two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of "time-frequency relative power spectrum." (2) There exist evident and distinct characteristic periods and magnitudes of anomalous thermal radiation for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation anomalies have obvious characteristics in duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting.

  20. Real Time Earthquake Information System in Japan

    Science.gov (United States)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public, locating an earthquake and estimating its magnitude as quickly as possible. Years later, a system for prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that concerned governmental organizations could decide whether they needed to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of the occurrence of a large earthquake and the major seismic intensities caused by the earthquake, in about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: - an advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; - data telemetry networks via landlines and partly via a satellite communication link; - real-time data processing techniques, for example, automatic calculation of earthquake location and magnitude and a database-driven method for quantitative tsunami estimation; and - dissemination networks via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  1. Impact- and earthquake- proof roof structure

    International Nuclear Information System (INIS)

    Shohara, Ryoichi.

    1990-01-01

    Building roofs consist of roof slabs, an earthquake-proof layer on their upper surface, and an impact-proof layer of steel-reinforced concrete disposed above that. Because the roofs form an earthquake-proof structure in which building dampers are loaded on the upper surface of the slabs by the concrete layer, seismic inputs to the buildings can be moderated, and the impact-proof layer ensures safety against external events such as earthquakes or aircraft crashes in important facilities such as reactor buildings. (T.M.)

  2. A minimalist model of characteristic earthquakes

    DEFF Research Database (Denmark)

    Vázquez-Prada, M.; González, Á.; Gómez, J.B.

    2002-01-01

    In a spirit akin to the sandpile model of self-organized criticality, we present a simple statistical model of the cellular-automaton type which simulates the role of an asperity in the dynamics of a one-dimensional fault. This model produces an earthquake spectrum similar to the characteristic-earthquake behaviour of some seismic faults. This model, which has no parameters, is amenable to an algebraic description as a Markov chain. This possibility illuminates some important results, obtained by Monte Carlo simulations, such as the earthquake size-frequency relation and the recurrence time of the characteristic earthquake.

  3. Global Significant Earthquake Database, 2150 BC to present

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Significant Earthquake Database is a global listing of over 5,700 earthquakes from 2150 BC to the present. A significant earthquake is classified as one that...

  4. Width of surface rupture zone for thrust earthquakes: implications for earthquake fault zoning

    Directory of Open Access Journals (Sweden)

    P. Boncio

    2018-01-01

    remove outliers (e.g. 90% probability of the cumulative distribution function) and define the zone where the likelihood of having surface ruptures is the highest. This might help in sizing the zones of SFRH during seismic microzonation (SM) mapping. In order to shape zones of SFRH, a very detailed earthquake geologic study of the fault is necessary (the highest level of SM, i.e. Level 3 SM according to Italian guidelines). In the absence of such a very detailed study (basic SM, i.e. Level 1 SM of Italian guidelines), a width of ~840 m (90% probability from the "simple thrust" database of distributed ruptures, excluding B-M, F-S and Sy fault ruptures) is suggested to be sufficiently precautionary. For more detailed SM, where the fault is carefully mapped, one must consider that the highest SFRH is concentrated in a narrow zone, ~60 m in width, that should be considered as a fault avoidance zone (more than one-third of the distributed ruptures are expected to occur within this zone). The fault rupture hazard zones should be asymmetric compared to the trace of the principal fault. The average footwall to hanging wall ratio (FW:HW) is close to 1:2 in all analysed cases. These criteria are applicable to "simple thrust" faults, without considering possible B-M or F-S fault ruptures due to large-scale folding, and without considering sympathetic slip on distant faults. Areas potentially susceptible to B-M or F-S fault ruptures should have their own zones of fault rupture hazard that can be defined by detailed knowledge of the structural setting of the area (shape, wavelength, tightness and lithology of the thrust-related large-scale folds) and by geomorphic evidence of past secondary faulting. Distant active faults, potentially susceptible to sympathetic triggering, should be zoned as separate principal faults. The entire database of distributed ruptures (including B-M, F-S and Sy fault ruptures) can be useful in poorly known areas
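
    A minimal sketch of the quantile idea used above: take the distribution of distributed-rupture distances from the principal fault trace and keep the zone containing, e.g., 90% of the cumulative distribution on each side. The exponential toy data and the sign convention (negative = footwall, positive = hanging wall) are assumptions for illustration, not the paper's dataset.

        import numpy as np

        def rupture_zone_half_widths(distances, prob=0.90):
            """Half-widths of the zone containing `prob` of distributed ruptures,
            split into footwall (negative) and hanging-wall (positive) sides."""
            d = np.asarray(distances, dtype=float)
            fw = np.quantile(-d[d < 0], prob) if (d < 0).any() else 0.0
            hw = np.quantile(d[d > 0], prob) if (d > 0).any() else 0.0
            return fw, hw  # the study reports roughly a 1:2 FW:HW ratio

        # Toy distances (m) of distributed ruptures from the principal fault trace
        rng = np.random.default_rng(2)
        d = np.concatenate([-rng.exponential(100, 300), rng.exponential(200, 600)])
        print(rupture_zone_half_widths(d))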

  5. Spatial Evaluation and Verification of Earthquake Simulators

    Science.gov (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we approach the challenges in spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element and a smoothing method based on the power laws of the epidemic-type aftershock (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a decaying rate with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
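
    A sketch of the second verification approach described above, with an assumed power-law kernel ~ (r + d)^-q for spreading each simulated event's rate over the test region. The kernel form is ETAS-like, but the parameter values and normalization are illustrative, not those of the paper.

        import numpy as np

        def smoothed_rate_map(event_xy, grid_x, grid_y, d=5.0, q=1.5):
            """Distribute each simulated event over a grid with a power-law
            kernel ~ (r + d)^-q in epicentral distance r (km).

            d (km) and q are illustrative kernel parameters."""
            gx, gy = np.meshgrid(grid_x, grid_y)
            rate = np.zeros_like(gx)
            for (ex, ey) in event_xy:
                r = np.hypot(gx - ex, gy - ey)
                k = (r + d) ** (-q)
                rate += k / k.sum()  # each event contributes unit total rate
            return rate

        # Toy fault: events along a line, smoothed onto a 50 km x 50 km grid
        events = [(25.0, y) for y in np.linspace(10, 40, 8)]
        grid = np.linspace(0.0, 50.0, 101)
        print(smoothed_rate_map(events, grid, grid).shape)

    A rate map like this can then be scored against an observed catalog (e.g. via a receiver operating characteristic curve), which is how the nearest-neighbor and power-law variants were compared in the study.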

  6. Cumulants of heat transfer across nonlinear quantum systems

    Science.gov (United States)

    Li, Huanan; Agarwalla, Bijay Kumar; Li, Baowen; Wang, Jian-Sheng

    2013-12-01

    We consider thermal conduction across a general nonlinear phononic junction. Based on two-time observation protocol and the nonequilibrium Green's function method, heat transfer in steady-state regimes is studied, and practical formulas for the calculation of the cumulant generating function are obtained. As an application, the general formalism is used to study anharmonic effects on fluctuation of steady-state heat transfer across a single-site junction with a quartic nonlinear on-site pinning potential. An explicit nonlinear modification to the cumulant generating function exact up to the first order is given, in which the Gallavotti-Cohen fluctuation symmetry is found still valid. Numerically a self-consistent procedure is introduced, which works well for strong nonlinearity.

  7. A cumulant functional for static and dynamic correlation

    International Nuclear Information System (INIS)

    Hollett, Joshua W.; Hosseini, Hessam; Menzies, Cameron

    2016-01-01

    A functional for the cumulant energy is introduced. The functional is composed of a pair-correction and static and dynamic correlation energy components. The pair-correction and static correlation energies are functionals of the natural orbitals and the occupancy transferred between near-degenerate orbital pairs, rather than the orbital occupancies themselves. The dynamic correlation energy is a functional of the statically correlated on-top two-electron density. The on-top density functional used in this study is the well-known Colle-Salvetti functional. Using the cc-pVTZ basis set, the functional effectively models the bond dissociation of H2, LiH, and N2 with equilibrium bond lengths and dissociation energies comparable to those provided by multireference second-order perturbation theory. The performance of the cumulant functional is less impressive for HF and F2, mainly due to an underestimation of the dynamic correlation energy by the Colle-Salvetti functional.

  8. Fragmentation of tensor polarized deuterons into cumulative pions

    International Nuclear Information System (INIS)

    Afanas'ev, S.; Arkhipov, V.; Bondarev, V.

    1998-01-01

    The tensor analyzing power T_20 of the reaction polarized d + A → π^-(0°) + X has been measured in the fragmentation of 9 GeV tensor-polarized deuterons into pions with momenta from 3.5 to 5.3 GeV/c on hydrogen, beryllium and carbon targets. This kinematic range corresponds to the region of cumulative hadron production with the cumulative variable x_c from 1.08 to 1.76. The values of T_20 have been found to be small and consistent with positive values. This contradicts the predictions based on a direct mechanism assuming an NN collision between a high-momentum nucleon in the deuteron and a target nucleon (NN → NNπ)

  9. Experience of cumulative effects assessment in the UK

    Directory of Open Access Journals (Sweden)

    Piper Jake

    2004-01-01

    Full Text Available Cumulative effects assessment (CEA) is a development of environmental impact assessment which attempts to take into account the wider picture of what impacts may affect the environment as a result of either multiple or linear projects, or development plans. CEA is seen as a further valuable tool in promoting sustainable development. The broader canvas upon which the assessment is made leads to a suite of issues such as complexity in methods and assessment of significance, the desirability of co-operation between developers and other parties, and new ways of addressing mitigation and monitoring. After outlining the legislative position and the process of CEA, this paper looks at three case studies in the UK where cumulative assessment has been carried out; the cases concern wind farms, major infrastructure and off-shore developments.

  10. Ecosystem assessment methods for cumulative effects at the regional scale

    International Nuclear Information System (INIS)

    Hunsaker, C.T.

    1989-01-01

    Environmental issues such as nonpoint-source pollution, acid rain, reduced biodiversity, land use change, and climate change have widespread ecological impacts and require an integrated assessment approach. Since 1978, the implementing regulations for the National Environmental Policy Act (NEPA) have required assessment of potential cumulative environmental impacts. Current environmental issues have encouraged ecologists to improve their understanding of ecosystem process and function at several spatial scales. However, management activities usually occur at the local scale, and there is little consideration of the potential impacts to the environmental quality of a region. This paper proposes that regional ecological risk assessment provides a useful approach for assisting scientists in accomplishing the task of assessing cumulative impacts. Critical issues such as spatial heterogeneity, boundary definition, and data aggregation are discussed. Examples from an assessment of acidic deposition effects on fish in Adirondack lakes illustrate the importance of integrated data bases, associated modeling efforts, and boundary definition at the regional scale

  11. Polarization in high Psub(trans) and cumulative hadron production

    International Nuclear Information System (INIS)

    Efremov, A.V.

    1978-01-01

    The final hadron polarization in high P_trans processes is analyzed in the parton hard scattering picture. The scaling assumption allows a correct qualitative description to be given for the P_trans behaviour of polarization, or the escape-angle behaviour in cumulative production. Energy scaling and weak dependence on the beam and target type are predicted. A method is proposed for measuring the polarization of hadron jets

  12. Seasonal climate change patterns due to cumulative CO2 emissions

    Science.gov (United States)

    Partanen, Antti-Ilari; Leduc, Martin; Damon Matthews, H.

    2017-07-01

    Cumulative CO2 emissions are near linearly related to both global and regional changes in annual-mean surface temperature. These relationships are known as the transient climate response to cumulative CO2 emissions (TCRE) and the regional TCRE (RTCRE), and have been shown to remain approximately constant over a wide range of cumulative emissions. Here, we assessed how well this relationship holds for seasonal patterns of temperature change, as well as for annual-mean and seasonal precipitation patterns. We analyzed an idealized scenario with CO2 concentration growing at an annual rate of 1% using data from 12 Earth system models from the Coupled Model Intercomparison Project Phase 5 (CMIP5). Seasonal RTCRE values for temperature varied considerably, with the highest seasonal variation evident in the Arctic, where RTCRE was about 5.5 °C per Tt C for boreal winter and about 2.0 °C per Tt C for boreal summer. Also the precipitation response in the Arctic during boreal winter was stronger than during other seasons. We found that emission-normalized seasonal patterns of temperature change were relatively robust with respect to time, though they were sub-linear with respect to emissions particularly near the Arctic. Moreover, RTCRE patterns for precipitation could not be quantified robustly due to the large internal variability of precipitation. Our results suggest that cumulative CO2 emissions are a useful metric to predict regional and seasonal changes in precipitation and temperature. This extension of the TCRE framework to seasonal and regional climate change is helpful for communicating the link between emissions and climate change to policy-makers and the general public, and is well-suited for impact studies that could make use of estimated regional-scale climate changes that are consistent with the carbon budgets associated with global temperature targets.
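
    Since RTCRE is defined through the near-linear relation between regional temperature change and cumulative CO2 emissions, it can be estimated as a regression slope. A minimal sketch follows, with synthetic data loosely matching the Arctic-winter value quoted above; the data and noise level are invented for illustration.

        import numpy as np

        def rtcre(cum_emissions_TtC, temp_change_K):
            """Regional TCRE: least-squares slope of temperature change (K)
            against cumulative CO2 emissions (Tt C), assuming near-linearity."""
            E = np.asarray(cum_emissions_TtC, dtype=float)
            T = np.asarray(temp_change_K, dtype=float)
            slope, _intercept = np.polyfit(E, T, 1)
            return slope  # K per Tt C

        # Toy Arctic-winter series consistent with ~5.5 K per Tt C
        E = np.linspace(0.0, 2.0, 140)  # cumulative emissions under a 1%/yr run
        T = 5.5 * E + np.random.default_rng(3).normal(0, 0.3, E.size)
        print(rtcre(E, T))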

  13. Firm heterogeneity, Rules of Origin and Rules of Cumulation

    OpenAIRE

    Bombarda , Pamela; Gamberoni , Elisa

    2013-01-01

    We analyse the impact of relaxing rules of origin (ROOs) in a simple setting with heterogeneous firms that buy intermediate inputs from domestic and foreign sources. In particular, we consider the impact of switching from bilateral to diagonal cumulation when using preferences (instead of paying the MFN tariff) involving the respect of rules of origin. We find that relaxing the restrictiveness of the ROOs leads the least productive exporters to stop exporting. The empirical part confirms thes...

  14. Cumulant approach to dynamical correlation functions at finite temperatures

    International Nuclear Information System (INIS)

    Tran Minhtien.

    1993-11-01

    A new theoretical approach, based on the introduction of cumulants, to calculate thermodynamic averages and dynamical correlation functions at finite temperatures is developed. The method is formulated in Liouville space instead of Hilbert space and can be applied to operators which are not required to satisfy fermion or boson commutation relations. The application of the partitioning and projection methods for the dynamical correlation functions is discussed. The present method can be applied to weakly as well as strongly correlated systems. (author). 9 refs

  15. Severe occupational hand eczema, job stress and cumulative sickness absence.

    Science.gov (United States)

    Böhm, D; Stock Gissendanner, S; Finkeldey, F; John, S M; Werfel, T; Diepgen, T L; Breuer, K

    2014-10-01

    Stress is known to activate or exacerbate dermatoses, but the relationships between chronic stress, job-related stress and sickness absence among occupational hand eczema (OHE) patients are inadequately understood. To see whether chronic stress or burnout symptoms were associated with cumulative sickness absence in patients with OHE and to determine which factors predicted sickness absence in a model including measures of job-related and chronic stress. We investigated correlations of these factors in employed adult inpatients with a history of sickness absence due to OHE in a retrospective cross-sectional explorative study, which assessed chronic stress (Trier Inventory for the Assessment of Chronic Stress), burnout (Shirom Melamed Burnout Measure), clinical symptom severity (Osnabrück Hand Eczema Severity Index), perceived symptom severity, demographic characteristics and cumulative days of sickness absence. The study group consisted of 122 patients. OHE symptoms were not more severe among patients experiencing greater stress and burnout. Women reported higher levels of chronic stress on some measures. Cumulative days of sickness absence correlated with individual dimensions of job-related stress and, in multiple regression analysis, with an overall measure of chronic stress. Chronic stress is an additional factor predicting cumulative sickness absence among severely affected OHE patients. Other relevant factors for this study sample included the 'cognitive weariness' subscale of the Shirom Melamed Burnout Measure and the physical component summary score of the SF-36, a measure of health-related life quality. Prevention and rehabilitation should take job stress into consideration in multidisciplinary treatment strategies for severely affected OHE patients.

  16. Finite-volume cumulant expansion in QCD-colorless plasma

    Energy Technology Data Exchange (ETDEWEB)

    Ladrem, M. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Physics Department, Algiers (Algeria); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ahmed, M.A.A. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Taiz University in Turba, Physics Department, Taiz (Yemen); Alfull, Z.Z. [Taibah University, Physics Department, Faculty of Science, Al-Madinah, Al-Munawwarah (Saudi Arabia); Cherif, S. [ENS-Vieux Kouba (Bachir El-Ibrahimi), Laboratoire de Physique et de Mathematiques Appliquees (LPMA), Algiers (Algeria); Ghardaia University, Sciences and Technologies Department, Ghardaia (Algeria)

    2015-09-15

    Due to finite-size effects, the localization of the phase transition in finite systems and the determination of its order become an extremely difficult task, even in the simplest known cases. In order to identify and locate the finite-volume transition point T{sub 0}(V) of the QCD deconfinement phase transition to a colorless QGP, we have developed a new approach using the finite-size cumulant expansion of the order parameter and the L{sub mn}-method. The first six cumulants C{sub 1,2,3,4,5,6} with the corresponding under-normalized ratios (skewness Σ, kurtosis κ, pentosis Π{sub ±}, and hexosis H{sub 1,2,3}) and three unnormalized combinations of them (O = σ{sup 2}κΣ{sup -1}, U = σ{sup -2}Σ{sup -1}, N = σ{sup 2}κ) are calculated and studied as functions of (T, V). A new approach, unifying in a clear and consistent way the definitions of cumulant ratios, is proposed. A numerical FSS analysis of the obtained results has allowed us to locate accurately the finite-volume transition point. The extracted transition temperature value T{sub 0}(V) agrees with that expected T{sub 0}{sup N}(V) from the order parameter and the thermal susceptibility χ{sub T} (T, V), according to the standard procedure of localization, to within about 2%. In addition to this, a very good correlation factor is obtained, proving the validity of our cumulants method. The agreement of our results with those obtained by means of other models is remarkable. (orig.)
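
    As a purely illustrative aside, the ratio definitions quoted above are straightforward to evaluate from samples. The following sketch (not from the paper) assumes Monte Carlo samples of the order parameter stand in for the lattice data, and uses scipy's unbiased k-statistics for the first four cumulants; the skewed stand-in data are arbitrary.

      import numpy as np
      from scipy.stats import kstat   # unbiased k-statistic estimators, orders 1-4

      rng = np.random.default_rng(0)
      x = rng.exponential(size=100_000)        # stand-in order-parameter samples

      c1, c2, c3, c4 = (kstat(x, k) for k in range(1, 5))
      sigma2 = c2                              # variance sigma^2 = C2
      Sigma = c3 / c2**1.5                     # skewness ratio
      kappa = c4 / c2**2                       # kurtosis ratio
      O = sigma2 * kappa / Sigma               # O = sigma^2 kappa Sigma^-1
      U = 1.0 / (sigma2 * Sigma)               # U = sigma^-2 Sigma^-1
      N = sigma2 * kappa                       # N = sigma^2 kappa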

  17. Science and Societal Partnerships to Address Cumulative Impacts

    OpenAIRE

    Lundquist, Carolyn J.; Fisher, Karen T.; Le Heron, Richard; Lewis, Nick I.; Ellis, Joanne I.; Hewitt, Judi E.; Greenaway, Alison J.; Cartner, Katie J.; Burgess-Jones, Tracey C.; Schiel, David R.; Thrush, Simon F.

    2016-01-01

    Funding and priorities for ocean research are not separate from the underlying sociological, economic, and political landscapes that determine values attributed to ecological systems. Here we present a variation on science prioritization exercises, focussing on inter-disciplinary research questions with the objective of shifting broad scale management practices to better address cumulative impacts and multiple users. Marine scientists in New Zealand from a broad range of scientific and social...

  18. Cumulative prospect theory and mean variance analysis. A rigorous comparison

    OpenAIRE

    Hens, Thorsten; Mayer, Janos

    2012-01-01

    We compare asset allocations derived for cumulative prospect theory (CPT) based on two different methods: maximizing CPT along the mean-variance efficient frontier and maximizing it without that restriction. We find that with normally distributed returns the difference is negligible. However, using standard asset allocation data of pension funds the difference is considerable. Moreover, with derivatives like call options the restriction to the mean-variance efficient frontier results in a siza...

  19. Signal anomaly detection using modified CUSUM [cumulative sum] method

    International Nuclear Information System (INIS)

    Morgenstern, V.; Upadhyaya, B.R.; Benedetti, M.

    1988-01-01

    An important aspect of detecting anomalies in signals is identifying changes in signal behavior caused by noise, jumps, changes in bandwidth, sudden pulses and signal bias. A methodology is developed to identify, isolate and characterize these anomalies using a modification of the cumulative sum (CUSUM) approach. The new algorithm performs anomaly detection at three levels and is implemented on a general purpose computer. 7 refs., 4 figs
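
    This record does not spell out the modification, but the tabular CUSUM it builds on is compact enough to sketch. Below is a minimal two-sided version under assumed parameters (the reference mean, allowance k, and decision threshold h are all hypothetical); the report's three-level algorithm is more elaborate.

      import numpy as np

      def cusum_alarms(x, mean, k, h):
          """Two-sided tabular CUSUM: return indices where cumulative drift
          away from `mean` (beyond allowance k) exceeds threshold h."""
          s_hi = s_lo = 0.0
          alarms = []
          for i, xi in enumerate(np.asarray(x, dtype=float)):
              s_hi = max(0.0, s_hi + (xi - mean) - k)   # upward drift
              s_lo = max(0.0, s_lo - (xi - mean) - k)   # downward drift
              if s_hi > h or s_lo > h:
                  alarms.append(i)
                  s_hi = s_lo = 0.0                     # restart after an alarm
          return alarms

    Jumps and bias appear as one-sided drift; detecting noise or bandwidth changes would additionally require tracking a dispersion statistic.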

  20. GEM - The Global Earthquake Model

    Science.gov (United States)

    Smolka, A.

    2009-04-01

    Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only to experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at-large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a

  1. Pattern of ground deformation in Kathmandu valley during 2015 Gorkha Earthquake, central Nepal

    Science.gov (United States)

    Ghimire, S.; Dwivedi, S. K.; Acharya, K. K.

    2016-12-01

    The 25 April 2015 Gorkha Earthquake (Mw = 7.8), epicentered at Barpak, together with thousands of aftershocks released seismic moment nearly equivalent to a magnitude 8.0 earthquake rupturing a 150 km long fault segment. Although Kathmandu valley was expected to be severely devastated by such a major earthquake, the post-earthquake scenario is completely different. The observed destruction is far less than anticipated, and its spatial pattern differs from what was expected. This work focuses on the behavior of Kathmandu valley sediments during the strong shaking of the 2015 Gorkha Earthquake. For this purpose the spatial pattern of destruction is analyzed at heavily damaged sites. To understand the characteristics of the subsurface soil, a 2D-MASW survey was carried out using a 24-channel seismograph system. An accelerogram recorded by the Nepal Seismological Center was analyzed to characterize the strong ground motion. The Kathmandu valley comprises fluvio-lacustrine deposits of gravel, sand, silt and clay, along with a few exposures of basement rocks within the sediments. The observations show systematic repetition of destruction at an average interval of 2.5 km, mostly in sand-, silt- and clay-dominated formations. Results of the 2D-MASW show the sites of destruction are characterized by static deformation of soil (liquefaction and southerly dipping cracks). Spectral analysis of the accelerogram indicates maximum power associated with a frequency of 1.0 Hz. The results of this study explain the observed spatial pattern of destruction in Kathmandu valley: the seismic energy concentrated at 1 Hz, travelling at an average S-wave velocity of 2.5 km/s, generates an average wavelength of 2.5 km. The cumulative effect of the dominant frequency and associated wavelength resulted in static deformation of surface soil layers at an average interval of 2.5 km. This explains why the observed scenario differed from what was anticipated in Kathmandu valley.
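
    The 2.5 km figure follows from wavelength = velocity / frequency; a one-line check with the abstract's own numbers:

      v_s = 2.5                 # average S-wave velocity, km/s (from the abstract)
      f_dom = 1.0               # dominant frequency of the accelerogram, Hz
      wavelength = v_s / f_dom  # 2.5 km, matching the observed damage spacing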

  2. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    Science.gov (United States)

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.

  3. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    Science.gov (United States)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing at an exponential rate with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatran earthquake, the 2010 Maule earthquake in Chile and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. The relationship is also reasonable, considering the well-known relationship between stress and the b-value. This suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
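
    The b-value computations underlying such a study are typically done with the Aki (1965) maximum-likelihood estimator. A minimal sketch, assuming a magnitude catalogue, a completeness magnitude mc, and 0.1-unit binning; stratifying events by tidal stress amplitude, as done in the paper, is left out.

      import numpy as np

      def b_value(mags, mc, dm=0.1):
          """Aki (1965) maximum-likelihood b-value for magnitudes >= mc.
          dm is the catalogue binning; mc - dm/2 is the half-bin correction."""
          m = np.asarray(mags, dtype=float)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))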

  4. Problems of describing the cumulative effect in relativistic nuclear physics

    International Nuclear Information System (INIS)

    Baldin, A.M.

    1979-01-01

    The problem of describing the cumulative effect, i.e., particle production on nuclei in the range kinematically forbidden for one-nucleon collisions, is studied. Discrimination of events containing cumulative particles fixes configurations in the wave function of a nucleus in which several nucleons are closely spaced and their quark-parton components are collectivized. For the cumulative processes under consideration, large distances between quarks are very important. The fundamental facts and their theoretical interpretation in terms of quantum field theory and condensed-media theory in relativistic nuclear physics are presented in brief. Collisions of relativistic nuclei with low momentum transfers are considered in a fast-moving coordinate system. The basic parameter determining this type of collision is the energy of nucleon binding in nuclei. It has been shown that the short-range correlation model reproduces many characteristics of multiple particle production well and may be regarded as an approximate universal property of hadron interactions

  5. Dynamic prediction of cumulative incidence functions by direct binomial regression.

    Science.gov (United States)

    Grand, Mia K; de Witte, Theo J M; Putter, Hein

    2018-03-25

    In recent years there have been a series of advances in the field of dynamic prediction. Among those is the development of methods for dynamic prediction of the cumulative incidence function in a competing risk setting. These models enable the predictions to be updated as time progresses and more information becomes available; for example, when a patient comes back for a follow-up visit after completing a year of treatment, the risks of death and adverse events may have changed since treatment initiation. One approach to modelling the cumulative incidence function in competing risks is direct binomial regression, where right censoring of the event times is handled by inverse probability of censoring weights. We extend the approach by combining it with landmarking to enable dynamic prediction of the cumulative incidence function. The proposed models are very flexible, as they allow the covariates to have complex time-varying effects, and we illustrate how to investigate possible time-varying structures using Wald tests. The models are fitted using generalized estimating equations. The method is applied to bone marrow transplant data and the performance is investigated in a simulation study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
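
    To illustrate only the censoring-weight ingredient (not the authors' landmarking model or GEE fit), here is a sketch in plain numpy. It assumes follow-up times with indicators (1 = event, 0 = censored), estimates the censoring survival function G by Kaplan-Meier, and builds inverse-probability weights for a fixed prediction horizon; ties are handled observation by observation for brevity.

      import numpy as np

      def km_censoring_survival(time, event):
          """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t).
          event == 0 marks censoring, which is the 'event' for G."""
          order = np.argsort(time)
          t, e = np.asarray(time)[order], np.asarray(event)[order]
          at_risk = len(t)
          surv, times, g = 1.0, [], []
          for ti, ei in zip(t, e):
              if ei == 0:                         # a censoring occurs at ti
                  surv *= 1.0 - 1.0 / at_risk
              at_risk -= 1
              times.append(ti)
              g.append(surv)
          return np.array(times), np.array(g)

      def ipcw_weights(time, event, horizon):
          """Inverse probability of censoring weights for a binomial
          estimating equation targeting the incidence at a fixed horizon."""
          times, g = km_censoring_survival(time, event)

          def G(u):                               # step-function lookup
              idx = np.searchsorted(times, u, side="right") - 1
              return g[idx] if idx >= 0 else 1.0

          w = np.zeros(len(time))
          for i, (ti, ei) in enumerate(zip(time, event)):
              if ti <= horizon and ei == 1:
                  w[i] = 1.0 / G(ti)              # event observed in time
              elif ti > horizon:
                  w[i] = 1.0 / G(horizon)         # still under observation
          return w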

  6. Cumulative Risk Assessment Toolbox: Methods and Approaches for the Practitioner

    Directory of Open Access Journals (Sweden)

    Margaret M. MacDonell

    2013-01-01

    The historical approach to assessing health risks of environmental chemicals has been to evaluate them one at a time. In fact, we are exposed every day to a wide variety of chemicals and are increasingly aware of potential health implications. Although considerable progress has been made in the science underlying risk assessments for real-world exposures, implementation has lagged because many practitioners are unaware of methods and tools available to support these analyses. To address this issue, the US Environmental Protection Agency developed a toolbox of cumulative risk resources for contaminated sites, as part of a resource document that was published in 2007. This paper highlights information for nearly 80 resources from the toolbox and provides selected updates, with practical notes for cumulative risk applications. Resources are organized according to the main elements of the assessment process: (1) planning, scoping, and problem formulation; (2) environmental fate and transport; (3) exposure analysis extending to human factors; (4) toxicity analysis; and (5) risk and uncertainty characterization, including presentation of results. In addition to providing online access, plans for the toolbox include addressing nonchemical stressors and applications beyond contaminated sites and further strengthening resource accessibility to support evolving analyses for cumulative risk and sustainable communities.

  7. Energy Current Cumulants in One-Dimensional Systems in Equilibrium

    Science.gov (United States)

    Dhar, Abhishek; Saito, Keiji; Roy, Anjan

    2018-06-01

    A recent theory based on fluctuating hydrodynamics predicts that one-dimensional interacting systems with particle, momentum, and energy conservation exhibit anomalous transport that falls into two main universality classes. The classification is based on behavior of equilibrium dynamical correlations of the conserved quantities. One class is characterized by sound modes with Kardar-Parisi-Zhang scaling, while the second class has diffusive sound modes. The heat mode follows Lévy statistics, with different exponents for the two classes. Here we consider heat current fluctuations in two specific systems, which are expected to be in the above two universality classes, namely, a hard particle gas with Hamiltonian dynamics and a harmonic chain with momentum conserving stochastic dynamics. Numerical simulations show completely different system-size dependence of current cumulants in these two systems. We explain this numerical observation using a phenomenological model of Lévy walkers with inputs from fluctuating hydrodynamics. This consistently explains the system-size dependence of heat current fluctuations. For the latter system, we derive the cumulant-generating function from a more microscopic theory, which also gives the same system-size dependence of cumulants.

  8. Preference, resistance to change, and the cumulative decision model.

    Science.gov (United States)

    Grace, Randolph C

    2018-01-01

    According to behavioral momentum theory (Nevin & Grace, 2000a), preference in concurrent chains and resistance to change in multiple schedules are independent measures of a common construct representing reinforcement history. Here I review the original studies on preference and resistance to change in which reinforcement variables were manipulated parametrically, conducted by Nevin, Grace and colleagues between 1997 and 2002, as well as more recent research. The cumulative decision model proposed by Grace and colleagues for concurrent chains is shown to provide a good account of both preference and resistance to change, and is able to predict the increased sensitivity to reinforcer rate and magnitude observed with constant-duration components. Residuals from fits of the cumulative decision model to preference and resistance to change data were positively correlated, supporting the prediction of behavioral momentum theory. Although some questions remain, the learning process assumed by the cumulative decision model, in which outcomes are compared against a criterion that represents the average outcome value in the current context, may provide a plausible model for the acquisition of differential resistance to change. © 2018 Society for the Experimental Analysis of Behavior.

  9. Stakeholder attitudes towards cumulative and aggregate exposure assessment of pesticides.

    Science.gov (United States)

    Verbeke, Wim; Van Loo, Ellen J; Vanhonacker, Filiep; Delcour, Ilse; Spanoghe, Pieter; van Klaveren, Jacob D

    2015-05-01

    This study evaluates the attitudes and perspectives of different stakeholder groups (agricultural producers, pesticide manufacturers, trading companies, retailers, regulators, food safety authorities, scientists and NGOs) towards the concepts of cumulative and aggregate exposure assessment of pesticides by means of qualitative in-depth interviews (n = 15) and a quantitative stakeholder survey (n = 65). The stakeholders involved generally agreed that the use of chemical pesticides is needed, primarily to feed the growing world population, while clearly acknowledging the problematic nature of human exposure to pesticide residues. Current monitoring was generally perceived to be adequate, but the timeliness and consistency of monitoring practices across countries were questioned. The concept of cumulative exposure assessment was better understood by stakeholders than the concept of aggregate exposure assessment. Identified pitfalls were data availability, data limitations, sources and ways of dealing with uncertainties, as well as information and training needs. Regulators and food safety authorities were perceived as the stakeholder groups for whom cumulative and aggregate pesticide exposure assessment methods and tools would be most useful and acceptable. Insights obtained from this exploratory study have been integrated in the development of targeted and stakeholder-tailored dissemination and training programmes that were implemented within the EU-FP7 project ACROPOLIS. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Stress triggering of the Lushan M7.0 earthquake by the Wenchuan Ms8.0 earthquake

    Directory of Open Access Journals (Sweden)

    Wu Jianchao

    2013-08-01

    The Wenchuan Ms8.0 earthquake and the Lushan M7.0 earthquake occurred in the north and south segments of the Longmenshan nappe tectonic belt, respectively. Based on the focal mechanism and finite fault model of the Wenchuan Ms8.0 earthquake, we calculated the Coulomb failure stress change. The inverted Coulomb stress changes based on the Nishimura and Chenji models both show that the Lushan M7.0 earthquake occurred in an area of increased Coulomb failure stress induced by the Wenchuan Ms8.0 earthquake. The Coulomb failure stress increased by approximately 0.135-0.152 bar at the source of the Lushan M7.0 earthquake, which is far more than the stress triggering threshold. Therefore, the Lushan M7.0 earthquake was most likely triggered by the Coulomb failure stress change.
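
    For reference, the quantity compared here is commonly computed as ΔCFS = Δτ + μ′Δσn, with Δτ the shear stress change resolved in the receiver fault's slip direction, Δσn the normal stress change (positive for unclamping), and μ′ an effective friction coefficient. A one-function sketch with an assumed μ′ = 0.4; the values are illustrative, not those of the Nishimura or Chenji models.

      def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
          """Delta CFS = d_tau + mu' * d_sigma_n; positive values load the
          receiver fault toward failure."""
          return d_tau + mu_eff * d_sigma_n

      # A commonly cited triggering threshold is ~0.1 bar (0.01 MPa); the
      # 0.135-0.152 bar reported at the Lushan source exceeds it.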

  11. Foreshock occurrence before large earthquakes

    Science.gov (United States)

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  12. Earthquakes, detecting and understanding them

    International Nuclear Information System (INIS)

    2008-05-01

    The signature at the surface of the Earth is continually changing on a geological timescale. The tectonic plates, which make up this surface, are moving in relation to each other. On a human timescale, these movements are the result of earthquakes, which suddenly release energy accumulated over a period of time. The vibrations they produce propagate through the interior of the Earth: these are seismic waves. However, other phenomena can generate seismic waves, such as volcanoes, quarry blasts, etc. The surf of the ocean waves on the coasts, the wind in the trees and human activity (industry and road traffic) all contribute to the 'seismic background noise'. Sensors are able to detect signals from events, which are then discriminated, analyzed and located. Earthquakes and active volcanoes are not distributed randomly over the surface of the globe: they mainly coincide with mountain chains and ocean trenches and ridges. 'An earthquake results from the abrupt release of the energy accumulated by movements and rubbing of different plates'. The study of the propagation of seismic waves has allowed the outline of the plates inside the Earth to be determined and has highlighted their movements. There are seven major plates which are colliding, diverging or sliding past each other. Each year the continents move several centimeters with respect to one another. This process, known as 'continental drift', was finally explained by plate tectonics. The initial hypothesis for this science dates from the beginning of the 20th century, but it was not confirmed until the 1960s. It explains that convection inside the Earth is the source of the forces required for these movements. This science, as well as explaining these great movements, has provided a coherent, unifying and quantitative framework, which unites the explanations for all the geophysical phenomena under one mechanism. (authors)

  13. Statistical properties of earthquakes clustering

    Directory of Open Access Journals (Sweden)

    A. Vecchio

    2008-04-01

    Often in nature the temporal distribution of inhomogeneous stochastic point processes can be modeled as a realization of renewal Poisson processes with a variable rate. Here we investigate one of the classical examples, namely, the temporal distribution of earthquakes. We show that this process strongly departs from Poisson statistics for both catalogue and sequence data sets. This indicates the presence of correlations in the system, probably related to the stressing perturbation characterizing the seismicity in the area under analysis. As shown by this analysis, the catalogues, at variance with sequences, show common statistical properties.

  14. Refresher Course on Physics of Earthquakes -98 ...

    Indian Academy of Sciences (India)

    The objective of this course is to help teachers gain an understanding of the earthquake phenomenon and the physical processes involved in its genesis, as well as of the earthquake waves which propagate the energy released by the earthquake rupture outward from the source. The Course will begin with mathematical ...

  15. Tutorial on earthquake rotational effects: historical examples

    Czech Academy of Sciences Publication Activity Database

    Kozák, Jan

    2009-01-01

    Roč. 99, 2B (2009), s. 998-1010 ISSN 0037-1106 Institutional research plan: CEZ:AV0Z30120515 Keywords: rotational seismic models * earthquake rotational effects * historical earthquakes Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.860, year: 2009

  16. Wood-framed houses for earthquake zones

    DEFF Research Database (Denmark)

    Hansen, Klavs Feilberg

    Wood-framed houses with sheathing are suitable for use in earthquake zones. The Direction describes a method of determining the earthquake forces on a house and shows how these forces can be resisted by diaphragm action in the walls, floors, and roof of the house. An appendix explains how...

  17. Earthquake effect on the geological environment

    International Nuclear Information System (INIS)

    Kawamura, Makoto

    1999-01-01

    Acceleration caused by earthquakes, changes in the water pressure, and the rock-mass strain were monitored for a series of 344 earthquakes from 1990 to 1998 at the Kamaishi In Situ Test Site. The largest acceleration registered was 57.14 gal, for the 'North Coast of Iwate Earthquake' (M4.4) that occurred in June 1996. Changes in the water pressure were recorded for 27 earthquakes; the largest change was -0.35 kgf/cm{sup 2}. The water-pressure change caused by an earthquake was, however, usually smaller than that caused by rainfall in this area. No change in the electric conductivity or pH of the ground water was detected before or after the earthquakes throughout the entire period of monitoring. The rock-mass strain was measured with an extensometer whose detection limit was of the order of 10{sup -8} to 10{sup -9} degrees, and a remaining strain of about 2.5x10{sup -9} degrees was detected following the 'Offshore Miyagi Earthquake' (M5.1) in October 1997. (H. Baba)

  18. Designing an Earthquake-Resistant Building

    Science.gov (United States)

    English, Lyn D.; King, Donna T.

    2016-01-01

    How do cross-bracing, geometry, and base isolation help buildings withstand earthquakes? These important structural design features involve fundamental geometry that elementary school students can readily model and understand. The problem activity, Designing an Earthquake-Resistant Building, was undertaken by several classes of sixth-grade…

  19. Passive containment system in high earthquake motion

    International Nuclear Information System (INIS)

    Kleimola, F.W.; Falls, O.B. Jr.

    1977-01-01

    High earthquake motion necessitates major design modifications in the complex of plant structures, systems and components in a nuclear power plant. Distinctive features imposed by seismic category, safety class and quality classification requirements for the high seismic ground acceleration loadings are significantly reflected in plant costs. The design features of the Passive Containment System (PCS) responding to high earthquake ground motion are described

  20. Napa Earthquake impact on water systems

    Science.gov (United States)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California on August 24 at 3 a.m. local time, with a magnitude of 6.0. It was the largest earthquake in the SF Bay Area since the 1989 Loma Prieta earthquake. Economic losses topped $1 billion. Winemakers cleaned up and estimated the damage, including losses to tourism; around 15,000 cases of cabernet poured into the garden at the Hess Collection. Earthquakes can raise water pollution risks and could cause a water crisis. California has suffered water shortages in recent years, so understanding how to prevent pollution of groundwater and surface water by earthquakes could be helpful. This research gives a clear view of the drinking water system in California and pollution of river systems, as well as an estimation of earthquake impact on water supply. The Sacramento-San Joaquin River delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water export, and salt water intrusion has reduced fresh water outflows. Strong shaking from a nearby earthquake can cause liquefaction of saturated, loose, sandy soils and could potentially damage major delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: it could potentially damage the freshwater supply system.

  1. Instruction system upon occurrence of earthquakes

    International Nuclear Information System (INIS)

    Inagaki, Masakatsu; Morikawa, Matsuo; Suzuki, Satoshi; Fukushi, Naomi.

    1987-01-01

    Purpose: To enable rapid re-starting of a nuclear reactor after an earthquake by informing operators of the properties of the earthquake encountered and properly displaying the state of damage in comparison with the designed standard values of the facilities. Constitution: Even where the maximum accelerations of the earthquake encountered exceed the designed standard values, the equipment may still remain intact, depending on the wave components of the seismic movement and the vibration properties inherent to the equipment. Taking note of this fact, the instruction device comprises a system that indicates the relationship between the seismic waveforms of the earthquake encountered and the scram setting values, a system for comparing the floor response spectrum of the encountered seismic waveforms with the designed floor response spectrum used in the design of the equipment, and a system for indicating the equipment requiring inspection after the earthquake. Accordingly, it is possible to improve operability upon scram of a nuclear power plant undergoing an earthquake, and to improve power saving and safety by clearly defining the portions to be inspected after the earthquake. (Kawakami, Y.)

  2. Earthquake Hazard Analysis Methods: A Review

    Science.gov (United States)

    Sari, A. M.; Fakhrurrozi, A.

    2018-02-01

    One of the natural disasters with the most significant impact in terms of risk and damage is the earthquake. Countries such as China, Japan, and Indonesia are located on actively moving continental plates and experience earthquakes more frequently than other countries. Several methods of earthquake hazard analysis have been applied, for example analyzing seismic zones and earthquake hazard micro-zonation, using the Neo-Deterministic Seismic Hazard Analysis (N-DSHA) method, and using remote sensing. In application, it is necessary to review the effectiveness of each technique in advance. Considering time efficiency and data accuracy, remote sensing is used as a reference to assess earthquake hazard accurately and quickly, as only limited time is available for the right decision-making shortly after a disaster. Exposed areas and potentially vulnerable areas can be easily analyzed using remote sensing. Technological developments in remote sensing, such as GeoEye-1, provide added value and excellence in the use of remote sensing as one of the methods for assessing earthquake risk and damage. Furthermore, the use of this technique is expected to be considered in designing policies for disaster management in particular, and can reduce the risk of natural disasters such as earthquakes in Indonesia.

  3. How fault geometry controls earthquake magnitude

    Science.gov (United States)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw9.3 Sumatra-Andaman earthquake in 2004 and the Mw9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  4. Evolution of costly explicit memory and cumulative culture.

    Science.gov (United States)

    Nakamaru, Mayuko

    2016-06-21

    Humans can acquire new information and modify it (cumulative culture) based on their learning and memory abilities, especially explicit memory, through the processes of encoding, consolidation, storage, and retrieval. Explicit memory is categorized into semantic and episodic memories. Animals have semantic memory, while episodic memory is unique to humans and essential for innovation and the evolution of culture. As both episodic and semantic memory are needed for innovation, the evolution of explicit memory influences the evolution of culture. However, previous theoretical studies have shown that environmental fluctuations influence the evolution of imitation (social learning) and innovation (individual learning) and assume that memory is not an evolutionary trait. If individuals can store and retrieve acquired information properly, they can modify it and innovate new information. Therefore, being able to store and retrieve information is essential from the perspective of cultural evolution. However, if both storage and retrieval were too costly, forgetting and relearning would have an advantage over storing and retrieving acquired information. In this study, using mathematical analysis and individual-based simulations, we investigate whether cumulative culture can promote the coevolution of costly memory and social and individual learning, assuming that cumulative culture improves the fitness of each individual. The conclusions are: (1) without cumulative culture, a social learning cost is essential for the evolution of storage-retrieval. Costly storage-retrieval can evolve with individual learning but costly social learning does not evolve. When low-cost social learning evolves, the repetition of forgetting and learning is favored more than the evolution of costly storage-retrieval, even though a cultural trait improves the fitness. (2) When cumulative culture exists and improves fitness, storage-retrieval can evolve with social and/or individual learning, which

  5. The October 1992 Parkfield, California, earthquake prediction

    Science.gov (United States)

    Langbein, J.

    1992-01-01

    A magnitude 4.7 earthquake occurred near Parkfield, California, on October 20, 1992, at 05:28 UTC (October 19 at 10:28 p.m. local or Pacific Daylight Time). This moderate shock, interpreted as the potential foreshock of a damaging earthquake on the San Andreas fault, triggered long-standing federal, state and local government plans to issue a public warning of an imminent magnitude 6 earthquake near Parkfield. Although the predicted earthquake did not take place, sophisticated suites of instruments deployed as part of the Parkfield Earthquake Prediction Experiment recorded valuable data associated with an unusual series of events. This article describes the geological aspects of these events, which occurred near Parkfield in October 1992. The accompanying article, an edited version of a press conference by Richard Andrews, the Director of the California Office of Emergency Services (OES), describes the governmental response to the prediction.

  6. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis on the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  7. Low cost earthquake resistant ferrocement small house

    International Nuclear Information System (INIS)

    Saleem, M.A.; Ashraf, M.; Ashraf, M.

    2008-01-01

    The greatest humanitarian challenge still faced one year after the Kashmir (Hazara) earthquake is that of providing shelter. Currently, one in seven people on the globe lives in a slum or refugee camp. The earthquake of October 2005 resulted in a great loss of life and property. This research work is mainly focused on the design of a small, low-cost, earthquake-resistant house. Ferrocement panels are recommended as the main structural elements, with a lightweight truss roofing system. Earthquake resistance is ensured by analyzing the structure in ETABS for the seismic activity of zone 4. The behavior of the structure is found to be satisfactory under earthquake loading. An estimate of cost is also presented, which shows that it is an economical solution. (author)

  8. Antioptimization of earthquake excitation and response

    Directory of Open Access Journals (Sweden)

    G. Zuccaro

    1998-01-01

    The paper presents a novel approach to predicting the response of earthquake-excited structures. The earthquake excitation is expanded in terms of a series of deterministic functions. The coefficients of the series are represented as a point in N-dimensional space. Each available accelerogram at a certain site is then represented as a point in the above space, modeling the available fragmentary historical data. The minimum-volume ellipsoid containing all points is constructed. Ellipsoidal models of uncertainty, pertinent to earthquake excitation, are developed. The maximum response of a structure subjected to the earthquake excitation, within ellipsoidal modeling of the latter, is determined. This procedure of determining the least favorable response was termed in the literature (Elishakoff, 1991) antioptimization. It appears that under the inherent uncertainty of earthquake excitation, antioptimization analysis is a viable alternative to the stochastic approach.
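
    The minimum-volume enclosing ellipsoid at the core of this construction can be computed with Khachiyan's iterative algorithm. A sketch in plain numpy, assuming each row of P is one accelerogram's coefficient vector in the N-dimensional space; the tolerance is arbitrary.

      import numpy as np

      def min_volume_ellipsoid(P, tol=1e-4):
          """Khachiyan's algorithm: smallest ellipsoid containing the rows
          of P. Returns center c and matrix A with (x-c)' A (x-c) <= 1."""
          n, d = P.shape
          Q = np.hstack([P, np.ones((n, 1))]).T       # lifted points, (d+1, n)
          u = np.full(n, 1.0 / n)                     # weights on the points
          err = tol + 1.0
          while err > tol:
              X = Q @ np.diag(u) @ Q.T
              M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
              j = int(np.argmax(M))                   # most outlying point
              step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
              new_u = (1.0 - step) * u
              new_u[j] += step
              err = np.linalg.norm(new_u - u)
              u = new_u
          c = P.T @ u
          A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
          return c, A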

  9. Ionospheric Anomaly before the Kyushu, Japan Earthquake

    Directory of Open Access Journals (Sweden)

    YANG Li

    2017-05-01

    GIM data released by IGS are used in this article, and a new method combining the sliding time window method with correlation analysis of ionospheric TEC between adjacent grid points is proposed to study the relationship between pre-earthquake ionospheric anomalies and the earthquake. By analyzing the abnormal change of TEC at the 5 grid points around the seismic region, an abnormal change of ionospheric TEC is found before the earthquake, and the correlation between the TEC sequences of the grid points is significantly affected by the earthquake. Based on analysis of the spatial distribution of the TEC anomaly, anomalies of 6 h, 12 h and 6 h were found near the epicenter three days before the earthquake. Finally, ionospheric tomography is used to invert the electron density, and the distribution of the electron density in the ionospheric anomaly is further analyzed.

  10. El Carreto o Cumulá - Aspidosperma Dugandii Standl

    Directory of Open Access Journals (Sweden)

    Dugand Armando

    1944-03-01

    Common names: Carreto (Atlántico, Bolívar, Magdalena); Cumulá, Comulá (Cundinamarca, Tolima). According to Dr. Emilio Robledo (Lecciones de Bot. ed. 3, 2: 544. 1939), the name Carreto is also used in Puerto Berrío (Antioquia). The same author (loc. cit.) gives the name Comulá for an undetermined species of Viburnum in Mariquita (Tolima), and J. M. Duque, referring to the same plant and locality (in Bot. Gen. Colomb. 340, 356. 1943), attributes this common name to Aspidosperma ellipticum Rusby. However, the wood samples of Cumulá or Comulá that I have examined, coming from the Mariquita region (one of which was recently sent to me by the distinguished ichthyologist Mr. Cecil Miles), belong without any doubt to A. Dugandii Standl. On the other hand, Santiago Cortés (Fl. Colomb. 206. 1898; ed. 2: 239. 1912) cites the Cumulá "of Anapoima and other places on the Magdalena (river)", saying that it belongs to the Leguminosae; but this author's very brief description of the wood, "orange-colored and notable for its density, hardness and resistance to humidity", leads me to believe that it is the same Cumulá recently collected in Tocaima, since that town is located only a few kilometers from Anapoima.

  11. What Can Sounds Tell Us About Earthquake Interactions?

    Science.gov (United States)

    Aiken, C.; Peng, Z.

    2012-12-01

    It is important not only for seismologists but also for educators to effectively convey information about earthquakes and the influence earthquakes can have on each other. Recent studies using auditory display [e.g. Kilb et al., 2012; Peng et al., 2012] have depicted catastrophic earthquakes and the effects large earthquakes can have on other parts of the world. Auditory display of earthquakes, which combines static images with time-compressed sound of recorded seismic data, is a new approach to disseminating information to a general audience about earthquakes and earthquake interactions. Earthquake interactions are important for understanding the underlying physics of earthquakes and other seismic phenomena such as tremors, in addition to their source characteristics (e.g. frequency content, amplitude). Earthquake interactions can include, for example, a large, shallow earthquake followed by increased seismicity around the mainshock rupture (i.e. aftershocks) or even a large earthquake triggering earthquakes or tremors several hundreds to thousands of kilometers away [Hill and Prejean, 2007; Peng and Gomberg, 2010]. We use standard tools like MATLAB, QuickTime Pro, and Python to produce animations that illustrate earthquake interactions. Our efforts are focused on producing animations that depict cross-section (side) views of tremors triggered along the San Andreas Fault by distant earthquakes, as well as map (bird's eye) views of mainshock-aftershock sequences such as the 2011/08/23 Mw5.8 Virginia earthquake sequence. These examples of earthquake interactions include sonifying earthquake and tremor catalogs as musical notes (e.g. piano keys) as well as audifying seismic data using time-compression. Our overall goal is to use auditory display to invigorate a general interest in earthquake seismology that leads to an understanding of how earthquakes occur, how earthquakes influence one another as well as tremors, and what the musical properties of these
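
    As a sketch of the audification step only (the file name, speed-up factor, and trace are placeholders, not details from the study), time compression simply means writing the samples out at a much higher rate than they were recorded:

      import numpy as np
      from scipy.io import wavfile

      def audify(trace, rate_hz, speedup=200, out="quake.wav"):
          """Write a seismic trace as audio, played back `speedup` times
          faster than real time so low frequencies become audible."""
          x = np.asarray(trace, dtype=float)
          peak = np.abs(x).max()
          x = x / peak if peak > 0 else x          # normalize to [-1, 1]
          wavfile.write(out, int(rate_hz * speedup), x.astype(np.float32))

      # e.g. a 100 Hz record at 200x maps 0.5 Hz ground motion to 100 Hz audio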

  12. Earthquake activity along the Himalayan orogenic belt

    Science.gov (United States)

    Bai, L.; Mori, J. J.

    2017-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences, located in the western, central and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles steeper than the MHT. These observations provide new constraints on the collision and uplift processes of the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma Arc subduction zone is a typical oblique plate convergence zone. Its eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  13. Seven years of postseismic deformation following the 2003 Mw = 6.8 Zemmouri earthquake (Algeria) from InSAR time series

    KAUST Repository

    Cetin, Esra; Meghraoui, Mustapha; Cakir, Ziyadin; Akoglu, Ahmet M.; Mimouni, Omar; Chebbah, Mouloud

    2012-05-28

    We study the postseismic surface deformation of the Mw 6.8, 2003 Zemmouri earthquake (northern Algeria) using the Multi-Temporal Small Baseline InSAR technique. InSAR time series obtained from 31 Envisat ASAR images from 2003 to 2010 reveal sub-cm coastline ground movements between Cap Matifou and Dellys. Two regions display subsidence at a maximum rate of 2 mm/yr in Cap Djenet and 3.5 mm/yr in Boumerdes. These regions correlate well with areas of maximum coseismic uplift, consistent with their association with two rupture segments. Inverse modeling suggests that subsidence in the areas of high coseismic uplift can be explained by afterslip on shallow sections (<5 km) of the fault above the areas of coseismic slip, in agreement with previous GPS observations. The earthquake's impact on soft sediments and the groundwater table southwest of the earthquake area characterizes ground deformation of non-tectonic origin. The cumulative postseismic moment due to 7 years of afterslip is equivalent to an Mw 6.3 earthquake. Therefore, the postseismic deformation and stress buildup have significant implications for earthquake cycle models and the recurrence intervals of large earthquakes in the Algiers area.
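
    The moment-magnitude equivalence quoted at the end follows the standard Hanks-Kanamori (1979) relation; a quick sketch (the input moment below is an illustrative value chosen to reproduce Mw ≈ 6.3, not a number from the paper):

      import math

      def moment_to_mw(m0_newton_meters):
          """Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
          return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

      print(round(moment_to_mw(3.5e18), 1))   # ~6.3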

  14. Earthquake damage to underground facilities

    International Nuclear Information System (INIS)

    Pratt, H.R.; Stephenson, D.E.; Zandt, G.; Bouchon, M.; Hustrulid, W.A.

    1980-01-01

    In order to assess the seismic risk for an underground facility, a data base was established and analyzed to evaluate the potential for seismic disturbance. Substantial damage to underground facilities is usually the result of displacements primarily along pre-existing faults and fractures, or at the surface entrance to these facilities. Evidence of this comes from both earthquakes and large explosions. Therefore, the displacement due to earthquakes as a function of depth is important in the evaluation of the hazard to underground facilities. To evaluate potential displacements due to seismic effects of block motions along pre-existing or induced fractures, the displacement fields surrounding two types of faults were investigated. Analytical models were used to determine relative displacements of shafts and near-surface displacement of large rock masses. Numerical methods were used to determine the displacement fields associated with pure strike-slip and vertical normal faults. Results are presented as displacements for various fault lengths as a function of depth and distance. This provides input to determine potential displacements in terms of depth and distance for underground facilities, important for assessing potential sites and design parameters

  15. Use of earthquake experience data

    International Nuclear Information System (INIS)

    Eder, S.J.; Eli, M.W.

    1991-01-01

    At many of the older existing US Department of Energy (DOE) facilities, the need has arisen for evaluation guidelines for natural phenomena hazard assessment. The effect of a design basis earthquake at most of these facilities is one of the main concerns. Earthquake experience data can provide a basis for the needed seismic evaluation guidelines, resulting in an efficient screening evaluation methodology for several of the items that are in the scope of the DOE facility reviews. The experience-based screening evaluation methodology, when properly established and implemented by trained engineers, has proven to result in sufficient safety margins and focuses on real concerns via facility walkdowns, usually at costs much less than the alternative options of analysis and testing. This paper summarizes a program that is being put into place to establish uniform seismic evaluation guidelines and criteria for evaluation of existing DOE facilities. The intent of the program is to maximize use of past experience, in conjunction with a walkdown screening evaluation process

  16. Exposure to rapid succession disasters: a study of residents at the epicenter of the Chilean Bío Bío earthquake.

    Science.gov (United States)

    Garfin, Dana Rose; Silver, Roxane Cohen; Ugalde, Francisco Javier; Linn, Heiko; Inostroza, Manuel

    2014-08-01

    We examined cumulative and specific types of trauma exposure as predictors of distress and impairment following a multifaceted community disaster. Approximately 3 months after the 8.8 magnitude earthquake, tsunami, and subsequent looting in Bío Bío, Chile, face-to-face interviews were conducted in 5 provinces closest to the epicenter. Participants (N = 1,000) were randomly selected using military topographic records and census data. Demographics, exposure to discrete components of the disaster (earthquake, tsunami, looting), and exposure to secondary stressors (property loss, injury, death) were evaluated as predictors of posttraumatic stress (PTS) symptoms, global distress, and functional impairment. Prevalence of probable posttraumatic stress disorder was 18.95%. In adjusted models examining specificity of exposure to discrete disaster components and secondary stressors, PTS symptoms and global distress were associated with earthquake intensity, tsunami exposure, and injury to self/close other. Increased functional impairment correlated with earthquake intensity and injury to self/close other. In adjusted models, cumulative exposure to secondary stressors correlated with PTS symptoms, global distress, and functional impairment; cumulative count of exposure to discrete disaster components did not. Exploratory analyses indicated that, beyond direct exposure, appraising the tsunami and looting as the worst components of the disaster correlated with greater media exposure and higher socioeconomic status, respectively. Overall, threat to life indicators correlated with worse outcomes. As failure of government tsunami warnings resulted in many deaths, findings suggest disasters compounded by human errors may be particularly distressing. We advance theory regarding cumulative and specific trauma exposure as predictors of postdisaster distress and provide information for enhancing targeted postdisaster interventions. (c) 2014 APA, all rights reserved.

  18. Probabilistic approach to earthquake prediction.

    Directory of Open Access Journals (Sweden)

    G. D'Addezio

    2002-06-01

    Full Text Available The evaluation of any earthquake forecast hypothesis requires the application of rigorous statistical methods. It implies a univocal definition of the model characterising the concerned anomaly or precursor, so that it can be objectively recognised in any circumstance and by any observer. A valid forecast hypothesis is expected to maximise successes and minimise false alarms. The probability gain associated with a precursor is also a popular way to estimate the quality of the predictions based on such a precursor. Some scientists make use of a statistical approach based on the computation of the likelihood of an observed realisation of seismic events, and on the comparison of the likelihoods obtained under different hypotheses. This method can be extended to algorithms that allow the computation of the density distribution of the conditional probability of earthquake occurrence in space, time and magnitude. Whatever method is chosen for building up a new hypothesis, the final assessment of its validity should be carried out by a test on a new and independent set of observations. The implementation of this test could, however, be problematic for seismicity characterised by long-term recurrence intervals. Even using the historical record, which may span time windows varying from a few centuries to a few millennia, we have a low probability of catching more than one or two events on the same fault. By extending the record of past earthquakes back in time up to several millennia, paleoseismology represents a great opportunity to study how earthquakes recur through time and thus provides innovative contributions to time-dependent seismic hazard assessment. Sets of paleoseismologically dated earthquakes have been established for some faults in the Mediterranean area: the Irpinia fault in Southern Italy, the Fucino fault in Central Italy, the El Asnam fault in Algeria and the Skinos fault in Central Greece. By using the age of the
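
    The probability gain mentioned above has a simple operational form: it compares the event rate inside alarm windows with the unconditional background rate. A minimal Python sketch (all counts are hypothetical, for illustration only):

        # Probability gain of a precursor: how much an alarm raises the
        # event rate relative to the background rate.
        def probability_gain(events_in_alarms, alarm_time, total_events, total_time):
            """G = P(event | alarm) / P(event), estimated from occurrence rates."""
            rate_in_alarms = events_in_alarms / alarm_time   # events per year inside alarms
            background_rate = total_events / total_time      # unconditional event rate
            return rate_in_alarms / background_rate

        # Hypothetical record: 6 of 10 events fell inside alarms covering
        # 5% of a 50-year observation window.
        print(probability_gain(6, 0.05 * 50, 10, 50))  # 12.0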

  19. Mismatch or cumulative stress : Toward an integrated hypothesis of programming effects

    NARCIS (Netherlands)

    Nederhof, Esther; Schmidt, Mathias V.

    2012-01-01

    This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the

  20. 33 CFR 222.4 - Reporting earthquake effects.

    Science.gov (United States)

    2010-07-01

    Title 33, Navigation and Navigable Waters; Department of Defense, Engineering and Design, § 222.4, Reporting earthquake effects. (a) Purpose: this section concerns significant earthquakes, primarily damage surveys following the occurrence of earthquakes.

  1. Earthquakes

    Science.gov (United States)

  2. Science and societal partnerships to address cumulative impacts

    Directory of Open Access Journals (Sweden)

    Carolyn J Lundquist

    2016-02-01

    Full Text Available Funding and priorities for ocean research are not separate from the underlying sociological, economic, and political landscapes that determine values attributed to ecological systems. Here we present a variation on science prioritisation exercises, focussing on inter-disciplinary research questions with the objective of shifting broad scale management practices to better address cumulative impacts and multiple users. Marine scientists in New Zealand from a broad range of scientific and social-scientific backgrounds ranked 48 statements of research priorities. At a follow-up workshop, participants discussed five over-arching themes based on survey results. These themes were used to develop mechanisms to increase the relevance and efficiency of scientific research while acknowledging socio-economic and political drivers of research agendas in New Zealand's ocean ecosystems. Overarching messages included the need to: (1) determine the conditions under which 'surprises' (sudden and substantive undesirable changes) are likely to occur and the socio-ecological implications of such changes; (2) develop methodologies to reveal the complex and cumulative effects of change in marine systems, and their implications for resource use, stewardship, and restoration; (3) assess potential solutions to management issues that balance long-term and short-term benefits and encompass societal engagement in decision-making; (4) establish effective and appropriately resourced institutional networks to foster collaborative, solution-focused marine science; and (5) establish cross-disciplinary dialogues to translate diverse scientific and social-scientific knowledge into innovative regulatory, social and economic practice. In the face of multiple uses and cumulative stressors, ocean management frameworks must be adapted to build a collaborative framework across science, governance and society that can help stakeholders navigate uncertainties and socio-ecological surprises.

  3. Cumulative risk hypothesis: Predicting and preventing child maltreatment recidivism.

    Science.gov (United States)

    Solomon, David; Åsberg, Kia; Peer, Samuel; Prince, Gwendolyn

    2016-08-01

    Although Child Protective Services (CPS) and other child welfare agencies aim to prevent further maltreatment in cases of child abuse and neglect, recidivism is common. Having a better understanding of recidivism predictors could aid in preventing additional instances of maltreatment. A previous study identified two CPS interventions that predicted recidivism: psychotherapy for the parent, which was related to a reduced risk of recidivism, and temporary removal of the child from the parent's custody, which was related to an increased recidivism risk. However, counter to expectations, this previous study did not identify any other specific risk factors related to maltreatment recidivism. For the current study, it was hypothesized that (a) cumulative risk (i.e., the total number of risk factors) would significantly predict maltreatment recidivism above and beyond intervention variables in a sample of CPS case files and that (b) therapy for the parent would be related to a reduced likelihood of recidivism. Because it was believed that the relation between temporary removal of a child from the parent's custody and maltreatment recidivism is explained by cumulative risk, the study also hypothesized that the relation between temporary removal of the child from the parent's custody and recidivism would be mediated by cumulative risk. After performing a hierarchical logistic regression analysis, the first two hypotheses were supported, and an additional predictor, psychotherapy for the child, also was related to reduced chances of recidivism. However, Hypothesis 3 was not supported, as risk did not significantly mediate the relation between temporary removal and recidivism. Copyright © 2016 Elsevier Ltd. All rights reserved.
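
    The hierarchical (blockwise) logistic regression described above can be sketched as a two-step model comparison: intervention variables are entered first, and the cumulative risk count is added in a second block and tested with a likelihood-ratio test. A minimal sketch on synthetic data (variable names and effect sizes are hypothetical, not the study's data):

        import numpy as np
        import statsmodels.api as sm
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 500
        parent_therapy = rng.integers(0, 2, n)           # intervention indicators
        removal = rng.integers(0, 2, n)
        cumulative_risk = rng.poisson(3, n)              # count of risk factors
        logit_p = -1.0 - 0.8 * parent_therapy + 0.3 * cumulative_risk
        recid = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        # Block 1: interventions only; Block 2: interventions plus cumulative risk.
        X1 = sm.add_constant(np.column_stack([parent_therapy, removal]))
        X2 = sm.add_constant(np.column_stack([parent_therapy, removal, cumulative_risk]))
        step1 = sm.Logit(recid, X1).fit(disp=False)
        step2 = sm.Logit(recid, X2).fit(disp=False)

        # Does cumulative risk predict recidivism above and beyond the interventions?
        lr = 2 * (step2.llf - step1.llf)
        print("LR chi2 =", round(lr, 2), "p =", stats.chi2.sf(lr, df=1))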

  4. Cumulative or delayed nephrotoxicity after cisplatin (DDP) treatment.

    Science.gov (United States)

    Pinnarò, P; Ruggeri, E M; Carlini, P; Giovannelli, M; Cognetti, F

    1986-04-30

    The present retrospective study reports data regarding renal toxicity in 115 patients (63 males, 52 females; median age, 56 years) who received cumulative doses of cisplatin (DDP) greater than or equal to 200 mg/m2. DDP was administered alone or in combination at a dose of 50-70 mg/m2 in 91 patients, and at a dose of 100 mg/m2 in 22 patients. Two patients, after progression of ovarian carcinoma treated with conventional doses of DDP, received 4 and 2 courses, respectively, of high-dose DDP (40 mg/m2 for 5 days) in hypertonic saline. The median number of DDP courses was 6 (range, 2-14), and the median cumulative dose was 350 mg/m2 (range, 200-1200). Serum creatinine and urea nitrogen were determined before initiating the treatment and again 13-16 days after each administration. The incidence of azotemia (creatinine levels that exceeded 1.5 mg/dl) was similar before (7.8%) and after (6.1%) DDP doses of 200 mg/m2. Azotemia appears to be related to the association of DDP with other potentially nephrotoxic antineoplastic drugs (methotrexate) more than to the dose per course of DDP. Of 59 patients followed for 2 months or more after discontinuing DDP treatment, 3 (5.1%) presented creatinine values higher than 1.5 mg/dl. The data do not support a higher incidence of nephrotoxicity in patients receiving higher cumulative doses of DDP, and confirm that increases in serum creatinine levels may occur some time after discontinuation of the drug.

  5. Ionospheric precursors for crustal earthquakes in Italy

    Directory of Open Access Journals (Sweden)

    L. Perrone

    2010-04-01

    Full Text Available Crustal earthquakes with magnitude 6.0>M≥5.5 observed in Italy for the period 1979–2009, including the last one at L'Aquila on 6 April 2009, were considered to check whether the relationships obtained earlier for ionospheric precursors of strong Japanese earthquakes are valid for moderate Italian earthquakes. The ionospheric precursors are based on the observed variations of the sporadic E-layer parameters (h'Es, fbEs) and foF2 at the ionospheric station Rome. Empirical dependencies for the seismo-ionospheric disturbances relating the earthquake magnitude and the epicentral distance are obtained, and they have been shown to be similar to those obtained earlier for Japanese earthquakes. The dependencies indicate the process of spreading of the disturbance from the epicenter towards the periphery during the earthquake preparation process. Large lead times for the precursor occurrence (up to 34 days for M=5.8–5.9) indicate a prolonged preparation period. The possibility of using the obtained relationships for earthquake prediction is discussed.

  6. Smartphone MEMS accelerometers and earthquake early warning

    Science.gov (United States)

    Kong, Q.; Allen, R. M.; Schreier, L.; Kwon, Y. W.

    2015-12-01

    The low-cost MEMS accelerometers in smartphones are attracting more and more attention from the science community due to their vast number and potential applications in various areas. We are using the accelerometers inside smartphones to detect earthquakes. We performed shake table tests showing that these accelerometers are also suitable for recording large shaking caused by earthquakes. We developed an Android app, MyShake, which can distinguish earthquake movements from daily human activities in the recordings made by the accelerometers in personal smartphones, and upload trigger information/waveforms to our server for further analysis. The data from these smartphones form a unique dataset for seismological applications, such as earthquake early warning. In this talk I will lay out the method we used to recognize earthquake-like movement from a single smartphone, and give an overview of the whole system that harnesses the information from a network of smartphones for rapid earthquake detection. This type of system can be easily deployed and scaled up around the globe and provides additional insights into earthquake hazards.
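
    For intuition about on-phone triggering, the sketch below implements a generic STA/LTA-style trigger on the acceleration magnitude. This is an illustration only: the actual MyShake classifier is trained on waveform features, and every window length and threshold here is a hypothetical placeholder.

        import numpy as np

        def sta_lta_trigger(acc_xyz, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
            """Flag samples where the short-term average of |a| exceeds
            `threshold` times the long-term average."""
            mag = np.linalg.norm(acc_xyz, axis=1)        # magnitude of 3-component data
            mag = np.abs(mag - mag.mean())               # crude removal of the gravity offset
            sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
            sta = np.convolve(mag, np.ones(sta_n) / sta_n, mode="same")
            lta = np.convolve(mag, np.ones(lta_n) / lta_n, mode="same")
            return sta / np.maximum(lta, 1e-12) > threshold

        # acc = np.loadtxt("phone_accelerometer.txt")    # hypothetical 3-column file, 25 Hz
        # triggers = sta_lta_trigger(acc, fs=25.0)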

  7. Stress triggering and the Canterbury earthquake sequence

    Science.gov (United States)

    Steacy, Sandy; Jiménez, Abigail; Holden, Caroline

    2014-01-01

    The Canterbury earthquake sequence, which includes the devastating Christchurch event of 2011 February, has to date led to losses of around 40 billion NZ dollars. The location and severity of the earthquakes were a surprise to most inhabitants, as the seismic hazard model was dominated by an expected Mw > 8 earthquake on the Alpine fault and an Mw 7.5 earthquake on the Porters Pass fault, 150 and 80 km to the west of Christchurch. The sequence to date has included an Mw = 7.1 earthquake and 3 Mw ≥ 5.9 events which migrated from west to east. Here we investigate whether the later events are consistent with stress triggering and whether a simple stress map produced shortly after the first earthquake would have accurately indicated the regions where the subsequent activity occurred. We find that 100 per cent of M > 5.5 earthquakes occurred in positive stress areas computed using a slip model for the first event that was available within 10 d of its occurrence. We further find that the stress changes at the starting points of major slip patches of post-Darfield main events are consistent with triggering, although this is not always true at the hypocentral locations. Our results suggest that Coulomb stress changes contributed to the evolution of the Canterbury sequence, and we note additional areas of increased stress in the Christchurch region and on the Porters Pass fault.
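
    The quantity behind such stress maps is the Coulomb failure stress change, ΔCFS = Δτ + μ'Δσn, where Δτ is the shear stress change resolved in the slip direction, Δσn the normal stress change (positive for unclamping), and μ' an effective friction coefficient. A minimal sketch (all numbers hypothetical):

        def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
            """Delta CFS = d_tau + mu' * d_sigma_n (MPa).
            d_tau: shear stress change in the slip direction;
            d_sigma_n: normal stress change, positive = unclamping."""
            return d_tau + mu_eff * d_sigma_n

        # Hypothetical changes resolved on a receiver fault near Christchurch:
        print(coulomb_stress_change(d_tau=0.12, d_sigma_n=-0.05))  # 0.10 MPa, promotes failure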

  8. Strong motion duration and earthquake magnitude relationships

    International Nuclear Information System (INIS)

    Salmon, M.W.; Short, S.A.; Kennedy, R.P.

    1992-06-01

    Earthquake duration is the total time of ground shaking from the arrival of seismic waves until the return to ambient conditions. Much of this time is at relatively low shaking levels which have little effect on seismic structural response and on earthquake damage potential. As a result, a parameter termed ''strong motion duration'' has been defined by a number of investigators to be used for the purpose of evaluating seismic response and assessing the potential for structural damage due to earthquakes. This report presents methods for determining strong motion duration and a time history envelope function appropriate for various evaluation purposes, for earthquake magnitude and distance, and for site soil properties. There are numerous definitions of strong motion duration. For most of these definitions, empirical studies have been completed which relate duration to earthquake magnitude and distance and to site soil properties. Each of these definitions recognizes that only the portion of an earthquake record which has sufficiently high acceleration amplitude, energy content, or some other parameter significantly affects seismic response. Studies have been performed which indicate that the portion of an earthquake record in which the power (average rate of energy input) is maximum correlates most closely with potential damage to stiff nuclear power plant structures. Hence, this report will concentrate on energy-based strong motion duration definitions
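
    As one concrete example of an energy-based definition (shown here for illustration; the report surveys several such definitions), the significant duration is the time between 5% and 95% of the cumulative Arias intensity:

        import numpy as np

        def significant_duration(acc, dt, lo=0.05, hi=0.95):
            """Time between `lo` and `hi` fractions of cumulative Arias intensity.
            acc: acceleration time series, dt: sample interval (s)."""
            ia = np.cumsum(acc ** 2) * dt     # proportional to Arias intensity;
            ia /= ia[-1]                      # the pi/2g factor cancels after normalizing
            return (np.searchsorted(ia, hi) - np.searchsorted(ia, lo)) * dt

        # D_5_95 = significant_duration(acc, dt=0.005)   # hypothetical strong-motion record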

  9. The Road to Total Earthquake Safety

    Science.gov (United States)

    Frohlich, Cliff

    Cinna Lomnitz is possibly the most distinguished earthquake seismologist in all of Central and South America. Among many other credentials, Lomnitz has personally experienced the shaking and devastation that accompanied no fewer than five major earthquakes—Chile, 1939; Kern County, California, 1952; Chile, 1960; Caracas, Venezuela, 1967; and Mexico City, 1985. Thus he clearly has much to teach someone like myself, who has never even actually felt a real earthquake. What is this slim book? The Road to Total Earthquake Safety summarizes Lomnitz's May 1999 presentation at the Seventh Mallet-Milne Lecture, sponsored by the Society for Earthquake and Civil Engineering Dynamics. His arguments are motivated by the damage that occurred in three earthquakes—Mexico City, 1985; Loma Prieta, California, 1989; and Kobe, Japan, 1995. All three quakes occurred in regions where earthquakes are common. Yet in all three some of the worst damage occurred in structures located a significant distance from the epicenter and engineered specifically to resist earthquakes. Some of the damage also indicated that the structures failed because they had experienced considerable rotational or twisting motion. Clearly, Lomnitz argues, there must be fundamental flaws in the usually accepted models explaining how earthquakes generate strong motions, and how we should design resistant structures.

  10. New geological perspectives on earthquake recurrence models

    International Nuclear Information System (INIS)

    Schwartz, D.P.

    1997-01-01

    In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured once, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine--the cycle of stress build-up and release

  11. Foreshocks and aftershocks of strong earthquakes in the light of catastrophe theory

    International Nuclear Information System (INIS)

    Guglielmi, A V

    2015-01-01

    In this review, general ideas and specific results from catastrophe theory and the theory of critical phenomena are applied to the analysis of strong earthquakes. Aspects given particular attention are the sharp rise in the fluctuation level, the increased reactivity of dynamical systems in the near-threshold region, and other anomalous phenomena similar to critical opalescence. Given the lack of a sufficiently complete theory of earthquakes, this appears to be a valid approach to the analysis of observations. The study performed brought out some nontrivial properties of a strong-earthquake source that manifest themselves both before and after the main rupture discontinuity forms at the mainshock. In the course of the analysis of the foreshocks and aftershocks, such concepts as the round-the-world seismic echo, the cumulative effect of converging surface waves on the epicentral zone, and global seismicity modulation by Earth's free oscillations are introduced. Further research in this field is likely to be interesting and promising. (methodological notes)

  12. The proportional odds cumulative incidence model for competing risks

    DEFF Research Database (Denmark)

    Eriksson, Frank; Li, Jianing; Scheike, Thomas

    2015-01-01

    We suggest an estimator for the proportional odds cumulative incidence model for competing risks data. The key advantage of this model is that the regression parameters have the simple and useful odds ratio interpretation. The model has been considered by many authors, but it is rarely used in practice due to the lack of reliable estimation procedures. We suggest such procedures and show that their performance improves considerably on existing methods. We also suggest a goodness-of-fit test for the proportional odds assumption. We derive the large sample properties and provide estimators…
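
    The odds ratio interpretation can be made concrete: under the model, the odds of failure from the cause of interest by time t, F1(t|x)/(1 - F1(t|x)), equal the baseline odds multiplied by exp(beta'x). A minimal sketch (baseline values hypothetical):

        import numpy as np

        def prop_odds_cif(F0, beta_x):
            """Proportional odds cumulative incidence: the covariate multiplies
            the odds of failing from cause 1 by exp(beta'x) at every time point."""
            odds = (F0 / (1.0 - F0)) * np.exp(beta_x)
            return odds / (1.0 + odds)

        F0 = np.array([0.05, 0.10, 0.20, 0.30])          # hypothetical baseline CIF
        print(prop_odds_cif(F0, beta_x=np.log(2)))       # covariate doubles the odds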

  13. Cumulative exposure to phthalates from phthalate-containing drug products

    DEFF Research Database (Denmark)

    Ennis, Zandra Nymand; Broe, Anne; Pottegård, Anton

    2018-01-01

    …European regulatory limit of exposure ranging between 380-1710 mg/year throughout the study period. Lithium products constituted the majority of dibutyl phthalate exposure. Diethyl phthalate exposure, mainly caused by erythromycin, theophylline and diclofenac products, did not exceed the EMA regulatory limit. … The aim was to quantify annual cumulated phthalate exposure from drug products among users of phthalate-containing oral medications in Denmark throughout the period of 2004-2016. METHODS: We conducted a Danish nationwide cohort study using The Danish National Prescription Registry and an internal database held…

  14. Lyapunov exponent of the random frequency oscillator: cumulant expansion approach

    International Nuclear Information System (INIS)

    Anteneodo, C; Vallejos, R O

    2010-01-01

    We consider a one-dimensional harmonic oscillator with a random frequency, focusing on both the standard and the generalized Lyapunov exponents, λ and λ* respectively. We discuss the numerical difficulties that arise in the calculation of λ* in the case of strong intermittency. When the frequency corresponds to an Ornstein-Uhlenbeck process, we compute λ* analytically by using a cumulant expansion including up to the fourth order. Connections with the problem of finding an analytical estimate for the largest Lyapunov exponent of a many-body system with smooth interactions are discussed.
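
    For a numerical point of comparison, the standard Lyapunov exponent λ of such an oscillator can be estimated by direct simulation with an Ornstein-Uhlenbeck frequency fluctuation. This is a sketch only: the parameter values are hypothetical, and the paper's cumulant expansion for λ* is analytical and not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, n_steps = 1e-3, 500_000
        tau, sigma = 1.0, 0.5            # OU correlation time and stationary std (hypothetical)
        omega2_mean = 1.0                # mean squared frequency

        x, v, xi = 1.0, 0.0, 0.0
        log_growth = 0.0
        for _ in range(n_steps):
            # Ornstein-Uhlenbeck fluctuation xi(t) of the squared frequency
            xi += (-xi / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
            # oscillator x'' = -(omega2_mean + xi) * x, advanced by symplectic Euler
            v += -(omega2_mean + xi) * x * dt
            x += v * dt
            # renormalize the state vector and accumulate its log growth
            norm = np.hypot(x, v)
            log_growth += np.log(norm)
            x, v = x / norm, v / norm

        print("standard Lyapunov exponent estimate:", log_growth / (n_steps * dt))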

  15. Exact probability distribution function for the volatility of cumulative production

    Science.gov (United States)

    Zadourian, Rubina; Klümper, Andreas

    2018-04-01

    In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.

  16. Numerical simulation of explosive magnetic cumulative generator EMG-720

    Energy Technology Data Exchange (ETDEWEB)

    Deryugin, Yu N; Zelenskij, D K; Kazakova, I F; Kargin, V I; Mironychev, P V; Pikar, A S; Popkov, N F; Ryaslov, E A; Ryzhatskova, E G [All-Russian Research Inst. of Experimental Physics, Sarov (Russian Federation)

    1997-12-31

    The paper discusses the methods and results of numerical simulations used in the development of a helical-coaxial explosive magnetic cumulative generator (EMG) with a stator up to 720 mm in diameter. In the process of designing, separate units were numerically modeled, as was the generator operation with a constant inductive-ohmic load. The 2-D processes of the armature acceleration by the explosion products were modeled, as well as those of the formation of the sliding high-current contact between the armature and the stator's insulated turns. The problem of the armature integrity in the region of the detonation waves collision was numerically analyzed. 8 figs., 2 refs.

  17. Cumulative exergy losses associated with the production of lead metal

    Energy Technology Data Exchange (ETDEWEB)

    Szargut, J [Technical Univ. of Silesia, Gliwice (PL). Inst. of Thermal-Engineering; Morris, D R [New Brunswick Univ., Fredericton, NB (Canada). Dept. of Chemical Engineering

    1990-08-01

    Cumulative exergy losses result from the irreversibility of the links of a technological network leading from raw materials and fuels extracted from nature to the product under consideration. The sum of these losses can be apportioned into partial exergy losses (associated with particular links of the technological network) or into constituent exergy losses (associated with constituent subprocesses of the network). The methods of calculation of the partial and constituent exergy losses are presented, taking into account the useful byproducts substituting the major products of other processes. Analyses of partial and constituent exergy losses are made for the technological network of lead metal production. (author).
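
    As a toy illustration of the bookkeeping involved (all numbers are hypothetical and are not the paper's results): the partial exergy loss of each link is the exergy entering it minus the exergy leaving with its products, and the cumulative loss is the sum over the chain.

        # Each link of a simplified lead-production chain:
        # (name, exergy input, exergy leaving with products), MJ per kg of lead.
        links = [
            ("mining",        5.0, 4.1),
            ("concentration", 4.1, 3.0),
            ("smelting",      9.5, 6.2),   # input includes fuel exergy (hypothetical)
            ("refining",      6.2, 5.5),
        ]
        partial_losses = {name: e_in - e_out for name, e_in, e_out in links}
        print(partial_losses)
        print("cumulative exergy loss:", round(sum(partial_losses.values()), 2), "MJ/kg")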

  18. A 'new generation' earthquake catalogue

    Directory of Open Access Journals (Sweden)

    E. Boschi

    2000-06-01

    Full Text Available In 1995, we published the first release of the Catalogo dei Forti Terremoti in Italia, 461 a.C. - 1980, in Italian (Boschi et al., 1995). Two years later this was followed by a second release, again in Italian, that included more earthquakes, more accurate research and a longer time span (461 B.C. to 1990) (Boschi et al., 1997). Aware that the record of Italian historical seismicity is probably the most extensive in the whole world, and hence that our catalogue could be of interest for a wider international readership, Italian was clearly not the appropriate language to share this experience with colleagues from foreign countries. Three years after publication of the second release therefore, and after much additional research and fine tuning of methodologies and algorithms, I am proud to introduce this third release in English. All the tools and accessories have been translated along with the texts describing the development of the underlying research strategies and current contents. The English title is Catalogue of Strong Italian Earthquakes, 461 B.C. to 1997. This Preface briefly describes the scientific context within which the Catalogue of Strong Italian Earthquakes was conceived and progressively developed. The catalogue is perhaps the most important outcome of a well-established joint project between the Istituto Nazionale di Geofisica, the leading Italian institute for basic and applied research in seismology and solid earth geophysics, and SGA (Storia Geofisica Ambiente), a private firm specialising in the historical investigation and systematisation of natural phenomena. In her contribution "Method of investigation, typology and taxonomy of the basic data: navigating between seismic effects and historical contexts", Emanuela Guidoboni outlines the general framework of modern historical seismology, its complex relation with instrumental seismology on the one hand and historical research on the other. This presentation also highlights

  19. POST Earthquake Debris Management — AN Overview

    Science.gov (United States)

    Sarkar, Raju

    Every year natural disasters, such as fires, floods, earthquakes, hurricanes, landslides, tsunami, and tornadoes, challenge various communities of the world. Earthquakes strike with varying degrees of severity and pose both short- and long-term challenges to public service providers. Earthquakes generate shock waves and displace the ground along fault lines. These seismic forces can bring down buildings and bridges in a localized area and damage buildings and other structures in a far wider area. Secondary damage from fires, explosions, and localized flooding from broken water pipes can increase the amount of debris. Earthquake debris includes building materials, personal property, and sediment from landslides. The management of this debris, as well as the waste generated during the reconstruction works, can place significant challenges on national and local capacities. Debris removal is a major component of every post-earthquake recovery operation. Much of the debris generated from an earthquake is not hazardous. Soil, building material, and green waste, such as trees and shrubs, make up most of the volume of earthquake debris. These wastes not only create significant health problems and a very unpleasant living environment if not disposed of safely and appropriately, but can also subsequently impose economic burdens on the reconstruction phase. In practice, most of the debris may be either disposed of at landfill sites, reused as materials for construction, or recycled into useful commodities. Therefore, the debris clearance operation should focus on the geotechnical engineering approach as an important post-earthquake issue to control the quality of the incoming flow of potential soil materials. In this paper, the importance of an emergency management perspective in this geotechnical approach that takes into account the different criteria related to the operation execution is proposed by highlighting the key issues concerning the handling of the construction

  1. Automatic Earthquake Detection by Active Learning

    Science.gov (United States)

    Bergen, K.; Beroza, G. C.

    2017-12-01

    In recent years, advances in machine learning have transformed fields such as image recognition, natural language processing and recommender systems. Many of these performance gains have relied on the availability of large, labeled data sets to train high-accuracy models; labeled data sets are those for which each sample includes a target class label, such as waveforms tagged as either earthquakes or noise. Earthquake seismologists are increasingly leveraging machine learning and data mining techniques to detect and analyze weak earthquake signals in large seismic data sets. One of the challenges in applying machine learning to seismic data sets is the limited labeled data problem; learning algorithms need to be given examples of earthquake waveforms, but the number of known events, taken from earthquake catalogs, may be insufficient to build an accurate detector. Furthermore, earthquake catalogs are known to be incomplete, resulting in training data that may be biased towards larger events and contain inaccurate labels. This challenge is compounded by the class imbalance problem; the events of interest, earthquakes, are infrequent relative to noise in continuous data sets, and many learning algorithms perform poorly on rare classes. In this work, we investigate the use of active learning for automatic earthquake detection. Active learning is a type of semi-supervised machine learning that uses a human-in-the-loop approach to strategically supplement a small initial training set. The learning algorithm incorporates domain expertise through interaction between a human expert and the algorithm, with the algorithm actively posing queries to the user to improve detection performance. We demonstrate the potential of active machine learning to improve earthquake detection performance with limited available training data.
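
    An uncertainty-sampling loop of the kind described can be sketched as follows (a generic illustration, not the authors' implementation; random features stand in for waveform features, and the expert's answers are simulated by the known labels):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Synthetic 2-class problem standing in for earthquake/noise waveform features.
        X = rng.normal(size=(5000, 10))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.0).astype(int)

        # Small initial training set (the "catalog"), guaranteed to contain both classes.
        labeled = list(rng.choice(np.where(y == 0)[0], 10)) + \
                  list(rng.choice(np.where(y == 1)[0], 10))
        pool = [i for i in range(len(X)) if i not in set(labeled)]

        clf = LogisticRegression(max_iter=1000)
        for _ in range(50):                                   # 50 queries to the human expert
            clf.fit(X[labeled], y[labeled])
            proba = clf.predict_proba(X[pool])[:, 1]
            query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain sample
            labeled.append(query)                              # expert labels it (simulated)
            pool.remove(query)

        print("final training set size:", len(labeled))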

  2. Seasonal Modulation of Earthquake Swarm Activity Near Maupin, Oregon

    Science.gov (United States)

    Braunmiller, J.; Nabelek, J.; Trehu, A. M.

    2012-12-01

    Between December 2006 and November 2011, the Pacific Northwest Seismic Network (PNSN) reported 464 earthquakes in a swarm about 60 km east-southeast of Mt. Hood near the town of Maupin, Oregon. Relocation of forty-five MD≥2.5 earthquakes and regional moment tensor analysis of nine 3.3≤Mw≤3.9 earthquakes reveals a north-northwest trending, less than 1 km2 sized active fault patch on a 70° west dipping fault. At about 17 km depth, the swarm occurred at or close to the bottom of the seismogenic crust. The swarm's cumulative seismic moment release, equivalent to an Mw=4.4 earthquake, is not dominated by a single shock; it is rather mainly due to 20 MD≥3.0 events, which occurred throughout the swarm. The swarm started at the southern end and, during the first 18 months of activity, migrated to the northwest at a rate of about 1-2 m/d until reaching its northern terminus. A 10° fault bend, inferred from locations and fault plane solutions, acted as a geometrical barrier that temporarily halted event migration in mid-2007 before continuing north in early 2008. The slow event migration points to a pore pressure diffusion process, suggesting the swarm onset was triggered by fluid inflow into the fault zone. At 17 km depth, triggering by meteoric water seems unlikely for a normal crustal permeability. The double couple source mechanisms preclude a magmatic intrusion at the depth of the earthquakes. However, fluids (or gases) associated with a deeper, though undocumented, magma injection beneath the Cascade Mountains could trigger seismicity in a pre-stressed region when they have migrated upward and reached the seismogenic crust. Superimposed on overall swarm evolution, we found a statistically significant annual seismicity variation, which is likely surface driven. The annual seismicity peak during spring (March-May) coincides with the maximum snow load on the nearby Cascades. The load corresponds to a surface pressure variation of about 6 kPa, which likely
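
    The quoted moment equivalence follows from summing the scalar moments of the individual events under the standard Mw-M0 relation; a small sketch (the event list is hypothetical):

        import numpy as np

        def cumulative_mw(mags):
            """Equivalent Mw of the summed seismic moment, M0 in N*m:
            M0 = 10**(1.5*Mw + 9.1)  <=>  Mw = (log10(M0) - 9.1) / 1.5."""
            m0 = np.sum(10 ** (1.5 * np.asarray(mags) + 9.1))
            return (np.log10(m0) - 9.1) / 1.5

        # Twenty hypothetical M >= 3.0 swarm events:
        print(cumulative_mw([3.9, 3.8, 3.7] + [3.3] * 17))   # about Mw 4.3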

  3. The relationship between earthquake exposure and posttraumatic stress disorder in 2013 Lushan earthquake

    Science.gov (United States)

    Wang, Yan; Lu, Yi

    2018-01-01

    The objective of this study is to explore the relationship between earthquake exposure and the incidence of PTSD. A stratified random sample survey was conducted to collect data in the Longmenshan thrust fault area three years after the Lushan earthquake. We used the Children's Revised Impact of Event Scale (CRIES-13) and the Earthquake Experience Scale. Subjects in this study included 3944 school student survivors in eleven local schools. The prevalence of probable PTSD was relatively higher among those who were trapped in the earthquake, were injured in the earthquake, or had relatives who died in the earthquake. The study concluded that researchers need to pay more attention to children and adolescents. The government should pay more attention to these people and provide more economic support.

  4. Expansion formulae for characteristics of cumulative cost in finite horizon production models

    NARCIS (Netherlands)

    Ayhan, H.; Schlegel, S.

    2001-01-01

    We consider the expected value and the tail probability of cumulative shortage and holding cost (i.e. the probability that cumulative cost is more than a certain value) in finite horizon production models. An exact expression is provided for the expected value of the cumulative cost for general

  5. Sense of Community and Depressive Symptoms among Older Earthquake Survivors Following the 2008 Earthquake in Chengdu China

    Science.gov (United States)

    Li, Yawen; Sun, Fei; He, Xusong; Chan, Kin Sun

    2011-01-01

    This study examined the impact of an earthquake as well as the role of sense of community as a protective factor against depressive symptoms among older Chinese adults who survived an 8.0 magnitude earthquake in 2008. A household survey of a random sample was conducted 3 months after the earthquake and 298 older earthquake survivors participated…

  6. Sismosima: A pioneer project for earthquake detection

    International Nuclear Information System (INIS)

    Echague, C. de

    2015-01-01

    At present it is only possible to study how earthquakes occur and to minimize their consequences, but the Sismosima project studies earthquakes with the aim of issuing a pre-alert if possible. The Geological and Mining Institute of Spain (IGME) launched this project, which has already shown in tests that the caves in which meters were installed registered an increase in carbon dioxide (CO2) coinciding with the triggering of an earthquake. It now remains to check whether the gas emission occurs simultaneously with, before, or after the earthquake. If it were before, a couple of minutes would be enough to give an early warning with which to save lives and secure facilities. (Author)

  7. ASSESSMENT OF EARTHQUAKE HAZARDS ON WASTE LANDFILLS

    DEFF Research Database (Denmark)

    Zania, Varvara; Tsompanakis, Yiannis; Psarropoulos, Prodromos

    Earthquake hazards may arise as a result of: (a) transient ground deformation, which is induced by seismic wave propagation, and (b) permanent ground deformation, which is caused by abrupt fault dislocation. Since the adequate performance of waste landfills after an earthquake is of utmost importance, the current study examines the impact of both types of earthquake hazards by performing efficient finite-element analyses. These also took into account the potential slip displacement development along the geosynthetic interfaces of the composite base liner. At first, the development of permanent

  8. Earthquake free design of pipe lines

    International Nuclear Information System (INIS)

    Kurihara, Chizuko; Sakurai, Akio

    1974-01-01

    Long structures, such as the cooling sea water pipe lines of nuclear power plants, extend over a wide range along the ground surface and are subjected during earthquakes not only to inertia forces but also to forces due to ground deformation or seismic wave propagation. Since previous reports dealt with the earthquake-resistant design of underground pipe lines, this report discusses the behavior of pipe lines on the ground during earthquakes and proposes an aseismic design of pipe lines considering the effects of both inertia forces and ground deformations. (author)

  9. Earthquake response observation of isolated buildings

    International Nuclear Information System (INIS)

    Harada, O.; Kawai, N.; Ishii, T.; Sawada, Y.; Shiojiri, H.; Mazda, T.

    1989-01-01

    The base isolation system is expected to be a technology enabling rational design of FBR plants. In order to apply this system to important structures, accumulation of verification data is necessary. From this point of view, a vibration test and earthquake response observation of an actual isolated building using laminated rubber bearings and elasto-plastic steel dampers were conducted for the purpose of investigating its dynamic behavior and proving the reliability of the base isolation system. Since September 1986, more than thirty earthquakes have been observed. This paper presents the results of the earthquake response observation

  10. Earthquake prediction with electromagnetic phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Hayakawa, Masashi, E-mail: hayakawa@hi-seismo-em.jp [Hayakawa Institute of Seismo Electomagnetics, Co. Ltd., University of Electro-Communications (UEC) Incubation Center, 1-5-1 Chofugaoka, Chofu Tokyo, 182-8585 (Japan); Advanced Wireless & Communications Research Center, UEC, Chofu Tokyo (Japan); Earthquake Analysis Laboratory, Information Systems Inc., 4-8-15, Minami-aoyama, Minato-ku, Tokyo, 107-0062 (Japan); Fuji Security Systems. Co. Ltd., Iwato-cho 1, Shinjyuku-ku, Tokyo (Japan)

    2016-02-01

    Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, and it is considered one of the most important and urgent topics for human beings. If this short-term prediction is realized, casualties will be drastically reduced. Unlike conventional seismic measurement, we proposed the use of electromagnetic phenomena as precursors to EQs, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews this short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, the ionospheric perturbation as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

  11. Modified Mercalli intensities for some recent California earthquakes and historic San Francisco Bay Region earthquakes

    Science.gov (United States)

    Bakun, William H.

    1998-01-01

    Modified Mercalli Intensity (MMI) data for recent California earthquakes were used by Bakun and Wentworth (1997) to develop a strategy for bounding the location and moment magnitude M of earthquakes from MMI observations only. Bakun (Bull. Seismol. Soc. Amer., submitted) used the Bakun and Wentworth (1997) strategy to analyze 19th century and early 20th century San Francisco Bay Region earthquakes. The MMI data and site corrections used in these studies are listed in this Open-file Report. 

  12. The Macroseismic Intensity Distribution of the 30 October 2016 Earthquake in Central Italy (Mw 6.6): Seismotectonic Implications

    Science.gov (United States)

    Galli, Paolo; Castenetto, Sergio; Peronace, Edoardo

    2017-10-01

    The central Apennines of Italy were rocked in 2016 by the strongest earthquakes of the past 35 years. Two main shocks (Mw 6.2 and Mw 6.6) between the end of August and October caused the death of almost 300 people and the destruction of 50 villages and small towns scattered along 40 km in the hanging wall of the N165° striking Mount Vettore fault system, that is, the structure responsible for the earthquakes. The 24 August southern earthquake, besides causing all the casualties, razed to the ground the small medieval town of Amatrice and dozens of hamlets around it. The 30 October main shock crushed definitively all the villages of the whole epicentral area (up to intensity degree 11), extending the level of destruction northward and inducing heavy damage even in the town of Camerino, 30 km away. The survey of the macroseismic effects started the same day as the first main shock and continued during the whole seismic sequence, even during and after the strong earthquakes at the end of October, allowing the definition of a detailed picture of the damage distribution, day by day. Here we present the results of the final survey in terms of Mercalli-Cancani-Sieberg intensity, which account for the cumulative effects of the whole 2016 sequence (465 intensity data points, besides 435 related to the 24 August and 54 to the 26 October events, respectively). The distribution of the highest intensity data points evidenced the lack of any possible overlap between the 2016 earthquakes and the strongest earthquakes of the region, making this sequence a unique case in the seismic history of Italy. In turn, cross-matching with published paleoseismic data provided some interesting insights concerning the seismogenic behavior of the Mount Vettore fault in comparison with other active normal faults of the region.

  13. Cumulative hierarchies and computability over universes of sets

    Directory of Open Access Journals (Sweden)

    Domenico Cantone

    2008-05-01

    Full Text Available Various metamathematical investigations, beginning with Fraenkel's historical proof of the independence of the axiom of choice, called for suitable definitions of hierarchical universes of sets. This led to the discovery of such important cumulative structures as the one singled out by von Neumann (generally taken as the universe of all sets) and Gödel's universe of the so-called constructibles. Variants of those are exploited occasionally in studies concerning the foundations of analysis (according to Abraham Robinson's approach), or concerning non-well-founded sets. We hence offer a systematic presentation of these many structures, partly motivated by their relevance and pervasiveness in mathematics. As we report, numerous properties of hierarchy-related notions, such as rank, have been verified with the assistance of the ÆtnaNova proof-checker. Through SETL and Maple implementations of procedures which effectively handle Ackermann's hereditarily finite sets, we illustrate a particularly significant case among those in which the entities which form a universe of sets can be algorithmically constructed and manipulated; thereby, the fruitful bearing on pure mathematics of cumulative set hierarchies ramifies into the realms of theoretical computer science and algorithmics.
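
    The Ackermann handling of hereditarily finite (HF) sets rests on his bijection between natural numbers and HF sets: n encodes the set of those m whose bit is set in the binary expansion of n. A small sketch (in Python rather than SETL or Maple):

        def ackermann_decode(n):
            """Return the hereditarily finite set coded by n, as nested frozensets."""
            members, m = [], 0
            while n:
                if n & 1:
                    members.append(ackermann_decode(m))
                n >>= 1
                m += 1
            return frozenset(members)

        def ackermann_encode(s):
            """Inverse: code a nested frozenset back into a natural number."""
            return sum(1 << ackermann_encode(m) for m in s)

        # 0 codes {}, 1 codes {{}}, 3 codes {{}, {{}}} = the von Neumann ordinal 2.
        print(ackermann_decode(3))
        print(ackermann_encode(ackermann_decode(42)))  # round-trips to 42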

  14. Cumulative Effects Assessment: Linking Social, Ecological, and Governance Dimensions

    Directory of Open Access Journals (Sweden)

    Marian Weber

    2012-06-01

    Full Text Available Setting social, economic, and ecological objectives is ultimately a process of social choice informed by science. In this special feature we provide a multidisciplinary framework for the use of cumulative effects assessment in land use planning. Forest ecosystems are facing considerable challenges driven by population growth and increasing demands for resources. In a suite of case studies that span the boreal forest of Western Canada to the interior Atlantic forest of Paraguay, we show how transparent and defensible methods for scenario analysis can be applied in data-limited regions and how social dimensions of land use change can be incorporated in these methods, particularly in aboriginal communities that have lived in these ecosystems for generations. The case studies explore how scenario analysis can be used to evaluate various land use options and highlight specific challenges with identifying social and ecological responses, determining thresholds and targets for land use, and integrating local and traditional knowledge in land use planning. Given that land use planning is ultimately a value-laden and often politically charged process, we also provide some perspective on various collective and expert-based processes for identifying cumulative impacts and thresholds. The need for good science to inform and be informed by culturally appropriate democratic processes calls for well-planned and multifaceted approaches, both to achieve an informed understanding, among residents and governments alike, of the interactive and additive changes caused by development, and to design action agendas to influence such change at the ecological and social level.

  15. Maternal distress and parenting in the context of cumulative disadvantage.

    Science.gov (United States)

    Arditti, Joyce; Burton, Linda; Neeves-Botelho, Sara

    2010-06-01

    This article presents an emergent conceptual model of the features and links between cumulative disadvantage, maternal distress, and parenting practices in low-income families in which parental incarceration has occurred. The model emerged from the integration of extant conceptual and empirical research with grounded theory analysis of longitudinal ethnographic data from Welfare, Children, and Families: A Three-City Study. Fourteen exemplar family cases were used in the analysis. Results indicated that mothers in these families experienced life in the context of cumulative disadvantage, reporting a cascade of difficulties characterized by neighborhood worries, provider concerns, bureaucratic difficulties, violent intimate relationships, and the inability to meet children's needs. Mothers, however, also had an intense desire to protect their children, and to make up for past mistakes. Although, in response to high levels of maternal distress and disadvantage, most mothers exhibited harsh discipline of their children, some mothers transformed their distress by advocating for their children under difficult circumstances. Women's use of harsh discipline and advocacy was not necessarily an "either/or" phenomenon as half of the mothers included in our analysis exhibited both harsh discipline and care/advocacy behaviors. Maternal distress characterized by substance use, while connected to harsh disciplinary behavior, did not preclude mothers engaging in positive parenting behaviors.

  16. Cumulative phase delay imaging for contrast-enhanced ultrasound tomography

    International Nuclear Information System (INIS)

    Demi, Libertario; Van Sloun, Ruud J G; Wijkstra, Hessel; Mischi, Massimo

    2015-01-01

    Standard dynamic-contrast enhanced ultrasound (DCE-US) imaging detects and estimates ultrasound-contrast-agent (UCA) concentration based on the amplitude of the nonlinear (harmonic) components generated during ultrasound (US) propagation through UCAs. However, harmonic components generation is not specific to UCAs, as it also occurs for US propagating through tissue. Moreover, nonlinear artifacts affect standard DCE-US imaging, causing contrast to tissue ratio reduction, and resulting in possible misclassification of tissue and misinterpretation of UCA concentration. Furthermore, no contrast-specific modality exists for DCE-US tomography; in particular speed-of-sound changes due to UCAs are well within those caused by different tissue types. Recently, a new marker for UCAs has been introduced. A cumulative phase delay (CPD) between the second harmonic and fundamental component is in fact observable for US propagating through UCAs, and is absent in tissue. In this paper, tomographic US images based on CPD are for the first time presented and compared to speed-of-sound US tomography. Results show the applicability of this marker for contrast specific US imaging, with cumulative phase delay imaging (CPDI) showing superior capabilities in detecting and localizing UCA, as compared to speed-of-sound US tomography. Cavities (filled with UCA) which were down to 1 mm in diameter were clearly detectable. Moreover, CPDI is free of the above mentioned nonlinear artifacts. These results open important possibilities to DCE-US tomography, with potential applications to breast imaging for cancer localization. (fast track communication)
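
    Schematically, the underlying measurement extracts the phase of the fundamental and second-harmonic components of a received line and forms the second-harmonic phase relative to twice the fundamental phase. The sketch below is a generic narrowband illustration with hypothetical parameters; the actual CPD estimator used in the paper may differ.

        import numpy as np

        def second_harmonic_phase_delay(sig, fs, f0):
            """Phase of the 2*f0 component relative to twice the f0 phase (radians)."""
            spec = np.fft.rfft(sig * np.hanning(len(sig)))
            freqs = np.fft.rfftfreq(len(sig), 1 / fs)
            k1 = np.argmin(np.abs(freqs - f0))
            k2 = np.argmin(np.abs(freqs - 2 * f0))
            phi1, phi2 = np.angle(spec[k1]), np.angle(spec[k2])
            return np.angle(np.exp(1j * (phi2 - 2 * phi1)))   # wrap to (-pi, pi]

        # delay = second_harmonic_phase_delay(rf_line, fs=40e6, f0=2.5e6)  # hypothetical RF line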

  17. Cumulant expansions for measuring water exchange using diffusion MRI

    Science.gov (United States)

    Ning, Lipeng; Nilsson, Markus; Lasič, Samo; Westin, Carl-Fredrik; Rathi, Yogesh

    2018-02-01

    The rate of water exchange across cell membranes is a parameter of biological interest and can be measured by diffusion magnetic resonance imaging (dMRI). In this work, we investigate a stochastic model for the diffusion-and-exchange of water molecules. This model provides a general solution for the temporal evolution of dMRI signal using any type of gradient waveform, thereby generalizing the signal expressions for the Kärger model. Moreover, we also derive a general nth order cumulant expansion of the dMRI signal accounting for water exchange, which has not been explored in earlier studies. Based on this analytical expression, we compute the cumulant expansion for dMRI signals for the special case of single diffusion encoding (SDE) and double diffusion encoding (DDE) sequences. Our results provide a theoretical guideline on optimizing experimental parameters for SDE and DDE sequences, respectively. Moreover, we show that DDE signals are more sensitive to water exchange at short-time scale but provide less attenuation at long-time scale than SDE signals. Our theoretical analysis is also validated using Monte Carlo simulations on synthetic structures.
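
    For orientation, the familiar second-order special case of such an expansion is the kurtosis representation ln S(b) = ln S0 - b D + (1/6) b^2 D^2 K; the paper's general nth-order, exchange-dependent terms are not reproduced here. A least-squares sketch of the second-order fit:

        import numpy as np

        def fit_cumulant_expansion(bvals, signal):
            """Fit ln S(b) = c0 + c1*b + c2*b^2 and map to (D, K)."""
            c2, c1, _ = np.polyfit(bvals, np.log(signal), deg=2)
            D = -c1                       # diffusivity from the linear term
            K = 6.0 * c2 / D ** 2         # kurtosis from the quadratic term
            return D, K

        b = np.linspace(0, 2500, 11) * 1e-3                  # ms/um^2 (hypothetical protocol)
        S = np.exp(-b * 1.0 + (1 / 6) * (b * 1.0) ** 2 * 0.8)
        print(fit_cumulant_expansion(b, S))                  # approximately (1.0, 0.8)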

  18. A Cumulant-based Analysis of Nonlinear Magnetospheric Dynamics

    International Nuclear Information System (INIS)

    Johnson, Jay R.; Wing, Simon

    2004-01-01

    Understanding magnetospheric dynamics and predicting future behavior of the magnetosphere is of great practical interest because it could potentially help to avert catastrophic loss of power and communications. In order to build good predictive models it is necessary to understand the most critical nonlinear dependencies among observed plasma and electromagnetic field variables in the coupled solar wind/magnetosphere system. In this work, we apply a cumulant-based information dynamical measure to characterize the nonlinear dynamics underlying the time evolution of the Dst and Kp geomagnetic indices, given solar wind magnetic field and plasma input. We examine the underlying dynamics of the system, the temporal statistical dependencies, the degree of nonlinearity, and the rate of information loss. We find a significant solar cycle dependence in the underlying dynamics of the system with greater nonlinearity for solar minimum. The cumulant-based approach also has the advantage that it is reliable even in the case of small data sets and therefore it is possible to avoid the assumption of stationarity, which allows for a measure of predictability even when the underlying system dynamics may change character. Evaluations of several leading Kp prediction models indicate that their performances are sub-optimal during active times. We discuss possible improvements of these models based on this nonparametric approach

  19. Strategy for an assessment of cumulative ecological impacts

    International Nuclear Information System (INIS)

    Boucher, P.; Collins, J.; Nelsen, J.

    1995-01-01

    The US Department of Energy (DOE) has developed a strategy to conduct an assessment of the cumulative ecological impact of operations at the 300-square-mile Savannah River Site. This facility has over 400 identified waste units and contains several large watersheds. In addition to individual waste units, residual contamination must be evaluated in terms of its contribution to ecological risks at zonal and site-wide levels. DOE must be able to generate sufficient information to facilitate cleanup in the immediate future within the context of a site-wide ecological risk assessment that may not be completed for many years. The strategy superimposes a more global perspective on ecological assessments of individual waste units and provides strategic underpinnings for conducting individual screening-level and baseline risk assessments at the operable unit and zonal or watershed levels. It identifies ecological endpoints and risk assessment tools appropriate for each level of the risk assessment. In addition, it provides a clear mechanism for identifying clean sites through screening-level risk assessments and for elevating sites with residual contamination to the next level of assessment. Whereas screening-level and operable unit-level risk assessments relate directly to cleanup, zonal and site-wide assessments verify or confirm the overall effectiveness of remediation. The latter assessments must show, for example, whether multiple small areas with residual pesticide contamination that have minimal individual impact would pose a cumulative risk from bioaccumulation because they are within the habitat range of an ecological receptor

  20. The 1985 central chile earthquake: a repeat of previous great earthquakes in the region?

    Science.gov (United States)

    Comte, D; Eisenberg, A; Lorca, E; Pardo, M; Ponce, L; Saragoni, R; Singh, S K; Suárez, G

    1986-07-25

    A great earthquake (surface-wave magnitude, 7.8) occurred along the coast of central Chile on 3 March 1985, causing heavy damage to coastal towns. Intense foreshock activity near the epicenter of the main shock occurred for 11 days before the earthquake. The aftershocks of the 1985 earthquake define a rupture area of 170 by 110 square kilometers. The earthquake was forecast on the basis of the nearly constant repeat time (83 +/- 9 years) of great earthquakes in this region. An analysis of previous earthquakes suggests that the rupture lengths of great shocks in the region vary by a factor of about 3. The nearly constant repeat time and variable rupture lengths cannot be reconciled with time- or slip-predictable models of earthquake recurrence. The great earthquakes in the region seem to involve a variable rupture mode and yet, for unknown reasons, remain periodic. Historical data suggest that the region south of the 1985 rupture zone should now be considered a gap of high seismic potential that may rupture in a great earthquake in the next few tens of years.

  1. Perception of earthquake risk in Taiwan: effects of gender and past earthquake experience.

    Science.gov (United States)

    Kung, Yi-Wen; Chen, Sue-Huei

    2012-09-01

    This study explored how individuals in Taiwan perceive earthquake risk and the relationship of past earthquake experience and gender to risk perception. Participants (n = 1,405), including earthquake survivors and those in the general population without prior direct earthquake exposure, were selected and interviewed through a computer-assisted telephone interviewing procedure using a random sampling and stratification method covering all 24 regions of Taiwan. A factor analysis of the interview data yielded a two-factor structure of risk perception in regard to earthquake. The first factor, "personal impact," encompassed perception of threat and fear related to earthquakes. The second factor, "controllability," encompassed a sense of efficacy of self-protection in regard to earthquakes. The findings indicated that earthquake survivors and females reported higher scores on the personal impact factor than males and those with no prior direct earthquake experience, although there were no group differences on the controllability factor. The findings support the view that risk perception has multiple components, and suggest that past experience (survivor status) and gender (female) affect the perception of risk. Exploration of potential contributions of other demographic factors such as age, education, and marital status to personal impact, especially for females and survivors, is discussed. Future research on, and intervention programs addressing, risk perception are suggested accordingly. © 2012 Society for Risk Analysis.

  2. Earthquake Damage Assessment Using Objective Image Segmentation: A Case Study of 2010 Haiti Earthquake

    Science.gov (United States)

    Oommen, Thomas; Rebbapragada, Umaa; Cerminaro, Daniel

    2012-01-01

    In this study, we perform a case study on imagery from the Haiti earthquake that evaluates a novel object-based approach for characterizing earthquake-induced surface effects of liquefaction against a traditional pixel-based change detection technique. Our technique, which combines object-oriented change detection with discriminant/categorical functions, shows the power of distinguishing earthquake-induced surface effects from changes in buildings using the object properties concavity, convexity, orthogonality, and rectangularity. Our results suggest that object-based analysis holds promise for automatically extracting earthquake-induced damage from high-resolution aerial/satellite imagery.
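
    Of the object properties named, rectangularity, for instance, is commonly computed as the object's area divided by the area of its minimum rotated bounding rectangle. A brief sketch with a synthetic building footprint (an illustration, not the paper's implementation):

        # Rectangularity = area / area of minimum rotated bounding rectangle.
        from shapely.geometry import Polygon

        footprint = Polygon([(0, 0), (4, 0), (4, 2), (2, 3), (0, 2)])  # synthetic
        mrr = footprint.minimum_rotated_rectangle
        print(footprint.area / mrr.area)   # 1.0 would be perfectly rectangular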

  3. The effects of varying injection rates in Osage County, Oklahoma, on the 2016 Mw5.8 Pawnee earthquake

    Science.gov (United States)

    Barbour, Andrew J.; Norbeck, Jack H.; Rubinstein, Justin L.

    2017-01-01

    The 2016 Mw 5.8 Pawnee earthquake occurred in a region with active wastewater injection into a basal formation group. Prior to the earthquake, fluid injection rates at most wells were relatively steady, but newly collected data show significant increases in injection rate in the years leading up to the earthquake. For the same time period, the total volumes of injected wastewater were roughly equivalent between variable‐rate and constant‐rate wells. To understand the possible influence of these changes in injection, we simulate the variable‐rate injection history and its constant‐rate equivalent in a layered poroelastic half‐space to explore the interplay between pore‐pressure effects and poroelastic effects on the fault leading up to the mainshock. In both cases, poroelastic stresses contribute a significant proportion of Coulomb failure stresses on the fault compared to pore‐pressure increases alone, but the resulting changes in seismicity rate, calculated using a rate‐and‐state frictional model, are many times larger when poroelastic effects are included, owing to enhanced stressing rates. In particular, the variable‐rate simulation predicts more than an order of magnitude increase in seismicity rate above background rates compared to the constant‐rate simulation with equivalent volume. The observed cumulative density of earthquakes prior to the mainshock within 10 km of the injection source exhibits remarkable agreement with seismicity predicted by the variable‐rate injection case.
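
    The rate-and-state seismicity calculation is not reproduced in the record; a minimal sketch of the widely used Dieterich (1994) formulation, with purely illustrative parameter values, shows how an injection-driven increase in stressing rate amplifies the seismicity rate:

        # Dieterich (1994) rate-and-state seismicity response, explicit Euler.
        # R/r = 1/(gamma * S_dot_background); d(gamma) = (dt - gamma*dS)/(a*sigma).
        import numpy as np

        a_sigma = 1e5          # a*sigma in Pa (assumed)
        s_dot_bg = 1e3         # background stressing rate, Pa/yr (assumed)
        dt = 0.01
        t = np.arange(0.0, 20.0, dt)

        s_dot = np.full_like(t, s_dot_bg)
        s_dot[t > 10.0] += 5e3          # injection-driven ramp after year 10

        gamma = 1.0 / s_dot_bg          # steady-state initial condition
        rate = np.empty_like(t)         # seismicity rate relative to background
        for i, sd in enumerate(s_dot):
            rate[i] = 1.0 / (gamma * s_dot_bg)
            gamma += (dt - gamma * sd * dt) / a_sigma

        print(rate[-1])   # rate is elevated above background by year 20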

  4. Holocene earthquakes of magnitude 7 during westward escape of the Olympic Mountains, Washington

    Science.gov (United States)

    Nelson, Alan R.; Personius, Stephen; Wells, Ray; Schermer, Elizabeth R.; Bradley, Lee-Ann; Buck, Jason; Reitman, Nadine G.

    2017-01-01

    The Lake Creek–Boundary Creek fault, previously mapped in Miocene bedrock as an oblique thrust on the north flank of the Olympic Mountains, poses a significant earthquake hazard. Mapping using 2015 light detection and ranging (lidar) confirms 2004 lidar mapping of postglacial scarps extending ≥14 km along a splay fault, the Sadie Creek fault, west of Lake Crescent. Scarp morphology suggests repeated earthquake ruptures along the eastern section of the Lake Creek–Boundary Creek fault and the Sadie Creek fault since ∼13 ka. Right-lateral (∼11–28 m) and vertical (1–2 m) cumulative fault offsets suggest slip rates of ∼1–2 mm/yr. Stratigraphic and age-model data from five trenches perpendicular to scarps at four sites on the eastern section of the fault show evidence of 3–5 surface-rupturing earthquakes. Near-vertical fault dips and upward-branching fault patterns in trenches, abrupt changes in the thickness of stratigraphic units across faults, and variations in vertical displacement of successive stratigraphic units along fault traces also suggest a large lateral component of slip. Age models suggest two earthquakes date from 1.3±0.8 and 2.9±0.6 ka; evidence and ages for 2–3 earlier earthquakes are less certain. Assuming 3–5 postglacial earthquakes, lateral and vertical cumulative fault offsets yield average slip per earthquake of ∼4.6 m, a lateral-to-vertical slip ratio of ∼10:1, and a recurrence interval of 3.5±1.0 ka. Empirical relations yield moment magnitude estimates of M 7.2–7.5 (slip per earthquake) and 7.1–7.3 (56 km maximum rupture length). An apparent left-lateral Miocene to right-lateral Holocene slip reversal on the faults is probably related to overprinting of east-directed, accretion-dominated deformation in the eastern core of the Olympic
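
    As a quick plausibility check on the arithmetic quoted in this record (offset and age values from the abstract; the midpoint and event-count choices below are illustrative):

        # Slip rate, slip per event, and recurrence from cumulative offsets.
        offsets = (11.0, 28.0)      # m, cumulative right-lateral offset range
        age = 13_000.0              # yr, ~13 ka postglacial interval
        print([o / age * 1000 for o in offsets])      # ~0.8-2.2 mm/yr

        n_events = (3, 5)           # assumed postglacial surface ruptures
        mid = sum(offsets) / 2      # 19.5 m midpoint offset
        print([mid / n for n in n_events])            # ~3.9-6.5 m per event
        print([age / n / 1000 for n in n_events])     # ~2.6-4.3 ka recurrence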

  5. Increased earthquake safety through optimised mounting concept

    International Nuclear Information System (INIS)

    Kollmann, Dieter; Senechal, Holger

    2013-01-01

    Since Fukushima, there has been intensive work on earthquake safety in all nuclear power plants. A large part of these efforts aims at the earthquake safety of safety-relevant pipeline systems. The problem with earthquake safety here is not the pipeline system itself but rather its mountings and connections to components. This is precisely the topic that the KAE dealt with in years of research and development work. It has developed an algorithm that determines, in a few iteration steps, the optimal mounting concept for arbitrary combinations of loading conditions while maintaining compliance with the relevant regulations for any pipeline system. With this tool at hand, we are now in a position to plan and realise remedial measures accurately with minimum time and hardware expenditure, and so distinctly improve the earthquake safety of safety-relevant systems. (orig.)

  6. Coping with earthquakes induced by fluid injection

    Science.gov (United States)

    McGarr, Arthur F.; Bekins, Barbara; Burkardt, Nina; Dewey, James W.; Earle, Paul S.; Ellsworth, William L.; Ge, Shemin; Hickman, Stephen H.; Holland, Austin F.; Majer, Ernest; Rubinstein, Justin L.; Sheehan, Anne

    2015-01-01

    Large areas of the United States long considered geologically stable with little or no detected seismicity have recently become seismically active. The increase in earthquake activity began in the mid-continent starting in 2001 (1) and has continued to rise. In 2014, the rate of occurrence of earthquakes with magnitudes (M) of 3 and greater in Oklahoma exceeded that in California (see the figure). This elevated activity includes larger earthquakes, several with M > 5, that have caused significant damage (2, 3). To a large extent, the increasing rate of earthquakes in the mid-continent is due to fluid-injection activities used in modern energy production (1, 4, 5). We explore potential avenues for mitigating effects of induced seismicity. Although the United States is our focus here, Canada, China, the UK, and others confront similar problems associated with oil and gas production, whereas quakes induced by geothermal activities affect Switzerland, Germany, and others.

  7. Strong ground motion prediction using virtual earthquakes.

    Science.gov (United States)

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion.

  8. Associating an ionospheric parameter with major earthquake ...

    Indian Academy of Sciences (India)

    ionospheric disturbance (SID) and 'td' is the duration of the ... dayside of the earth, ionizing atmospheric particles ... the increased emanation of excited radon molecules from the ground ...tration following strong earthquake; Int. J. Remote Sens.

  9. Electrostatically actuated resonant switches for earthquake detection

    KAUST Repository

    Ramini, Abdallah H.; Masri, Karim M.; Younis, Mohammad I.

    2013-01-01

    action can be functionalized for useful functionalities, such as shutting off gas pipelines in the case of earthquakes, or can be used to activate a network of sensors for seismic activity recording in health monitoring applications. By placing a

  10. Earthquakes as Expressions of Tectonic Activity

    Indian Academy of Sciences (India)

    Sources, Types and Examples. Kusala Rajendran ... Science, Bangalore. Her research interests are mostly ...ogy, and some highlights on Indian earthquake studies, and ...jects, I did Applied Geophysics from the University of Roorkee.

  11. Earthquake Damping Device for Steel Frame

    Science.gov (United States)

    Zamri Ramli, Mohd; Delfy, Dezoura; Adnan, Azlan; Torman, Zaida

    2018-04-01

    Structures such as buildings, bridges and towers are prone to collapse when natural phenomena like earthquakes occur. Therefore, many design codes have been reviewed and new technologies introduced to resist earthquake energy, especially in buildings, to avoid collapse. The tuned mass damper is one of the earthquake-reduction devices installed on structures to minimise the earthquake effect. This study aims to analyse the effectiveness of a tuned mass damper through experimental work and finite element modelling. Comparisons are made between these two models under harmonic excitation. Based on the results, installing a tuned mass damper reduces the dynamic response of the frame, but only at several input frequencies. At the highest input frequency applied, the tuned mass damper failed to reduce the responses. In conclusion, to use a proper damper design, detailed analysis must be carried out to ensure the design suits the location of the structure and its specific ground accelerations.
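
    To illustrate why a tuned mass damper helps only near its tuned frequency, a two-degree-of-freedom frequency-response sketch follows; every mass, stiffness, and damping value is an assumption, not the paper's test-frame data:

        # Steady-state response of a frame with and without a TMD.
        import numpy as np

        m1, k1, c1 = 1000.0, 4.0e5, 400.0   # frame properties (assumed)
        m2 = 0.05 * m1                       # 5% TMD mass ratio (assumed)
        w1 = np.sqrt(k1 / m1)
        k2 = m2 * w1**2                      # TMD tuned to the frame frequency
        c2 = 2 * 0.1 * np.sqrt(k2 * m2)      # 10% TMD damping (assumed)

        def frame_amplitude(w, with_tmd):
            """|x1| per unit harmonic force amplitude at frequency w."""
            if not with_tmd:
                return abs(1.0 / (-m1 * w**2 + 1j * c1 * w + k1))
            M = np.diag([m1, m2])
            C = np.array([[c1 + c2, -c2], [-c2, c2]])
            K = np.array([[k1 + k2, -k2], [-k2, k2]])
            H = np.linalg.inv(-M * w**2 + 1j * C * w + K)
            return abs(H[0, 0])

        for w in (0.8 * w1, w1, 1.2 * w1):
            print(w, frame_amplitude(w, False), frame_amplitude(w, True))
        # Large reduction at w1; little or no benefit away from the tuning.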

  12. Drinking Water Earthquake Resilience Paper Data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Data for the 9 figures contained in the paper, A SOFTWARE FRAMEWORK FOR ASSESSING THE RESILIENCE OF DRINKING WATER SYSTEMS TO DISASTERS WITH AN EXAMPLE EARTHQUAKE...

  13. Can Dams and Reservoirs Cause Earthquakes?

    Indian Academy of Sciences (India)

    induced earthquakes in that region. Figure 1. A cartoon to illustrate the spatial relationships between dam, reservoir ... learning experience for us graduate students. Thus, on that ... infallibility and persuasiveness as in Euclidean geometry. The.

  14. DYFI data for Induced Earthquake Studies

    Data.gov (United States)

    Department of the Interior — The significant rise in seismicity rates in Oklahoma and Kansas (OK–KS) in the last decade has led to an increased interest in studying induced earthquakes. Although...

  15. Safety and survival in an earthquake

    Science.gov (United States)

    ,

    1969-01-01

    Many earth scientists in this country and abroad are focusing their studies on the search for means of predicting impending earthquakes, but, as yet, an accurate prediction of the time and place of such an event cannot be made. From past experience, however, one can assume that earthquakes will continue to harass mankind and that they will occur most frequently in the areas where they have been relatively common in the past. In the United States, earthquakes can be expected to occur most frequently in the western states, particularly in Alaska, California, Washington, Oregon, Nevada, Utah, and Montana. The danger, however, is not confined to any one part of the country; major earthquakes have occurred at widely scattered locations.

  16. Disturbances in equilibrium function after major earthquake.

    Science.gov (United States)

    Honma, Motoyasu; Endo, Nobutaka; Osada, Yoshihisa; Kim, Yoshiharu; Kuriyama, Kenichi

    2012-01-01

    Major earthquakes were followed by large numbers of aftershocks, and significant outbreaks of dizziness occurred over a large area. However, it is unclear why major earthquakes cause dizziness. We conducted an intergroup trial on equilibrium dysfunction and psychological states associated with equilibrium dysfunction in individuals exposed to repetitive aftershocks versus those who were rarely exposed. Greater equilibrium dysfunction was observed in the aftershock-exposed group under conditions without visual compensation. Equilibrium dysfunction in the aftershock-exposed group appears to have arisen from disturbance of the inner ear, as well as individual vulnerability to state anxiety enhanced by repetitive exposure to aftershocks. We indicate potential effects of autonomic stress on equilibrium function after a major earthquake. Our findings may contribute to risk management of psychological and physical health after major earthquakes with aftershocks, and allow development of a new empirical approach to disaster care after such events.

  17. Shallow moonquakes - How they compare with earthquakes

    Science.gov (United States)

    Nakamura, Y.

    1980-01-01

    Of three types of moonquakes strong enough to be detectable at large distances - deep moonquakes, meteoroid impacts and shallow moonquakes - only shallow moonquakes are similar in nature to earthquakes. A comparison of various characteristics of moonquakes with those of earthquakes indeed shows a remarkable similarity between shallow moonquakes and intraplate earthquakes: (1) their occurrences are not controlled by tides; (2) they appear to occur in locations where there is evidence of structural weaknesses; (3) the relative abundances of small and large quakes (b-values) are similar, suggesting similar mechanisms; and (4) even the levels of activity may be close. The shallow moonquakes may be quite comparable in nature to intraplate earthquakes, and they may be of similar origin.

  18. Radon, gas geochemistry, groundwater, and earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    King, Chi-Yu [Power Reactor and Nuclear Fuel Development Corp., Tono Geoscience Center, Toki, Gifu (Japan)

    1998-12-31

    Radon monitoring in groundwater, soil air, and atmosphere has been continued in many seismic areas of the world for earthquake-prediction and active-fault studies. Some recent measurements of radon and other geochemical and hydrological parameters have been made for sufficiently long periods, with reliable instruments, and together with measurements of meteorological variables and solid-earth tides. The resultant data are useful in better distinguishing earthquake-related changes from various background noises. Some measurements have been carried out in areas where other geophysical measurements are also being made. Comparative studies of various kinds of geophysical data are helpful in ascertaining the reality of the earthquake-related and fault-related anomalies and in understanding the underlying mechanisms. Spatial anomalies of radon and other terrestrial gases have been observed for many active faults. Such observations indicate that gas concentrations are very much site dependent, particularly on fault zones where terrestrial fluids may move vertically. Temporal anomalies have been reliably observed before and after some recent earthquakes, including the 1995 Kobe earthquake, and the general pattern of anomaly occurrence remains the same as observed before: they are recorded at only relatively few sensitive sites, which can be at much larger distances than expected from existing earthquake-source models. The sensitivity of a sensitive site is also found to be changeable with time. These results clearly show the inadequacy of the existing dilatancy-fluid diffusion and elastic-dislocation models for earthquake sources to explain earthquake-related geochemical and geophysical changes recorded at large distances. (J.P.N.)

  19. Exploring Earthquakes in Real-Time

    Science.gov (United States)

    Bravo, T. K.; Kafka, A. L.; Coleman, B.; Taber, J. J.

    2013-12-01

    Earthquakes capture the attention of students and inspire them to explore the Earth. Adding the ability to view and explore recordings of significant and newsworthy earthquakes in real-time makes the subject even more compelling. To address this opportunity, the Incorporated Research Institutions for Seismology (IRIS), in collaboration with Moravian College, developed 'jAmaSeis', a cross-platform application that enables students to access real-time earthquake waveform data. Students can watch as the seismic waves are recorded on their computer, and can be among the first to analyze the data from an earthquake. jAmaSeis facilitates student-centered investigations of seismological concepts using either a low-cost educational seismograph or streamed data from other educational seismographs or from any seismic station that sends data to the IRIS Data Management System. After an earthquake, students can analyze the seismograms to determine characteristics of earthquakes such as time of occurrence, distance from the epicenter to the station, magnitude, and location. The software has been designed to provide graphical clues to guide students in the analysis and assist in their interpretations. Since jAmaSeis can simultaneously record up to three stations from anywhere on the planet, there are numerous opportunities for student-driven investigations. For example, students can explore differences in the seismograms from different distances from an earthquake and compare waveforms from different azimuthal directions. Students can simultaneously monitor seismicity at a tectonic plate boundary and in the middle of the plate regardless of their school location. This can help students discover for themselves the ideas underlying seismic wave propagation, regional earthquake hazards, magnitude-frequency relationships, and the details of plate tectonics. The real-time nature of the data keeps the investigations dynamic, and offers students countless opportunities to explore.
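
    One analysis the record describes, locating an earthquake from the S-minus-P arrival time difference, reduces to a line of arithmetic. The crustal velocities below are generic textbook values, not jAmaSeis defaults:

        # Epicentral distance from the S-P time under uniform velocities:
        # d/vs - d/vp = t_sp  =>  d = t_sp * vp * vs / (vp - vs)
        vp, vs = 6.0, 3.5   # km/s, assumed P and S speeds

        def distance_km(t_sp_seconds):
            return t_sp_seconds * vp * vs / (vp - vs)

        print(distance_km(30.0))   # 252 km for a 30 s S-P time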

  20. Catalog of Hawaiian earthquakes, 1823-1959

    Science.gov (United States)

    Klein, Fred W.; Wright, Thomas L.

    2000-01-01

    This catalog of more than 17,000 Hawaiian earthquakes (of magnitude greater than or equal to 5), principally located on the Island of Hawaii, from 1823 through the third quarter of 1959 is designed to expand our ability to evaluate seismic hazard in Hawaii, as well as our knowledge of Hawaiian seismic rhythms as they relate to eruption cycles at Kilauea and Mauna Loa volcanoes and to subcrustal earthquake patterns related to the tectonic evolution of the Hawaiian chain.

  1. Reliability of Soil Sublayers Under Earthquake Excitation

    DEFF Research Database (Denmark)

    Nielsen, Søren R. K.; Mørk, Kim Jørgensen

    A hysteretic model is formulated for a multi-layer subsoil subjected to horizontal earthquake shear waves (SH-waves). For each layer a modified Bouc-Wen model is used, relating the increments of the hysteretic shear stress to increments of the shear strain of the layer. Liquefaction is considered for each layer. The horizontal earthquake acceleration process at bedrock level is modelled as a non-stationary white noise, filtered through a time-invariant linear second-order filter.
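
    The record does not reproduce the modified Bouc-Wen equations; a minimal sketch of the standard Bouc-Wen hysteretic variable, integrated under an imposed cyclic displacement with illustrative constants, is:

        # Standard Bouc-Wen hysteresis: dz/dt = v*(A - |z|^n*(g + b*sign(v*z))).
        import numpy as np

        A, b, g, n = 1.0, 0.5, 0.5, 1.0          # illustrative model constants
        dt = 1e-3
        t = np.arange(0.0, 8.0, dt)
        x = 1.5 * np.sin(2 * np.pi * 0.5 * t)    # imposed displacement cycles
        v = np.gradient(x, dt)                   # imposed velocity

        z = 0.0
        z_hist = np.empty_like(t)
        for i, vi in enumerate(v):
            z += vi * (A - abs(z) ** n * (g + b * np.sign(vi * z))) * dt
            z_hist[i] = z

        # z saturates near +/-(A/(b+g))^(1/n) = +/-1, tracing hysteresis loops.
        print(z_hist.min(), z_hist.max())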

  2. The Christchurch earthquake stroke incidence study.

    Science.gov (United States)

    Wu, Teddy Y; Cheung, Jeanette; Cole, David; Fink, John N

    2014-03-01

    We examined the impact of major earthquakes on acute stroke admissions by a retrospective review of stroke admissions in the 6 weeks following the 4 September 2010 and 22 February 2011 earthquakes. The control period was the corresponding 6 weeks in the previous year. In the 6 weeks following the September 2010 earthquake there were 97 acute stroke admissions, with 79 (81.4%) ischaemic infarctions. This was similar to the 2009 control period which had 104 acute stroke admissions, of whom 80 (76.9%) had ischaemic infarction. In the 6 weeks following the February 2011 earthquake, there were 71 stroke admissions, and 61 (79.2%) were ischaemic infarction. This was less than the 96 strokes (72 [75%] ischaemic infarction) in the corresponding control period. None of the comparisons were statistically significant. There was also no difference in the rate of cardioembolic infarction from atrial fibrillation between the study periods. Patients admitted during the February 2011 earthquake period were less likely to be discharged directly home when compared to the control period (31.2% versus 46.9%, p=0.036). There was no observable trend in the number of weekly stroke admissions between the 2 weeks leading to and 6 weeks following the earthquakes. Our results suggest that severe psychological stress from earthquakes did not influence the subsequent short term risk of acute stroke, but the severity of the earthquake in February 2011 and associated civil structural damages may have influenced the pattern of discharge for stroke patients. Copyright © 2013 Elsevier Ltd. All rights reserved.
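
    The discharge-destination comparison above (31.2% versus 46.9%, p = 0.036) is a two-proportion test; the sketch below reproduces that kind of calculation with counts back-calculated only approximately from the reported percentages, so they are illustrative rather than the study's data:

        # Two-proportion z-test on discharge-to-home rates.
        from statsmodels.stats.proportion import proportions_ztest

        home = [22, 45]    # ~31% of 71 and ~47% of 96 (approximate counts)
        total = [71, 96]
        stat, p = proportions_ztest(home, total)
        print(p)           # close to the reported p = 0.036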

  3. Collaboratory for the Study of Earthquake Predictability

    Science.gov (United States)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure: the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  4. Mexican Earthquakes and Tsunamis Catalog Reviewed

    Science.gov (United States)

    Ramirez-Herrera, M. T.; Castillo-Aja, R.

    2015-12-01

    Today the availability of information on the internet makes online catalogs very easy to access by both scholars and the public in general. The catalog in the "Significant Earthquake Database", managed by the National Center for Environmental Information (NCEI, formerly NCDC), NOAA, allows access by deploying tabular and cartographic data related to earthquakes and tsunamis contained in the database. The NCEI catalog is the product of compiling previously existing catalogs, historical sources, newspapers, and scientific articles. Because the NCEI catalog has global coverage, the information is not homogeneous. The existence of historical information depends on the presence of people in places where the disaster occurred, and on the permanence of the description in documents and oral tradition. In the case of instrumental data, availability depends on the distribution and quality of seismic stations. Therefore, the availability of information for the first half of the 20th century can be improved by careful analysis of the available information and by searching for and resolving inconsistencies. This study shows the advances we made in upgrading and refining data for the earthquake and tsunami catalog of Mexico from 1500 CE until today, presented in the format of table and map. Data analysis allowed us to identify the following sources of error in the location of epicenters in existing catalogs:
    • Incorrect coordinate entry
    • Erroneous or mistaken place names
    • Data too general to locate the epicenter precisely, mainly for older earthquakes
    • Inconsistency between earthquake and tsunami occurrence: an earthquake epicenter located too far inland reported as tsunamigenic.
    The process of completing the catalogs directly depends on the availability of information; as new archives are opened for inspection, there are more opportunities to complete the history of large earthquakes and tsunamis in Mexico. Here, we also present new earthquake and

  5. Simultaneous estimation of earthquake source parameters and ...

    Indian Academy of Sciences (India)

    moderate-size aftershocks (Mw 2.1–5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal- ... claimed a death toll of 20,000 people. This earthquake occurred west of Kachchh, with an epicenter at 24°N, 68 ... for dominance of body waves for R ≤ 100 km ... Bhuj earthquake sequence; J. Asian Earth Sci. 40.

  6. Natural Gas Extraction, Earthquakes and House Prices

    OpenAIRE

    Hans R.A. Koster; Jos N. van Ommeren

    2015-01-01

    The production of natural gas is strongly increasing around the world. Long-run negative external effects of extraction are understudied and often ignored in (social) cost-benefit analyses. One important example is that natural gas extraction leads to soil subsidence and subsequent induced earthquakes that may occur only after a couple of decades. We show that induced earthquakes that are noticeable to residents generate substantial non-monetary economic effects, as measured by their effects o...

  7. Earthquake risk assessment of Alexandria, Egypt

    Science.gov (United States)

    Badawy, Ahmed; Gaber, Hanan; Ibrahim, Hamza

    2015-01-01

    Throughout historical and recent times, Alexandria has suffered great damage due to earthquakes from both near- and far-field sources. Sometimes, the sources of such damage are not well known. During the twentieth century, the city was shaken by several earthquakes generated from inland dislocations (e.g., 29 Apr. 1974, 12 Oct. 1992, and 28 Dec. 1999) and the African continental margin (e.g., 12 Sept. 1955 and 28 May 1998). Therefore, this study estimates the earthquake ground shaking and the consequent impacts in Alexandria on the basis of two earthquake scenarios. The simulation results show that Alexandria is affected by both earthquake scenarios in much the same manner, although the number of casualties in the first scenario (inland dislocation) is twice that of the second (African continental margin). Running the first scenario, an estimated 2.27 % of Alexandria's total constructions (12.9 million, 2006 Census) would be affected, with injuries to 0.19 % and deaths of 0.01 % of the total population (4.1 million, 2006 Census). The earthquake risk profile reveals that three districts (Al-Montazah, Al-Amriya, and Shark) lie at high seismic risk, two districts (Gharb and Wasat) at moderate risk, and two districts (Al-Gomrok and Burg El-Arab) at low risk. Moreover, the building damage estimates show that Al-Montazah is the most vulnerable district, with 73 % of the expected damage concentrated there. The analysis shows that the Alexandria urban area faces high risk. Informal areas and deteriorating buildings and infrastructure make the city particularly vulnerable to earthquake risks. For instance, more than 90 % of the estimated earthquake risks (building damage) are concentrated in the most densely populated districts (Al-Montazah, Al-Amriya, and Shark). Moreover, about 75 % of casualties are in the same districts.

  8. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    Science.gov (United States)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for us to investigate these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in the northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore the effects of topographic loading and of the viscosity of the middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on neighboring faults or segments, accelerating future earthquakes in this region. Thus, earthquakes occur sequentially within a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults, which are far apart, may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model's synthetic seismic catalog and paleoseismic data, we analyze the probability of earthquake migration between major faults in the northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in the northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
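
    The Coulomb stress transfer at the heart of such models reduces to a simple criterion, sketched below; the 0.4 effective friction coefficient is a commonly assumed value, not this study's calibrated parameter:

        # Coulomb failure stress change on a receiver fault (MPa).
        def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
            """d_shear: shear stress change in the slip direction;
            d_normal: normal stress change (unclamping positive);
            mu_eff: effective friction coefficient (assumed)."""
            return d_shear + mu_eff * d_normal

        print(coulomb_stress_change(0.05, -0.02))   # +0.042 MPa promotes failure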

  9. Cumulative effects in Swedish EIA practice - difficulties and obstacles

    International Nuclear Information System (INIS)

    Waernbaeck, Antoienette; Hilding-Rydevik, Tuija

    2009-01-01

    The importance of considering cumulative effects (CE) in the context of environmental assessment is manifested in the EU regulations. The demands on the contents of Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) documents explicitly ask for CE to be described. In Swedish environmental assessment documents CE are rarely described or included. The aim of this paper is to look into the reasons behind this fact in the Swedish context. The paper describes and analyses how actors implementing the EIA and SEA legislation in Sweden perceive the current situation in relation to the legislative demands and the inclusion of cumulative effects. Through semi-structured interviews the following questions have been explored: Is the phenomenon of CE discussed and included in the EIA/SEA process? What do the actors include in it, and what is their knowledge of the term and concept of CE? Which difficulties and obstacles do these actors experience, and what possibilities for inclusion of CE do they see in the EIA/SEA process? A large number of obstacles and hindrances emerged from the interviews conducted. It can be concluded from the analysis that the will to act does seem to exist. A lack of knowledge of how to include cumulative effects and a lack of clear regulations concerning how this should be done seem to be perceived as the main obstacles. Knowledge of the term and the phenomenon is furthermore quite narrow and not all-encompassing. The interviewees experience a lack of procedures in place. They also seem to lack knowledge of methods for how to actually work, in practice, with CE and how to include CE in the EIA/SEA process. The existence of this poor picture of practice concerning CE in the context of impact assessment mirrors the existing, and so far rather vague, demands in respect of the inclusion and assessment of CE in Swedish EIA and SEA legislation, regulations, guidelines and

  10. Technical Note: SCUDA: A software platform for cumulative dose assessment

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seyoun; McNutt, Todd; Quon, Harry; Wong, John; Lee, Junghoon, E-mail: rshekhar@childrensnational.org, E-mail: junghoon@jhu.edu [Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, Maryland 21231 (United States); Plishker, William [IGI Technologies, Inc., College Park, Maryland 20742 (United States); Shekhar, Raj, E-mail: rshekhar@childrensnational.org, E-mail: junghoon@jhu.edu [IGI Technologies, Inc., College Park, Maryland 20742 and Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington, DC 20010 (United States)

    2016-10-15

    Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides
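
    The dose-accumulation step can be sketched generically: warp each daily dose into planning-CT space with the DIR-derived deformation field, then sum. The code below uses synthetic arrays and trilinear interpolation and is not SCUDA's GPU implementation:

        # Accumulate daily doses mapped back to planning-CT voxels.
        import numpy as np
        from scipy.ndimage import map_coordinates

        shape = (32, 32, 32)
        grid = np.indices(shape).astype(float)   # planning-CT voxel coordinates

        def accumulate(daily_doses, deformation_fields):
            """deformation_fields: (3, *shape) displacements mapping each
            planning-CT voxel into the corresponding daily image."""
            total = np.zeros(shape)
            for dose, dvf in zip(daily_doses, deformation_fields):
                total += map_coordinates(dose, grid + dvf, order=1, mode="nearest")
            return total

        # Sanity check: identity deformation reduces to a plain sum.
        doses = [np.random.rand(*shape) for _ in range(3)]
        dvfs = [np.zeros((3,) + shape) for _ in range(3)]
        print(np.allclose(accumulate(doses, dvfs), sum(doses)))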

  11. Experimental study of structural response to earthquakes

    International Nuclear Information System (INIS)

    Clough, R.W.; Bertero, V.V.; Bouwkamp, J.G.; Popov, E.P.

    1975-01-01

    The objectives, methods, and some of the principal results obtained from experimental studies of the behavior of structures subjected to earthquakes are described. Although such investigations are being conducted in many laboratories throughout the world, the information presented deals specifically with projects being carried out at the Earthquake Engineering Research Center (EERC) of the University of California, Berkeley. A primary purpose of these investigations is to obtain detailed information on the inelastic response mechanisms in typical structural systems so that the experimentally observed performance can be compared with computer generated analytical predictions. Only by such comparisons can the mathematical models used in dynamic nonlinear analyses be verified and improved. Two experimental procedures for investigating earthquake structural response are discussed: the earthquake simulator facility, which subjects the base of the test structure to acceleration histories similar to those recorded in actual earthquakes, and systems of hydraulic rams, which impose specified displacement histories on the test components, equivalent to motions developed in structures subjected to actual earthquakes. The general concept and performance of the 20-ft-square EERC earthquake simulator is described, and the testing of a two-story concrete frame building is outlined. Correlation of the experimental results with analytical predictions demonstrates that satisfactory agreement can be obtained only if the mathematical model incorporates a stiffness deterioration mechanism which simulates the cracking and other damage suffered by the structure.

  12. Studies of the subsurface effects of earthquakes

    International Nuclear Information System (INIS)

    Marine, I.W.

    1980-01-01

    As part of the National Terminal Waste Storage Program, the Savannah River Laboratory is conducting a series of studies on the subsurface effects of earthquakes. This report summarizes three subcontracted studies. (1) Earthquake damage to underground facilities: the purpose of this study was to document damage and nondamage caused by earthquakes to tunnels and shallow underground openings; to mines and other deep openings; and to wells, shafts, and other vertical facilities. (2) Earthquake related displacement fields near underground facilities: the study included an analysis of block motion, an analysis of the dependence of displacement on the orientation and distance of joints from the earthquake source, and displacement related to distance and depth near a causative fault as a result of various shapes, depths, and senses of movement on the causative fault. (3) Numerical simulation of earthquake effects on tunnels for generic nuclear waste repositories: the objective of this study was to use numerical modeling to determine under what conditions seismic waves might cause instability of an underground opening or create fracturing that would increase the permeability of the rock mass

  13. Relationship of heat and cold to earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.

    1980-06-26

    An analysis of 54 earthquakes of magnitude 7 and above, including 13 of magnitude 8 and above, between 780 BC and the present, shows that the vast majority of them fell in the four major cool periods during this time span, or on the boundaries of these periods. Between 1800 and 1876, four periods of earthquake activity in China can be recognized, and these tend to correspond to relatively cold periods over that time span. An analysis of earthquakes of magnitude 6 or above over the period 1951 to 1965 gives the following results: earthquakes in north and southwest China tended to occur when the preceding year had an above-average annual temperature and winter temperature; in the northeast they tended to occur in a year after a year with an above-average winter temperature; in the northwest there was also a connection with a preceding warm winter, but to a less pronounced degree. The few earthquakes in South China seemed to follow cold winters. Both the Tangshan and Yongshan Pass earthquakes were preceded by unusually warm years and relatively high winter temperatures.

  14. Critical behavior in earthquake energy dissipation

    Science.gov (United States)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes flanked by latitudes 29° S and 35.5° S, and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place due to largest bursts of activity, such as major earthquakes, as opposed to smaller activations which contribute less significantly though they have greater relative occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.
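
    Power-law exponents like those reported here are typically estimated by maximum likelihood; a minimal sketch using the Hill/Clauset estimator on synthetic Pareto-distributed "burst energies" (all values illustrative):

        # MLE for a power-law tail: alpha = 1 + n / sum(ln(x / xmin)).
        import numpy as np

        rng = np.random.default_rng(1)
        xmin, alpha_true = 1.0, 1.8
        x = xmin * (1.0 - rng.random(5000)) ** (-1.0 / (alpha_true - 1.0))

        alpha_hat = 1.0 + x.size / np.log(x / xmin).sum()
        print(alpha_hat)   # recovers a value close to 1.8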

  15. Earthquakes trigger the loss of groundwater biodiversity

    Science.gov (United States)

    Galassi, Diana M. P.; Lombardo, Paola; Fiasca, Barbara; di Cioccio, Alessia; di Lorenzo, Tiziana; Petitta, Marco; di Carlo, Piero

    2014-09-01

    Earthquakes are among the most destructive natural events. The 6 April 2009, 6.3-Mw earthquake in L'Aquila (Italy) markedly altered the karstic Gran Sasso Aquifer (GSA) hydrogeology and geochemistry. The GSA groundwater invertebrate community is mainly comprised of small-bodied, colourless, blind microcrustaceans. We compared abiotic and biotic data from two pre-earthquake and one post-earthquake complete but non-contiguous hydrological years to investigate the effects of the 2009 earthquake on the dominant copepod component of the obligate groundwater fauna. Our results suggest that the massive earthquake-induced aquifer strain biotriggered a flushing of groundwater fauna, with a dramatic decrease in subterranean species abundance. Population turnover rates appeared to have crashed, no longer replenishing the long-standing communities from aquifer fractures, and the aquifer became almost totally deprived of animal life. Groundwater communities are notorious for their low resilience. Therefore, any major disturbance that negatively impacts survival or reproduction may lead to local extinction of species, most of them being the only survivors of phylogenetic lineages extinct at the Earth surface. Given the ecological key role played by the subterranean fauna as decomposers of organic matter and "ecosystem engineers", we urge more detailed, long-term studies on the effect of major disturbances to groundwater ecosystems.

  16. Megathrust earthquakes in Central Chile: What is next after the Maule 2010 earthquake?

    Science.gov (United States)

    Madariaga, R.

    2013-05-01

    The 27 February 2010 Maule earthquake occurred in a well-identified gap in the Chilean subduction zone. The event has now been studied in detail using far-field and near-field seismic and geodetic data; we will review the information gathered so far. The event broke a region that was much longer along strike than the gap left over from the 1835 Concepcion earthquake, sometimes called the Darwin earthquake because Darwin was in the area when it occurred and made many observations. Recent studies of contemporary documents by Udias et al. indicate that the area broken by the Maule earthquake in 2010 had previously broken in a similar earthquake in 1751, but several events in the magnitude 8 range occurred in the area, principally the 1835 event already mentioned and, more recently, events on 1 December 1928 to the north and on 21 May 1960 (1 1/2 days before the big Chilean earthquake of 1960). Currently the area of the 2010 earthquake and the region immediately to the north are undergoing a very large increase in seismicity, with numerous clusters of seismicity that move along the plate interface. Examination of the seismicity of Chile in the 18th and 19th centuries shows that the region immediately to the north of the 2010 earthquake broke in a very large megathrust event in July 1730; this is the largest known earthquake in central Chile. The region where this event occurred has broken on many occasions in M 8 range earthquakes, in 1822, 1880, 1906, 1971 and 1985. Is it preparing for a new very large megathrust event? The 1906 earthquake of Mw 8.3 filled the central part of the gap, but the region has broken again on several occasions, in 1971, 1973 and 1985. The main question is whether the 1906 earthquake relieved enough stress from the 1730 rupture zone. Geodetic data show that most of the region that broke in 1730 is currently almost fully locked, from the northern end of the Maule earthquake at 34.5°S to 30°S, near the southern end of the Mw 8.5 Atacama earthquake of 11

  17. Retrospective stress-forecasting of earthquakes

    Science.gov (United States)

    Gao, Yuan; Crampin, Stuart

    2015-04-01

    Observations of changes in azimuthally varying shear-wave splitting (SWS) above swarms of small earthquakes monitor stress-induced changes to the stress-aligned vertical microcracks pervading the upper crust, lower crust, and uppermost ~400 km of the mantle. (The microcracks are intergranular films of hydrolysed melt in the mantle.) Earthquakes release stress, and an appropriate amount of stress for the relevant magnitude must accumulate before each event. Iceland is on an extension of the Mid-Atlantic Ridge, where two transform zones uniquely run onshore. These onshore transform zones provide semi-continuous swarms of small earthquakes, which are the only place worldwide where SWS can be routinely monitored. Elsewhere SWS must be monitored above temporally-active occasional swarms of small earthquakes, or in infrequent SKS and other teleseismic reflections from the mantle. Observations of changes in SWS time-delays are attributed to stress-induced changes in crack aspect-ratios, allowing stress-accumulation and stress-relaxation to be identified. Monitoring SWS in SW Iceland in 1988, stress-accumulation before an impending earthquake was recognised and emails were exchanged between the University of Edinburgh (EU) and the Iceland Meteorological Office (IMO). On 10th November 1988, EU emailed IMO that a M5 earthquake could occur soon on a seismically-active fault plane where seismicity was still continuing following a M5.1 earthquake six months earlier. Three days later, IMO emailed EU that a M5 earthquake had just occurred on the specified fault-plane. We suggest this is a successful earthquake stress-forecast, where we refer to the procedure as stress-forecasting earthquakes as opposed to predicting or forecasting to emphasise the different formalism. Lack of funds has prevented us from monitoring SWS on Iceland seismograms; however, we have identified similar characteristic behaviour of SWS time-delays above swarms of small earthquakes which have enabled us to

  18. Cumulative trauma and symptom complexity in children: a path analysis.

    Science.gov (United States)

    Hodges, Monica; Godbout, Natacha; Briere, John; Lanktree, Cheryl; Gilbert, Alicia; Kletzka, Nicole Taylor

    2013-11-01

    Multiple trauma exposures during childhood are associated with a range of psychological symptoms later in life. In this study, we examined whether the total number of different types of trauma experienced by children (cumulative trauma) is associated with the complexity of their subsequent symptomatology, where complexity is defined as the number of different symptom clusters simultaneously elevated into the clinical range. Children's symptoms in six different trauma-related areas (e.g., depression, anger, posttraumatic stress) were reported both by child clients and their caretakers in a clinical sample of 318 children. Path analysis revealed that accumulated exposure to multiple different trauma types predicts symptom complexity as reported by both children and their caretakers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Near-Field Source Localization Using a Special Cumulant Matrix

    Science.gov (United States)

    Cui, Han; Wei, Gang

    A new near-field source localization algorithm based on a uniform linear array is proposed. The algorithm estimates each parameter separately but does not require parameter pairing. It can be divided into two steps. The first step is bearing-related electric angle estimation based on the ESPRIT algorithm, by constructing a special cumulant matrix. The second step estimates the other electric angle from a 1-D MUSIC spectrum. The method offers much lower computational complexity than the traditional near-field 2-D MUSIC algorithm and performs better than the high-order ESPRIT algorithm. Simulation results demonstrate that the performance of the proposed algorithm is close to the Cramer-Rao Bound (CRB).
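
    The special cumulant matrix itself is not given in this record; as a sketch of the second step, here is a textbook 1-D MUSIC pseudospectrum for a uniform linear array in its far-field covariance form, with an assumed two-source scenario:

        # 1-D MUSIC direction finding on a uniform linear array.
        import numpy as np
        from scipy.signal import find_peaks

        M, d, snapshots = 8, 0.5, 200          # sensors, spacing (wavelengths)
        theta = np.deg2rad([-20.0, 35.0])      # assumed source bearings
        rng = np.random.default_rng(2)
        n_idx = np.arange(M)[:, None]

        A = np.exp(2j * np.pi * d * n_idx * np.sin(theta)[None, :])
        S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
        noise = 0.1 * (rng.standard_normal((M, snapshots))
                       + 1j * rng.standard_normal((M, snapshots)))
        X = A @ S + noise

        R = X @ X.conj().T / snapshots
        _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
        En = vecs[:, : M - 2]                  # noise subspace (2 sources known)

        grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
        a = np.exp(2j * np.pi * d * n_idx * np.sin(grid)[None, :])
        p_music = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2
        peaks, _ = find_peaks(p_music)
        best = peaks[np.argsort(p_music[peaks])[-2:]]
        print(np.sort(np.rad2deg(grid[best])))   # peaks near -20 and 35 degrees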

  20. Cumulative growth of minor hysteresis loops in the Kolmogorov model

    International Nuclear Information System (INIS)

    Meilikhov, E. Z.; Farzetdinova, R. M.

    2013-01-01

    The phenomenon of nonrepeatability of successive remagnetization cycles in Co/M (M = Pt, Pd, Au) multilayer film structures is explained in the framework of the Kolmogorov crystallization model. It is shown that this model of phase transitions can be adapted so as to adequately describe the process of magnetic relaxation in the indicated systems with “memory.” For this purpose, it is necessary to introduce some additional elements into the model, in particular, (i) to take into account the fact that every cycle starts from a state “inherited” from the preceding cycle and (ii) to assume that the rate of growth of a new magnetic phase depends on the cycle number. This modified model provides a quite satisfactory qualitative and quantitative description of all features of successive magnetic relaxation cycles in the system under consideration, including the surprising phenomenon of cumulative growth of minor hysteresis loops.

  1. Cumulative protons in 12C fragmentation at intermediate energy

    International Nuclear Information System (INIS)

    Abramov, B.M.; Alekseev, P.N.; Borodin, Y.A.; Bulychjov, S.A.; Dukhovskoi, I.A.; Khanov, A.I.; Krutenkova, A.P.; Kulikov, V.V.; Martemianov, M.A.; Matsuk, M.A.; Turdakina, E.N.

    2014-01-01

    In the FRAGM experiment at the heavy-ion accelerator complex TWAC-ITEP, proton yields at an angle of 3.5 degrees have been measured in the fragmentation of carbon ions on a beryllium target at T0 = 0.3, 0.6, 0.95, and 2.0 GeV/nucleon. The data are presented as invariant proton yields versus the cumulative variable x in the range 0.9 < x < 2.4. The proton spectra cover six orders of magnitude of invariant cross section. They have been analyzed in the framework of a quark cluster fragmentation model, using the fragmentation functions of the quark-gluon string model. The probabilities of the existence of multi-quark clusters in carbon nuclei are estimated to be 8–12% for six-quark clusters and 0.2–0.6% for nine-quark clusters. (authors)

  2. Ratcheting up the ratchet: on the evolution of cumulative culture.

    Science.gov (United States)

    Tennie, Claudio; Call, Josep; Tomasello, Michael

    2009-08-27

    Some researchers have claimed that chimpanzee and human culture rest on homologous cognitive and learning mechanisms. While clearly there are some homologous mechanisms, we argue here that there are some different mechanisms at work as well. Chimpanzee cultural traditions represent behavioural biases of different populations, all within the species' existing cognitive repertoire (what we call the 'zone of latent solutions') that are generated by founder effects, individual learning and mostly product-oriented (rather than process-oriented) copying. Human culture, in contrast, has the distinctive characteristic that it accumulates modifications over time (what we call the 'ratchet effect'). This difference results from the facts that (i) human social learning is more oriented towards process than product and (ii) unique forms of human cooperation lead to active teaching, social motivations for conformity and normative sanctions against non-conformity. Together, these unique processes of social learning and cooperation lead to humans' unique form of cumulative cultural evolution.

  3. EXPERIMENTAL VALIDATION OF CUMULATIVE SURFACE LOCATION ERROR FOR TURNING PROCESSES

    Directory of Open Access Journals (Sweden)

    Adam K. Kiss

    2016-02-01

    The aim of this study is to create a mechanical model suitable for investigating surface quality in turning processes, based on the Cumulative Surface Location Error (CSLE), which describes the series of consecutive Surface Location Errors (SLE) in roughing operations. In the established model, the investigated CSLE depends on the current and the previously resulting SLE by means of the variation of the width of cut. The behaviour of the system can be described as an implicit discrete map. The stationary Surface Location Error and its bifurcations were analysed, and a flip-type bifurcation was observed for the CSLE. Experimental verification of the theoretical results was carried out.
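
    The implicit CSLE map is not reproduced in this record; as a generic illustration of the flip (period-doubling) bifurcation it reports, the logistic map below loses the stability of its fixed point to a 2-cycle when the control parameter passes 3:

        # Flip bifurcation in a discrete map: fixed point -> two-cycle.
        def orbit_tail(r, n_transient=500, n_keep=8):
            x = 0.4
            for _ in range(n_transient):
                x = r * x * (1.0 - x)
            tail = set()
            for _ in range(n_keep):
                x = r * x * (1.0 - x)
                tail.add(round(x, 4))
            return sorted(tail)

        print(orbit_tail(2.8))   # single value: stable fixed point
        print(orbit_tail(3.2))   # two values: the flip has occurred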

  5. Cumulative neutrino background from quasar-driven outflows

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiawei; Loeb, Abraham, E-mail: xiawei.wang@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu [Department of Astronomy, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2016-12-01

    Quasar-driven outflows naturally account for the missing component of the extragalactic γ-ray background through neutral pion production in interactions between protons accelerated by the forward outflow shock and interstellar protons. We study the simultaneous neutrino emission by the same protons. We adopt outflow parameters that best fit the extragalactic γ-ray background data and derive a cumulative neutrino background of ∼10^{-7} GeV cm^{-2} s^{-1} sr^{-1} at neutrino energies E_ν ≳ 10 TeV, which naturally explains the most recent IceCube data without tuning any free parameters. The link between the γ-ray and neutrino emission from quasar outflows can be used to constrain the high-energy physics of strong shocks at cosmological distances.

  6. Using Fuzzy Probability Weights in Cumulative Prospect Theory

    Directory of Open Access Journals (Sweden)

    Užga-Rebrovs Oļegs

    2016-12-01

    Full Text Available In recent years, descriptive approaches to decision choice have grown rapidly. As opposed to normative expected utility theory, these approaches are based on the subjective perception of probabilities by individuals, which takes place in real situations of risky choice. Perceptions of this kind are modelled with probability weighting functions. In cumulative prospect theory, which is the focus of this paper, decision weights for prospect outcomes are calculated from the obtained probability weights. If value functions are constructed on the sets of positive and negative outcomes, then, based on the outcome value evaluations and outcome decision weights, generalised evaluations of prospect value are calculated, which form the basis for choosing an optimal prospect.
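
    As a minimal sketch of how cumulative decision weights are computed from a probability weighting function, the snippet below uses the classic Tversky-Kahneman weighting form for gains; the paper itself works with fuzzy probability weights, which are not reproduced here, and the parameter values are the commonly quoted estimates, not the author's (Python, numpy assumed).

        import numpy as np

        def w(p, gamma=0.61):
            # Tversky-Kahneman probability weighting function (gains form).
            return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

        def cpt_value(outcomes, probs, alpha=0.88, gamma=0.61):
            # Cumulative prospect value of a gains-only prospect: rank outcomes
            # best-first, weight the *cumulative* probabilities, and difference
            # them to obtain the decision weights.
            order = np.argsort(outcomes)[::-1]
            x, p = np.asarray(outcomes, float)[order], np.asarray(probs, float)[order]
            cum = np.cumsum(p)                        # P(outcome >= x_i)
            prev = np.concatenate(([0.0], cum[:-1]))  # P(outcome >  x_i)
            pi = w(cum, gamma) - w(prev, gamma)       # decision weights
            return float(np.sum(pi * x**alpha))       # power value function

        # 50:50 chance of 100 or nothing: the weighted value is well below 50^0.88.
        print(cpt_value([100.0, 0.0], [0.5, 0.5]))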

  7. Modelling the evolution and diversity of cumulative culture

    Science.gov (United States)

    Enquist, Magnus; Ghirlanda, Stefano; Eriksson, Kimmo

    2011-01-01

    Previous work on mathematical models of cultural evolution has mainly focused on the diffusion of simple cultural elements. However, a characteristic feature of human cultural evolution is the seemingly limitless appearance of new and increasingly complex cultural elements. Here, we develop a general modelling framework to study such cumulative processes, in which we assume that the appearance and disappearance of cultural elements are stochastic events that depend on the current state of culture. Five scenarios are explored: evolution of independent cultural elements, stepwise modification of elements, differentiation or combination of elements and systems of cultural elements. As one application of our framework, we study the evolution of cultural diversity (in time as well as between groups). PMID:21199845
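
    The kind of stochastic appearance/disappearance process described above can be sketched, for the simplest 'independent cultural elements' scenario, as a discrete-time simulation; the rates below are invented for illustration and are not the paper's (Python standard library only).

        import random

        def simulate(beta=0.5, delta=0.01, steps=5000, seed=1):
            # Each step: a new cultural element appears with probability beta,
            # and each existing element is independently lost with probability delta.
            rng = random.Random(seed)
            n, history = 0, []
            for _ in range(steps):
                if rng.random() < beta:
                    n += 1
                n -= sum(rng.random() < delta for _ in range(n))
                history.append(n)
            return history

        traj = simulate()
        print("final cultural richness:", traj[-1], "~ expected beta/delta =", 0.5 / 0.01)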

  8. Optimal execution with price impact under Cumulative Prospect Theory

    Science.gov (United States)

    Zhao, Jingdong; Zhu, Hongliang; Li, Xindan

    2018-01-01

    Optimal execution of a stock (or portfolio) has been widely studied in academia and in practice over the past decade, and minimizing transaction costs is a critical point. However, few researchers consider the psychological factors for the traders. What are traders truly concerned with - buying low in the paper accounts or buying lower compared to others? We consider the optimal trading strategies in terms of the price impact and Cumulative Prospect Theory and identify some specific properties. Our analyses indicate that a large proportion of the execution volume is distributed at both ends of the transaction time. But the trader's optimal strategies may not be implemented at the same transaction size and speed in different market environments.

  9. Practical management of cumulative anthropogenic impacts with working marine examples

    DEFF Research Database (Denmark)

    Kyhn, Line Anker; Wright, Andrew J.

    2014-01-01

    Human pressure on the environment is expanding and intensifying, especially in coastal and offshore areas, driven by the push for offshore renewable energy as well as the continued demand for petroleum. Human disturbances, including the noise almost ubiquitously associated with human activity, are likely to increase the incidence, magnitude, and duration of adverse effects on marine life, including stress responses. Stress responses have the potential to induce fitness consequences for individuals. A cap on impact can be facilitated through implementation of regular application cycles for project authorization or improved programmatic and aggregated impact assessments that simultaneously consider multiple projects. Cross-company collaborations and a better incorporation of uncertainty into decision making could also help limit, if not reduce, the cumulative impacts of multiple human activities. These simple management steps may also form the basis of a rudimentary form of marine spatial planning and could be used in support of future ecosystem-based management efforts.

  10. Practical management of cumulative anthropogenic impacts with working marine examples.

    Science.gov (United States)

    Wright, Andrew J; Kyhn, Line A

    2015-04-01

    Human pressure on the environment is expanding and intensifying, especially in coastal and offshore areas. Major contributors to this are the current push for offshore renewable energy sources, which are thought of as environmentally friendly sources of power, as well as the continued demand for petroleum. Human disturbances, including the noise almost ubiquitously associated with human activity, are likely to increase the incidence, magnitude, and duration of adverse effects on marine life, including stress responses. Stress responses have the potential to induce fitness consequences for individuals, which add to more obvious directed takes (e.g., hunting or fishing) to increase the overall population-level impact. To meet the requirements of marine spatial planning and ecosystem-based management, many efforts are ongoing to quantify the cumulative impacts of all human actions on marine species or populations. Meanwhile, regulators face the challenge of managing these accumulating and interacting impacts with limited scientific guidance. We believe there is scientific support for capping the level of impact for (at a minimum) populations in decline or with unknown statuses. This cap on impact can be facilitated through implementation of regular application cycles for project authorization or improved programmatic and aggregated impact assessments that simultaneously consider multiple projects. Cross-company collaborations and a better incorporation of uncertainty into decision making could also help limit, if not reduce, cumulative impacts of multiple human activities. These simple management steps may also form the basis of a rudimentary form of marine spatial planning and could be used in support of future ecosystem-based management efforts. © 2014 Society for Conservation Biology.

  11. County-level cumulative environmental quality associated with cancer incidence.

    Science.gov (United States)

    Jagai, Jyotsna S; Messer, Lynne C; Rappazzo, Kristen M; Gray, Christine L; Grabich, Shannon C; Lobdell, Danelle T

    2017-08-01

    Individual environmental exposures are associated with cancer development; however, environmental exposures occur simultaneously. The Environmental Quality Index (EQI) is a county-level measure of cumulative environmental exposures across 5 domains. The EQI was linked to county-level annual age-adjusted cancer incidence rates from the Surveillance, Epidemiology, and End Results (SEER) Program state cancer profiles. All-site cancer and the top 3 site-specific cancers for male and female subjects were considered. Incidence rate differences (IRDs; annual rate difference per 100,000 persons) and 95% confidence intervals (CIs) were estimated using fixed-slope, random-intercept multilevel linear regression models. Associations were assessed with domain-specific indices, and analyses were stratified by rural/urban status. Comparing the highest quintile/poorest environmental quality with the lowest quintile/best environmental quality for the overall EQI, the all-site county-level cancer incidence rate was positively associated with poor environmental quality overall (IRD, 38.55; 95% CI, 29.57-47.53) and for male (IRD, 32.60; 95% CI, 16.28-48.91) and female (IRD, 30.34; 95% CI, 20.47-40.21) subjects, indicating a potential increase in cancer incidence with decreasing environmental quality. Rural/urban stratified models demonstrated positive associations comparing the highest with the lowest quintiles for all strata, except the thinly populated/rural stratum and in the metropolitan/urbanized stratum. Prostate and breast cancer demonstrated the strongest positive associations with poor environmental quality. We observed strong positive associations between the EQI and all-site cancer incidence rates, and associations differed by rural/urban status and environmental domain. Research focusing on single environmental exposures in cancer development may not address the broader environmental context in which cancers develop, and future research should address cumulative environmental exposures.
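
    A fixed-slope, random-intercept multilevel model of the kind described above can be sketched with statsmodels; the data frame below is synthetic and the column names are illustrative, not the study's (Python, numpy/pandas/statsmodels assumed).

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 300
        df = pd.DataFrame({
            "state": rng.integers(0, 10, n),   # grouping level (random intercept)
            "eqi_q": rng.integers(1, 6, n),    # EQI quintile, 1 = best, 5 = poorest
        })
        # Synthetic county rates with a built-in quintile effect.
        df["rate"] = 430 + 9.5 * df["eqi_q"] + rng.normal(0, 25, n)

        # Fixed slope (common EQI effect), random intercept by state; the
        # Q5-vs-Q1 contrast plays the role of the IRD reported above.
        fit = smf.mixedlm("rate ~ C(eqi_q)", df, groups=df["state"]).fit()
        print(fit.summary())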

  12. Economic and policy implications of the cumulative carbon budget

    Science.gov (United States)

    Allen, M. R.; Otto, F. E. L.; Otto, A.; Hepburn, C.

    2014-12-01

    The importance of cumulative carbon emissions in determining long-term risks of climate change presents considerable challenges to policy makers. The traditional notion of "total CO2-equivalent emissions", which forms the backbone of agreements such as the Kyoto Protocol and the European Emissions Trading System, is fundamentally flawed. Measures to reduce short-lived climate pollutants benefit the current generation, while measures to reduce long-lived climate pollutants benefit future generations, so there is no sense in which they can ever be considered equivalent. Debates over the correct metric used to compute CO2-equivalence are thus entirely moot: both long-lived and short-lived emissions will need to be addressed if all generations are to be protected from dangerous climate change. As far as long-lived climate pollutants are concerned, the latest IPCC report highlights the overwhelming importance of carbon capture and storage in determining the cost of meeting the goal of limiting anthropogenic warming to two degrees. We will show that this importance arises directly from the cumulative carbon budget and the role of CCS as the technology of last resort before economic activity needs to be restricted to meet ambitious climate targets. It highlights the need to increase the rate of CCS deployment by orders of magnitude if the option of avoiding two degrees is to be retained. The difficulty of achieving this speed of deployment through conventional incentives and carbon-pricing mechanisms suggests a need for a much more direct mandatory approach. Despite their theoretical economic inefficiency, the success of recent regulatory measures in achieving greenhouse gas emissions reductions in jurisdictions such as the United States suggests an extension of the regulatory approach could be a more effective and politically acceptable means of achieving adequately rapid CCS deployment than conventional carbon taxes or cap-and-trade systems.

  13. Geological and historical evidence of irregular recurrent earthquakes in Japan.

    Science.gov (United States)

    Satake, Kenji

    2015-10-28

    Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres releasing strains accumulated from decades to centuries of plate motions. Assuming a simple 'characteristic earthquake' model that similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes including geological traces from giant (M∼9) earthquakes indicate a variety of size and recurrence interval of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that average recurrence interval of great earthquakes is approximately 100 years, but the tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence interval of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts such as use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone are being conducted by government committees. © 2015 The Author(s).

  14. Two critical tests for the Critical Point earthquake

    Science.gov (United States)

    Tzanis, A.; Vallianatos, F.

    2003-04-01

    It has been credibly argued that the earthquake generation process is a critical phenomenon culminating with a large event that corresponds to some critical point. In this view, a great earthquake represents the end of a cycle on its associated fault network and the beginning of a new one. The dynamic organization of the fault network evolves as the cycle progresses and a great earthquake becomes more probable, thereby rendering possible the prediction of the cycle's end by monitoring the approach of the fault network toward a critical state. This process may be described by a power-law time-to-failure scaling of the cumulative seismic release rate. Observational evidence has confirmed the power-law scaling in many cases and has empirically determined that the critical exponent in the power law is typically of the order n=0.3. There are also two theoretical predictions for the value of the critical exponent. Ben-Zion and Lyakhovsky (Pure appl. geophys., 159, 2385-2412, 2002) give n=1/3. Rundle et al. (Pure appl. geophys., 157, 2165-2182, 2000) show that the power-law activation associated with a spinodal instability is essentially identical to the power-law acceleration of Benioff strain observed prior to earthquakes; in this case n=0.25. More recently, the critical-point (CP) model has gained support from the development of more dependable models of regional seismicity with realistic fault geometry that show accelerating seismicity before large events. Essentially, these models involve stress transfer to the fault network during the cycle such that the region of accelerating seismicity will scale with the size of the culminating event, as for instance in Bowman and King (Geophys. Res. Let., 38, 4039-4042, 2001). It is thus possible to understand the observed characteristics of distributed accelerating seismicity in terms of a simple process of increasing tectonic stress in a region already subjected to stress inhomogeneities at all scale lengths. Then, the region of
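
    For reference, the power-law time-to-failure scaling referred to above is commonly written as (a standard Bufe-Varnes-type form; the notation is generic rather than taken from this abstract):

        \Omega(t) = A - B\,(t_c - t)^{n},   with B > 0 and 0 < n < 1,

    where \Omega(t) is the cumulative Benioff strain (seismic release), t_c is the failure time of the culminating event, and the two theoretical predictions quoted above correspond to n = 1/3 and n = 0.25.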

  15. Radon anomalies prior to earthquakes (2). Atmospheric radon anomaly observed before the Hyogoken-Nanbu earthquake

    International Nuclear Information System (INIS)

    Ishikawa, Tetsuo; Tokonami, Shinji; Yasuoka, Yumi; Shinogi, Masaki; Nagahama, Hiroyuki; Omori, Yasutaka; Kawada, Yusuke

    2008-01-01

    Before the 1995 Hyogoken-Nanbu earthquake, various geochemical precursors were observed in the aftershock area: chloride ion concentration, groundwater discharge rate, groundwater radon concentration and so on. Kobe Pharmaceutical University (KPU) is located about 25 km northeast from the epicenter and within the aftershock area. Atmospheric radon concentration had been continuously measured from 1984 at KPU, using a flow-type ionization chamber. The radon concentration data were analyzed using the smoothed residual values which represent the daily minimum of radon concentration with the exclusion of normalized seasonal variation. The radon concentration (smoothed residual values) demonstrated an upward trend about two months before the Hyogoken-Nanbu earthquake. The trend can be well fitted to a log-periodic model related to earthquake fault dynamics. As a result of model fitting, a critical point was calculated to be between 13 and 27 January 1995, which was in good agreement with the occurrence date of earthquake (17 January 1995). The mechanism of radon anomaly before earthquakes is not fully understood. However, it might be possible to detect atmospheric radon anomaly as a precursor before a large earthquake, if (1) the measurement is conducted near the earthquake fault, (2) the monitoring station is located on granite (radon-rich) areas, and (3) the measurement is conducted for more than several years before the earthquake to obtain background data. (author)
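
    The log-periodic fit mentioned above is typically of the form (a commonly used parameterization; the authors' exact formulation may differ):

        C(t) = A + B\,(t_c - t)^{m}\left[1 + D\cos\bigl(\omega\ln(t_c - t) + \phi\bigr)\right],

    where C(t) is the smoothed residual radon concentration and t_c is the critical time, estimated in this study to fall between 13 and 27 January 1995.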

  16. Protecting your family from earthquakes: The seven steps to earthquake safety

    Science.gov (United States)

    Developed by American Red Cross, Asian Pacific Fund

    2007-01-01

    This book is provided here because of the importance of preparing for earthquakes before they happen. Experts say it is very likely there will be a damaging San Francisco Bay Area earthquake in the next 30 years and that it will strike without warning. It may be hard to find the supplies and services we need after this earthquake. For example, hospitals may have more patients than they can treat, and grocery stores may be closed for weeks. You will need to provide for your family until help arrives. To keep our loved ones and our community safe, we must prepare now. Some of us come from places where earthquakes are also common. However, the dangers of earthquakes in our homelands may be very different than in the Bay Area. For example, many people in Asian countries die in major earthquakes when buildings collapse or from big sea waves called tsunami. In the Bay Area, the main danger is from objects inside buildings falling on people. Take action now to make sure your family will be safe in an earthquake. The first step is to read this book carefully and follow its advice. By making your home safer, you help make our community safer. Preparing for earthquakes is important, and together we can make sure our families and community are ready. English version p. 3-13 Chinese version p. 14-24 Vietnamese version p. 25-36 Korean version p. 37-48

  17. Soil structure interactions of eastern U.S. type earthquakes

    International Nuclear Information System (INIS)

    Chang Chen; Serhan, S.

    1991-01-01

    Two types of earthquakes have occurred in the eastern US in the past. One was the infrequent major event, such as the 1811-1812 New Madrid earthquakes or the 1886 Charleston earthquake. The other was the frequent shallow earthquake with high frequency content, short duration and high accelerations. Two eastern US nuclear power plants, V.C. Summer and Perry, went through extensive licensing efforts to obtain fuel load licenses after earthquakes of this type were recorded on site and exceeded the design bases in the region beyond 10 hertz. This paper discusses the soil-structure interactions of the latter type of earthquake.

  18. Earthquakes - a danger to deep-lying repositories?

    International Nuclear Information System (INIS)

    2012-03-01

    This booklet, issued by the Swiss National Cooperative for the Disposal of Radioactive Waste NAGRA, looks at geological factors concerning earthquakes and the safety of deep-lying repositories for nuclear waste. The geological processes involved in the occurrence of earthquakes are briefly reviewed, and the definitions of earthquake magnitude and intensity are discussed. Examples of damage caused by earthquakes are given. The earthquake situation in Switzerland is examined, and the effects of earthquakes on sub-surface structures and deep-lying repositories are discussed. Finally, the concepts proposed for deep-lying geological repositories for nuclear waste are presented.

  19. The earthquake problem in engineering design: generating earthquake design basis information

    International Nuclear Information System (INIS)

    Sharma, R.D.

    1987-01-01

    Designing earthquake-resistant structures requires certain design inputs specific to the seismotectonic status of the region in which a critical facility is to be located. Generating these inputs requires collecting earthquake-related information using present-day techniques in seismology and geology, and processing the collected information to integrate it into a consolidated picture of the seismotectonics of the region. The earthquake problem in engineering design is outlined in the context of the seismic design of nuclear power plants vis-a-vis current state-of-the-art techniques. The extent to which the accepted procedures for assessing seismic risk in the region and generating the design inputs have been adhered to determines, to a great extent, the safety of the structures against future earthquakes. The document is a step towards developing an approach for generating these inputs, which form the earthquake design basis. (author)

  20. Consideration for standard earthquake vibration (1). The Niigataken Chuetsu-oki Earthquake in 2007

    International Nuclear Information System (INIS)

    Ishibashi, Katsuhiko

    2007-01-01

    An outline of the new guideline for the seismic design of nuclear power plants and of its standard earthquake vibration is given. The points of improvement in the new guideline are discussed on the basis of the Kashiwazaki-Kariwa Nuclear Power Plant incident, and the fundamental limitations of the new guideline are pointed out. The positioning of the seismic design standard for nuclear power plants, JEAG4601 of the Japan Electric Association, the new guideline, the standard earthquake vibration of the new guideline, the 2007 Niigataken Chuetsu-oki Earthquake and the damage to the Kashiwazaki-Kariwa Nuclear Power Plant are discussed. The safety review system, organization, standards and guidelines should be improved on the basis of this earthquake and the nuclear plant incident. The generally accepted principle that a nuclear power plant should not be constructed in an area where a large earthquake is expected has to be put into practice, and the precondition must be that nuclear power plants cause no damage whatsoever. (S.Y.)

  1. Earthquake response of inelastic structures

    International Nuclear Information System (INIS)

    Parulekar, Y.M.; Vaity, K.N.; Reddy, G.R.; Vaze, K.K.; Kushwaha, H.S.

    2004-01-01

    The most commonly used method in the seismic analysis of structures is the response spectrum method. For seismic re-evaluation of existing facilities, the elastic response spectrum method cannot be used directly, as large deformation above yield may be observed under the Safe Shutdown Earthquake (SSE). The plastic deformation, i.e. the hysteretic behaviour of various elements of the structure, causes dissipation of energy. Hence the damping values given by the code, which do not account for hysteretic energy dissipation, cannot be used directly. In this paper, appropriate damping values are evaluated for 5-storey, 10-storey and 15-storey shear beam structures, which deform beyond their yield limit. Linear elastic analysis is performed for the same structures using these damping values, and the storey forces are compared with those obtained using inelastic time history analysis. A damping model, which relates the ductility of the structure to damping, is developed. Using this damping model, a practical structure is analysed and the results are compared with inelastic time history analysis; the comparison is found to be good.
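
    The paper's damping-ductility model is not reproduced in this record; as a rough illustration of how displacement ductility maps to an equivalent viscous damping ratio, the classic equivalent-linearization result for an elastic-perfectly-plastic oscillator can be sketched (a textbook relation, not the authors' model; Python standard library only).

        from math import pi

        def equivalent_damping(mu, zeta_elastic=0.05):
            # For an elastic-perfectly-plastic loop at ductility mu:
            #   E_D = 4 * F_y * u_y * (mu - 1)   energy dissipated per cycle
            #   E_S = 0.5 * F_y * u_y * mu       strain energy at peak (secant stiffness)
            #   zeta_hyst = E_D / (4 * pi * E_S) = (2 / pi) * (1 - 1 / mu)
            if mu <= 1.0:
                return zeta_elastic              # response remains elastic
            return zeta_elastic + (2.0 / pi) * (1.0 - 1.0 / mu)

        for mu in (1.0, 2.0, 4.0):
            print("ductility %.0f -> zeta_eq = %.3f" % (mu, equivalent_damping(mu)))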

  2. Crowd-Sourced Global Earthquake Early Warning

    Science.gov (United States)

    Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.

    2014-12-01

    Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
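
    A toy version of the sensor-fusion step described above (a Kalman filter combining noisy GNSS displacement with accelerometer data) is sketched below for one dimension; all noise levels and names are invented for the illustration (Python, numpy assumed).

        import numpy as np

        def fuse_gnss_accel(gnss_disp, accel, dt=0.1, sig_gnss=0.5, sig_acc=0.05):
            # State x = [displacement, velocity]; the accelerometer drives the
            # prediction and the noisy GNSS displacement is the measurement.
            F = np.array([[1.0, dt], [0.0, 1.0]])
            B = np.array([0.5 * dt**2, dt])
            H = np.array([[1.0, 0.0]])
            Q = sig_acc**2 * np.outer(B, B)
            R = np.array([[sig_gnss**2]])
            x, P, out = np.zeros(2), np.eye(2), []
            for z, a in zip(gnss_disp, accel):
                x = F @ x + B * a                  # predict with acceleration
                P = F @ P @ F.T + Q
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
                x = x + (K @ (z - H @ x)).ravel()  # correct with GNSS
                P = (np.eye(2) - K @ H) @ P
                out.append(x[0])
            return np.array(out)

        # Synthetic test: a 10 cm static offset arriving mid-record.
        t = np.arange(0.0, 30.0, 0.1)
        true = np.where(t > 15.0, 0.10, 0.0)
        rng = np.random.default_rng(0)
        gnss = true + rng.normal(0.0, 0.5, t.size)        # noisy C/A-code GNSS
        accel = np.gradient(np.gradient(true, 0.1), 0.1)  # implied acceleration
        print("final estimate (m):", round(float(fuse_gnss_accel(gnss, accel)[-1]), 3))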

  3. Using remote sensing to predict earthquake impacts

    Science.gov (United States)

    Fylaktos, Asimakis; Yfantidou, Anastasia

    2017-09-01

    Natural hazards like earthquakes can result in enormous property damage and human casualties in mountainous areas. Italy has always been exposed to numerous earthquakes, mostly concentrated in its central and southern regions. Last year, two seismic events near Norcia (central Italy) occurred, which led to substantial loss of life and extensive damage to properties, infrastructure and cultural heritage. This research utilizes remote sensing products and GIS software to provide a database of information. We used both SAR images from Sentinel-1A and optical imagery from Landsat 8 to examine differences in topography with the aid of the multi-temporal monitoring technique, which is well suited to observing surface deformation. The database clusters information on the consequences of the earthquakes into groups, such as property and infrastructure damage, regional rifts, cultivation loss, landslides and surface deformations, among others, all mapped in GIS software. Relevant organizations can use these data to calculate the financial impact of such earthquakes. In the future, we can enrich this database with more regions and enhance the variety of its applications. For instance, we could predict the future impacts of any type of earthquake in several areas, and design a preliminary model of emergency response for immediate evacuation and quick recovery. It is important to know how the surface moves in particular geographical regions like Italy, Cyprus and Greece, where earthquakes are so frequent. We are not able to predict earthquakes, but using data from this research, we may assess the damage that could be caused in the future.

  4. Fractals and Forecasting in Earthquakes and Finance

    Science.gov (United States)

    Rundle, J. B.; Holliday, J. R.; Turcotte, D. L.

    2011-12-01

    It is now recognized that Benoit Mandelbrot's fractals play a critical role in describing a vast range of physical and social phenomena. Here we focus on two systems, earthquakes and finance. Since 1942, earthquakes have been characterized by the Gutenberg-Richter magnitude-frequency relation, which in more recent times is often written as a moment-frequency power law. A similar relation can be shown to hold for financial markets. Moreover, a recent New York Times article, titled "A Richter Scale for the Markets" [1], summarized the emerging viewpoint that stock market crashes can be described with ideas similar to those used for large and great earthquakes. The idea that stock market crashes can be related in any way to earthquake phenomena has its roots in Mandelbrot's 1963 work on speculative prices in commodities markets such as cotton [2]. He pointed out that Gaussian statistics did not account for the excessive number of booms and busts that characterize such markets. Here we show that earthquakes and financial crashes can both be described by a common Landau-Ginzburg-type free energy model involving the presence of a classical limit of stability, or spinodal. These metastable systems are characterized by fractal statistics near the spinodal. For earthquakes, the independent ("order") parameter is the slip deficit along a fault, whereas for the financial markets it is the financial leverage in place. For financial markets, asset values play the role of a free energy. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In the case of financial models, the probabilities are closely related to implied volatility, an important component of Black-Scholes models for stock valuations. [2] B. Mandelbrot, The variation of certain speculative prices, J. Business, 36, 294 (1963)
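
    For reference, the Gutenberg-Richter magnitude-frequency relation mentioned above is usually written as

        \log_{10} N(\geq M) = a - bM,

    where N(\geq M) is the cumulative number of earthquakes of magnitude at least M and b is typically close to 1; rewritten in terms of seismic moment M_0, it becomes the moment-frequency power law N(\geq M_0) \propto M_0^{-2b/3} referred to in the abstract.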

  5. Earthquake engineering development before and after the March 4, 1977, Vrancea, Romania earthquake

    International Nuclear Information System (INIS)

    Georgescu, E.-S.

    2002-01-01

    At 25 years since the Vrancea earthquake of March 4, 1977, we can analyze in an open and critical way its impact on the evolution of earthquake engineering codes and protection policies in Romania. The earthquake (M_GR = 7.2; M_w = 7.5) produced 1,570 casualties and more than 11,300 injured persons (90% of the victims in Bucharest); seismic losses were estimated at more than USD 2 billion. The 1977 earthquake represented a significant episode of the 20th century in the seismic zones of Romania and neighbouring countries. The INCERC seismic record of March 4, 1977 revealed, for the first time, the spectral content of long-period seismic motions of Vrancea earthquakes, the duration, the number of cycles and the values of actual accelerations, with important overloading effects on flexible structures. The seismic coefficients k_s, the spectral curve (the dynamic coefficient β_r), the seismic zonation map and the requirements in the antiseismic design norms were drastically changed, the microzonation maps of the time ceased to be used, and the specific Vrancea earthquake recurrence was reconsidered on the basis of hazard studies. Thus, the paper emphasises: - the existing engineering knowledge, earthquake code and zoning map requirements until 1977, as well as seismological and structural lessons since 1977; - recent aspects of the implementation of the Earthquake Code P.100/1992 and its harmonization with Eurocodes, in conjunction with the specifics of urban and rural seismic risk and enforcement policies on the strengthening of existing buildings; - a strategic view of disaster prevention, using earthquake scenarios and loss assessments, insurance, earthquake education and training; - the need for a closer transfer of knowledge between seismologists, engineers and officials in charge of disaster prevention public policies. (author)

  6. Links Between Earthquake Characteristics and Subducting Plate Heterogeneity in the 2016 Pedernales Ecuador Earthquake Rupture Zone

    Science.gov (United States)

    Bai, L.; Mori, J. J.

    2016-12-01

    The collision between the Indian and Eurasian plates formed the Himalayas, the largest orogenic belt on the Earth. The entire region accommodates shallow earthquakes, while intermediate-depth earthquakes are concentrated at the eastern and western Himalayan syntaxes. Here we investigate the focal depths, fault plane solutions, and source rupture processes for three earthquake sequences, located in the western, central and eastern regions of the Himalayan orogenic belt. The Pamir-Hindu Kush region is located at the western Himalayan syntaxis and is characterized by extreme shortening of the upper crust and strong interaction of various layers of the lithosphere. Many shallow earthquakes occur on the Main Pamir Thrust at focal depths shallower than 20 km, while intermediate-depth earthquakes are mostly located below 75 km. Large intermediate-depth earthquakes occur frequently at the western Himalayan syntaxis, about every 10 years on average. The 2015 Nepal earthquake is located in the central Himalayas. It is a typical megathrust earthquake that occurred on the shallow portion of the Main Himalayan Thrust (MHT). Many of the aftershocks are located above the MHT and illuminate faulting structures in the hanging wall with dip angles that are steeper than the MHT. These observations provide new constraints on the collision and uplift processes for the Himalayan orogenic belt. The Indo-Burma region is located south of the eastern Himalayan syntaxis, where the strike of the plate boundary suddenly changes from nearly east-west at the Himalayas to nearly north-south at the Burma Arc. The Burma arc subduction zone is a typical oblique plate convergence zone. The eastern boundary is the north-south striking dextral Sagaing fault, which hosts many shallow earthquakes with focal depths less than 25 km. In contrast, intermediate-depth earthquakes along the subduction zone reflect east-west trending reverse faulting.

  7. Earthquake damage to underground facilities and earthquake related displacement fields

    International Nuclear Information System (INIS)

    Pratt, H.R.; Stephenson, D.E.; Zandt, G.; Bouchon, M.; Hustrulid, W.A.

    1982-01-01

    The potential seismic risk for an underground facility is considered in the evaluation of its location and design. The possible damage resulting from either large-scale displacements or high accelerations should be considered in evaluating potential sites of underground facilities. Scattered through the available literature are statements to the effect that, below a few hundred meters, shaking and damage in mines are less than at the surface; however, data for decreased damage underground have not been completely reported or explained. In order to assess the seismic risk for an underground facility, a data base was established and analyzed to evaluate the potential for seismic disturbance. Substantial damage to underground facilities is usually the result of displacements primarily along pre-existing faults and fractures, or at the surface entrance to these facilities; evidence of this comes from earthquake observations, and the behavior of damage as a function of depth is therefore important in the evaluation of the hazard to underground facilities. To evaluate potential displacements due to seismic effects of block motions along pre-existing or induced fractures, the displacement fields surrounding two types of faults were investigated. Analytical models were used to determine relative displacements of shafts and near-surface displacement of large rock masses. Numerical methods were used to determine the displacement fields associated with pure strike-slip and vertical normal faults. Results are presented as displacements for various fault lengths as a function of depth and distance. This provides input to determine potential displacements in terms of depth and distance for underground facilities, important for assessing potential sites and design parameters.

  8. Long-term change of activity of very low-frequency earthquakes in southwest Japan

    Science.gov (United States)

    Baba, S.; Takeo, A.; Obara, K.; Kato, A.; Maeda, T.; Matsuzawa, T.

    2017-12-01

    On the plate interface near the seismogenic zone of megathrust earthquakes, various types of slow earthquakes have been detected, including non-volcanic tremors, slow slip events (SSEs) and very low-frequency earthquakes (VLFEs). VLFEs are classified into deep VLFEs, which occur on the downdip side of the seismogenic zone, and shallow VLFEs, which occur on the updip side, i.e. at several kilometres depth in southwest Japan. As a member of the slow earthquake family, VLFE activity is expected to be a proxy for interplate slip, because VLFEs have the same mechanisms as interplate slip and are detected during episodic tremor and slip (ETS). However, the long-term change of VLFE seismicity has not been well constrained compared to deep low-frequency tremor. We thus studied long-term changes in the activity of VLFEs in southwest Japan, where ETS and long-term SSEs have been most intensive. We used continuous seismograms of F-net broadband seismometers operated by NIED from April 2004 to March 2017. After applying a band-pass filter with a frequency range of 0.02-0.05 Hz, we adopted the matched-filter technique to detect VLFEs. We prepared templates by calculating synthetic waveforms for each hypocenter grid, assuming typical focal mechanisms of VLFEs. The correlation coefficients between templates and continuous F-net seismograms were calculated at each grid every 1 s in all components. The grid interval is 0.1 degree in both longitude and latitude. A VLFE was detected as an event if the average of the correlation coefficients exceeded the detection threshold, defined as eight times the median absolute deviation of the distribution. At grids in the Bungo channel, where long-term SSEs occurred frequently, the cumulative number of detected VLFEs increases rapidly in 2010 and 2014, modulated by stress loading from the long-term SSEs. At inland grids near the Bungo channel, the cumulative number increases steeply every half a year. This stepwise
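
    A minimal sketch of the matched-filter detection step described above: normalized cross-correlation of a template against continuous data, with detections declared where the correlation exceeds a threshold tied to the median absolute deviation (here median + 8 x MAD, a slight variant of the abstract's 8 x MAD rule). The data are synthetic and all names are illustrative (Python, numpy assumed).

        import numpy as np

        def matched_filter_detect(data, template, k_mad=8.0):
            # Slide the template along the data and compute the Pearson
            # correlation coefficient at each lag.
            n = len(template)
            t = (template - template.mean()) / template.std()
            cc = np.empty(len(data) - n + 1)
            for i in range(len(cc)):
                win = data[i:i + n]
                s = win.std()
                cc[i] = np.dot(win - win.mean(), t) / (n * s) if s > 0 else 0.0
            mad = np.median(np.abs(cc - np.median(cc)))
            return cc, np.flatnonzero(cc > np.median(cc) + k_mad * mad)

        # Toy data: a low-frequency wavelet buried in noise at sample 2000.
        rng = np.random.default_rng(0)
        tmpl = np.sin(2.0 * np.pi * 0.03 * np.arange(100.0))  # ~0.03 Hz at 1 Hz sampling
        data = rng.normal(0.0, 1.0, 5000)
        data[2000:2100] += 3.0 * tmpl
        cc, det = matched_filter_detect(data, tmpl)
        print("detections near samples:", det[:5])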

  9. Characterizing the structural maturity of fault zones using high-resolution earthquake locations.

    Science.gov (United States)

    Perrin, C.; Waldhauser, F.; Scholz, C. H.

    2017-12-01

    We use high-resolution earthquake locations to characterize the three-dimensional structure of active faults in California and how it evolves with fault structural maturity. We investigate the distribution of aftershocks of several recent large earthquakes that occurred on immature faults (i.e., slow moving and of small cumulative displacement), such as the 1992 (Mw7.3) Landers and 1999 (Mw7.1) Hector Mine events, and of earthquakes that occurred on mature faults, such as the 1984 (Mw6.2) Morgan Hill and 2004 (Mw6.0) Parkfield events. Unlike previous studies, which typically estimated the width of fault zones from the distribution of earthquakes perpendicular to the surface fault trace, we resolve fault zone widths with respect to the 3D fault surface estimated from principal component analysis of local seismicity. We find that the zone of brittle deformation around the fault core is narrower along mature faults than along immature faults. We observe a rapid fall-off of the number of events at a distance of 70-100 m from the main fault surface of mature faults (140-200 m fault zone width), and of 200-300 m from the fault surface of immature faults (400-600 m fault zone width). These observations are in good agreement with fault zone widths estimated from guided waves trapped in low-velocity damage zones. The total width of the active zone of deformation surrounding the main fault plane reaches 1.2 km for mature faults and 2-4 km for immature faults. The wider zone of deformation presumably reflects the increased heterogeneity of the stress field along the complex and discontinuous fault strands that make up immature faults. In contrast, narrower deformation zones tend to align with the well-defined fault planes of mature faults, where most of the deformation is concentrated. Our results are in line with previous studies suggesting that surface fault traces become smoother, and thus fault zones simpler, as cumulative fault slip increases.
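
    The geometric step described above (estimating the fault surface from relocated seismicity by principal component analysis and measuring fault-normal distances) might be sketched as follows; the hypocentre cloud is synthetic and the 95% width criterion is an arbitrary illustration (Python, numpy assumed).

        import numpy as np

        def fault_normal_distances(xyz):
            # PCA via SVD: the two leading principal axes span the fault
            # surface; the least-variance axis is its normal.
            c = xyz.mean(axis=0)
            _, _, vt = np.linalg.svd(xyz - c, full_matrices=False)
            return (xyz - c) @ vt[2]

        # Synthetic aftershock cloud: a vertical N-S fault, 150 m normal scatter.
        rng = np.random.default_rng(0)
        n = 2000
        cloud = np.column_stack([
            rng.normal(0.0, 0.15, n),     # fault-normal scatter (km)
            rng.uniform(-20.0, 20.0, n),  # along strike (km)
            rng.uniform(2.0, 12.0, n),    # depth (km)
        ])
        d = fault_normal_distances(cloud)
        print("width holding 95%% of events: %.0f m" % (2.0 * np.percentile(np.abs(d), 95) * 1000.0))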

  10. Clinical characteristics of patients with seizure following the 2016 Kumamoto earthquake.

    Science.gov (United States)

    Inatomi, Yuichiro; Nakajima, Makoto; Yonehara, Toshiro; Ando, Yukio

    2017-06-01

    To investigate the clinical characteristics of patients with seizure following the 2016 Kumamoto earthquake, we retrospectively studied patients with seizure admitted to our hospital for 12 weeks following the earthquake. We compared the clinical backgrounds and characteristics of the patients before (the same period in the previous 3 years) and after the earthquake, and between the early (first 2 weeks) and late (subsequent 10 weeks) phases. A total of 60 patients with seizure were admitted to the emergency room after the earthquake, while 175 (58.3/year) patients had been admitted before the earthquake. Of them, 35 patients with seizure were hospitalized in the Department of Neurology after the earthquake, versus 96 (32/year) patients before the earthquake. In patients after the earthquake, males and non-cerebrovascular diseases as the epileptogenic disease were seen more frequently than before the earthquake. During the early phase after the earthquake, female, first-attack, and non-focal-type patients were seen more frequently than during the late phase. These characteristics of patients with seizure during the early phase after the earthquake suggest that many patients had non-epileptic seizures. To prevent seizures following earthquakes, the mental stress and physical status of evacuees must be assessed. Copyright © 2017. Published by Elsevier Ltd.

  11. Prevention of strong earthquakes: Goal or utopia?

    Science.gov (United States)

    Mukhamediev, Sh. A.

    2010-11-01

    In the present paper, we consider ideas suggesting various kinds of industrial impact on a close-to-failure block of the Earth's crust in order to break a pending strong earthquake (PSE) into a number of smaller quakes or aseismic slips. Among the published proposals on the prevention of a forthcoming strong earthquake, methods based on water injection and vibro-influence merit greater attention, as they are based on field observations and the results of laboratory tests. In spite of this, the cited proofs are, for various reasons, insufficient to acknowledge the proposed techniques as highly substantiated; in addition, the physical essence of these methods has still not been fully understood. First, the key concept of the methods, namely, the release of the accumulated stresses (or excessive elastic energy) in the source region of a forthcoming strong earthquake, is open to objection. If we treat an earthquake as a phenomenon of loss of stability, then the heterogeneities of the physicomechanical properties and stresses along the existing fault or its future trajectory, rather than the absolute values of stresses, play the most important role. In the present paper, this statement is illustrated by the classical examples of stable and unstable fractures and by examples of the calculated stress fields that were realized in the source regions of the tsunamigenic earthquakes of December 26, 2004 near Sumatra Island and of September 29, 2009 near Samoa Island. Here, just before the earthquakes, there were no excessive stresses in the source regions. Quite the opposite: the maximum shear stresses τ_max were close to their minimum value, compared to τ_max in the adjacent territory. In the present paper, we provide quantitative examples that falsify the theory of the prevention of PSE in its current form. It is shown that the measures for the prevention of PSE, even when successful for an already existing fault, can trigger or accelerate a catastrophic

  12. The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China

    Science.gov (United States)

    Chen, Q.; Wang, K.

    2009-12-01

    Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck the Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save life in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

  13. Where was the 1898 Mare Island Earthquake? Insights from the 2014 South Napa Earthquake

    Science.gov (United States)

    Hough, S. E.

    2014-12-01

    The 2014 South Napa earthquake provides an opportunity to reconsider the Mare Island earthquake of 31 March 1898, which caused severe damage to buildings at a Navy yard on the island. Revisiting archival accounts of the 1898 earthquake, I estimate a lower intensity magnitude, 5.8, than the value in the current Uniform California Earthquake Rupture Forecast (UCERF) catalog (6.4). However, I note that intensity magnitude can differ from Mw by upwards of half a unit depending on stress drop, which for a historical earthquake is unknowable. In the aftermath of the 2014 earthquake, there has been speculation that the apparently severe effects on Mare Island in 1898 were due to the vulnerability of local structures. No surface rupture has ever been identified from the 1898 event, which is commonly associated with the Hayward-Rodgers Creek fault system, some 10 km west of Mare Island (e.g., Parsons et al., 2003). Reconsideration of detailed archival accounts of the 1898 earthquake, together with a comparison of the intensity distributions for the two earthquakes, points to genuinely severe, likely near-field ground motions on Mare Island. The 2014 earthquake did cause significant damage to older brick buildings on Mare Island, but the level of damage does not match the severity of documented damage in 1898. The high-intensity fields for the two earthquakes are, moreover, spatially shifted, with the centroid of the 2014 distribution near the town of Napa and that of the 1898 distribution near Mare Island, east of the Hayward-Rodgers Creek system. I conclude that the 1898 Mare Island earthquake was centered on or near Mare Island, possibly involving rupture of one or both strands of the Franklin fault, a low-slip-rate fault sub-parallel to the Rodgers Creek fault to the west and the West Napa fault to the east. I estimate Mw5.8 assuming an average stress drop; the data are also consistent with Mw6.4 if the stress drop was a factor of ≈3 lower than average for California earthquakes. I

  14. Tokai earthquakes and Hamaoka Nuclear Power Station

    International Nuclear Information System (INIS)

    Komura, Hiroo

    1981-01-01

    Kanto district and Shizuoka Prefecture are designated as ''observation strengthening districts'', where the possibility of earthquake occurrence is high. The Hamaoka Nuclear Power Station of the Chubu Electric Power Co., Inc. lies at the center of this district. Nuclear power stations are vulnerable to earthquakes, and if a nuclear power plant is damaged by an earthquake, a most dreadful accident may occur. The Chubu Electric Power Co. underestimates the possibility and scale of earthquakes and the estimated damage, and has maintained that the bedrock of the power station site is strong and that there is no fear of accidents. However, the actual situation is totally different. The description of earthquakes and the bedrock in the application for the installation of the No. 3 plant was totally rewritten after two years of safety examination, and the Ministry of International Trade and Industry approved the application less than two weeks thereafter. The bedrock is evaluated geologically in this paper, and many doubtful points in the application are pointed out. In addition, there are eight active faults near the power station site. The aseismic design of the Hamaoka Nuclear Power Station assumes accelerations up to 400 gal, which may not be enough. The Hamaoka Nuclear Power Station is intentionally neglected in the estimate of earthquake damage for Shizuoka Prefecture. (Kako, I.)

  15. Real-time earthquake data feasible

    Science.gov (United States)

    Bush, Susan

    Scientists agree that early warning devices and monitoring of both Hurricane Hugo and the Mt. Pinatubo volcanic eruption saved thousands of lives. What would it take to develop this sort of early warning and monitoring system for earthquake activity? Not all that much, claims a panel assigned to study the feasibility, costs, and technology needed to establish a real-time earthquake monitoring (RTEM) system. The panel, drafted by the National Academy of Science's Committee on Seismology, has presented its findings in Real-Time Earthquake Monitoring. The recently released report states that "present technology is entirely capable of recording and processing data so as to provide real-time information, enabling people to mitigate somewhat the earthquake disaster." RTEM systems would consist of two parts—an early warning system that would give a few seconds warning before severe shaking, and immediate postquake information within minutes of the quake that would give actual measurements of the magnitude. At this time, however, this type of warning system has not been addressed at the national level for the United States and is not included in the National Earthquake Hazard Reduction Program, according to the report.

  16. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming to place the earthquake phenomenon within the framework of a unified theory able to explain the causes of its genesis and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) and of a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new and strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  17. Roaming earthquakes in China highlight midcontinental hazards

    Science.gov (United States)

    Liu, Mian; Wang, Hui

    2012-11-01

    Before dawn on 28 July 1976, a magnitude (M) 7.8 earthquake struck Tangshan, a Chinese industrial city only 150 kilometers from Beijing (Figure 1a). In a brief moment, the earthquake destroyed the entire city and killed more than 242,000 people [Chen et al., 1988]. More than 30 years have passed, and upon the ruins a new Tangshan city has been built. However, the memory of devastation remains fresh. For this reason, a sequence of recent small earthquakes in the Tangshan region, including an M 4.8 event on 28 May and an M 4.0 event on 18 June 2012, has caused widespread concerns and heated debate in China. In the science community, the debate is whether the recent Tangshan earthquakes are the aftershocks of the 1976 earthquake despite the long gap in time since the main shock or harbingers of a new period of active seismicity in Tangshan and the rest of North China, where seismic activity seems to fluctuate between highs and lows over periods of a few decades [Ma, 1989].

  18. Electrostatically actuated resonant switches for earthquake detection

    KAUST Repository

    Ramini, Abdallah H.

    2013-04-01

    The modeling and design of electrostatically actuated resonant switches (EARS) for earthquake and seismic applications are presented. The basic concept is to operate an electrically actuated resonator close to instability bands of frequency, where it is forced to collapse (pull-in) if operated within these bands. By careful tuning, the resonator can be made to enter the instability zone upon detection of the earthquake signal, thereby pulling in as a switch. Such a switching action can be harnessed for useful functionalities, such as shutting off gas pipelines in the case of earthquakes, or can be used to activate a network of sensors for seismic activity recording in health monitoring applications. By placing a resonator on a printed circuit board (PCB) with a natural frequency close to that of the earthquake's dominant frequency, we show a significant improvement in the detection limit of the EARS, lowering it to less than 60% of that of the EARS alone without the PCB. © 2013 IEEE.
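
    For context, the pull-in instability exploited here is, for an idealized parallel-plate electrostatic actuator, governed by the classic static pull-in voltage (a textbook result, not taken from this paper):

        V_{PI} = \sqrt{\dfrac{8\,k\,d_0^{3}}{27\,\varepsilon_0 A}},

    where k is the effective stiffness, d_0 the initial gap, A the electrode area and \varepsilon_0 the permittivity of free space; biasing the resonator near this threshold lets a small seismic input drive it into collapse, closing the switch.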

  19. Earthquake risk assessment of building structures

    International Nuclear Information System (INIS)

    Ellingwood, Bruce R.

    2001-01-01

    During the past two decades, probabilistic risk analysis tools have been applied to assess the performance of new and existing building structural systems. Structural design and evaluation of buildings and other facilities with regard to their ability to withstand the effects of earthquakes requires special considerations that are not normally a part of such evaluations for other occupancy, service and environmental loads. This paper reviews some of these special considerations, specifically as they pertain to probability-based codified design and reliability-based condition assessment of existing buildings. Difficulties experienced in implementing probability-based limit states design criteria for earthquake are summarized. Comparisons of predicted and observed building damage highlight the limitations of using current deterministic approaches for post-earthquake building condition assessment. The importance of inherent randomness and modeling uncertainty in forecasting building performance is examined through a building fragility assessment of a steel frame with welded connections that was damaged during the Northridge Earthquake of 1994. The prospects for future improvements in earthquake-resistant design procedures based on a more rational probability-based treatment of uncertainty are examined

  20. Metrics for comparing dynamic earthquake rupture simulations

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2014-01-01

    Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near-fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes' results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code's results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
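
    The paper's metrics themselves are not reproduced in this record; as a generic illustration of the idea, agreement between two codes' rupture-time fields on a shared fault grid can be quantified with a normalized RMS misfit (the metric choice and all names below are illustrative, not the authors'; Python, numpy assumed).

        import numpy as np

        def rupture_time_misfit(t_a, t_b):
            # Normalized RMS difference between two rupture-time fields
            # (2-D arrays on the same fault grid, in seconds).
            rms = np.sqrt(np.mean((t_a - t_b) ** 2))
            scale = t_a.std()                # variability of the reference field
            return rms / scale if scale > 0 else np.inf

        # Toy fields: a circular rupture at 3.00 km/s vs. a 2% faster code.
        x, z = np.meshgrid(np.linspace(-15, 15, 61), np.linspace(-7.5, 7.5, 31))
        r = np.hypot(x, z)
        print("normalized misfit: %.3f" % rupture_time_misfit(r / 3.0, r / 3.06))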