WorldWideScience

Sample records for earthquake simulation network

  1. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour, with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour ... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results. ...

  2. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour, with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour ... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (USA). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results. ...

  3. Centrality in earthquake multiplex networks

    Science.gov (United States)

    Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.

    2018-06-01

Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
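As a rough illustration of the kind of centrality analysis the abstract describes (for a single layer only, not the full multiplex), the sketch below bins epicenters into grid cells, links the cells of consecutive events, and computes eigenvector centrality by power iteration. The cell size and the toy catalog are invented for the example.

```python
import math
from collections import defaultdict

def build_cell_network(events, cell=1.0):
    """Map a sequence of (lat, lon) epicenters to grid cells (nodes) and
    link the cells of consecutive events, as in cell-based earthquake networks."""
    adj = defaultdict(set)
    prev = None
    for lat, lon in events:
        node = (int(lat // cell), int(lon // cell))
        if prev is not None and prev != node:
            adj[prev].add(node)
            adj[node].add(prev)
        prev = node
    return adj

def eigenvector_centrality(adj, iters=200):
    """Power iteration on the adjacency structure; returns a centrality per node."""
    nodes = list(adj)
    x = {n: 1.0 for n in nodes}
    for _ in range(iters):
        y = {n: sum(x[m] for m in adj[n]) for n in nodes}
        norm = math.sqrt(sum(v * v for v in y.values())) or 1.0
        x = {n: v / norm for n, v in y.items()}
    return x

# toy epicenter sequence; repeated visits create the connectivity structure
events = [(35.1, 51.2), (35.9, 50.8), (35.1, 51.2), (36.4, 52.1), (35.9, 50.8)]
c = eigenvector_centrality(build_cell_network(events))
most_active = max(c, key=c.get)   # cell singled out by the centrality measure
```

In a multiplex treatment, one such layer would be built per time window and the centralities tracked across layers.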

  4. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    Science.gov (United States)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  5. Spatial Evaluation and Verification of Earthquake Simulators

    Science.gov (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
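The power-law smoothing idea can be sketched as follows: each simulated event contributes to every grid cell with a weight that decays with epicentral distance, and the contributions are normalized into a rate map. The kernel form (1 + r/d0)^(-q) and its parameters are illustrative stand-ins, not the calibrated ETAS kernel of the paper.

```python
import math

def power_law_rate_map(events, grid, d0=5.0, q=1.5):
    """Distribute each event's contribution over all grid cells with a
    power-law decay in epicentral distance, (1 + r/d0)**(-q); a toy
    stand-in for an ETAS-style spatial kernel (d0, q not fitted to data)."""
    rates = []
    for gx, gy in grid:
        total = 0.0
        for ex, ey in events:
            r = math.hypot(gx - ex, gy - ey)
            total += (1.0 + r / d0) ** (-q)
        rates.append(total)
    s = sum(rates)                     # normalize to a probability map
    return [v / s for v in rates]

grid = [(x, y) for x in range(0, 50, 10) for y in range(0, 50, 10)]
rate = power_law_rate_map([(12.0, 12.0), (13.0, 11.0)], grid)
```

A rate map like this, rather than the on-fault events alone, is what gets scored against off-fault observed epicenters in an ROC-style test.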

  6. Toward real-time regional earthquake simulation of Taiwan earthquakes

    Science.gov (United States)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  7. Network similarity and statistical analysis of earthquake seismic data

    OpenAIRE

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...
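The Jensen–Shannon divergence used above to cluster networks can be written down compactly. The sketch below applies it to toy normalized histograms standing in for graph spectra; the actual study computes it on the spectra of the underlying earthquake networks.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2, bounded in [0, 1]) between two
    discrete distributions; here applied to toy spectral histograms."""
    m = [(a + b) / 2.0 for a, b in zip(p, q)]
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# invented normalized spectral histograms of three hypothetical regions
region_a = [0.1, 0.4, 0.3, 0.2]
region_b = [0.1, 0.4, 0.3, 0.2]   # identical to region_a
region_c = [0.7, 0.1, 0.1, 0.1]   # structurally different
```

Hierarchical clustering then uses these pairwise divergences as the distance matrix: identical spectra have divergence 0 and group first.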

  8. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    Science.gov (United States)

    Pasten, D.

    2017-12-01

Earthquake complex networks have been shown to be able to find specific features in seismic data sets. In space, these networks have shown scale-free behaviour in the probability distribution of connectivity for directed networks, and small-world behaviour for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake in the north of Chile (near Iquique) in April 2014. An earthquake complex network is made by dividing the three-dimensional space into cubic cells; if one of these cells contains a hypocenter, we name that cell a node. The connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. We then have two different networks: a directed and an undirected network. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: we consider the connectivity ki of the i-th node to be the number of connections going out of node i plus the self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network. For undirected networks, we consider only whether two nodes are connected or not. We have built a directed complex network and an undirected complex network, before and after the large earthquake in Iquique. We have used magnitudes greater than Mw = 1.0 and Mw = 3.0. We found that this method can recognize the influence of these small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake on this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
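The construction just described is straightforward to sketch: bin hypocenters into cubic cells, draw a directed link from each event's cell to the next event's cell, count out-degrees including self-connections, then strip directions and self-loops for the undirected version. The cell size and catalog below are invented.

```python
from collections import Counter

def directed_cell_network(hypocenters, cell=10.0):
    """Bin 3-D hypocenters into cubic cells and connect each event's cell
    to the next event's cell in time. Out-degree counts include
    self-connections (two successive events in the same cell); the
    undirected network drops directions and self-loops."""
    nodes = [tuple(int(c // cell) for c in h) for h in hypocenters]
    edges = list(zip(nodes, nodes[1:]))        # time-ordered directed links
    k_out = Counter(src for src, _ in edges)   # connectivity k_i, self-loops included
    undirected = {frozenset(e) for e in edges if e[0] != e[1]}
    return k_out, undirected

# toy hypocenter sequence (x, y, z) in km
hypos = [(1, 2, 5), (1, 3, 5), (12, 2, 5), (12, 3, 6), (1, 2, 4)]
k_out, und = directed_cell_network(hypos)
```

Repeating the construction with different `cell` sizes is exactly the sensitivity check the abstract points to.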

  9. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  10. Urban MEMS based seismic network for post-earthquakes rapid disaster assessment

    Science.gov (United States)

    D'Alessandro, Antonino; Luzio, Dario; D'Anna, Giuseppe

    2014-05-01

worship. The waveforms recorded could be promptly used to determine ground-shaking parameters, such as peak ground acceleration/velocity/displacement and the Arias and Housner intensities, which could all be used to create, a few seconds after a strong earthquake, shaking maps at the urban scale. These shaking maps would make it possible to quickly identify the areas of the town center that felt the earthquake most strongly. When a strong seismic event occurs, the beginning of the ground motion observed at a site could be used to predict the ensuing ground motion at the same site, and thus to realize a short-term earthquake early warning system. The data acquired after a moderate-magnitude earthquake would provide valuable information for detailed seismic microzonation of the area based on direct earthquake shaking observations rather than on model-based or indirect methods. In this work, we evaluate the feasibility and effectiveness of such a seismic network, taking into account technological, scientific, and economic issues. For this purpose, we have simulated the creation of a MEMS-based urban seismic network in a medium-sized city. For the selected town, taking into account the instrumental specifications, the array geometry, and the environmental noise, we investigated the ability of the planned network to detect and measure earthquakes of different magnitudes generated from realistic nearby seismogenic sources.

  11. Testing the structure of earthquake networks from multivariate time series of successive main shocks in Greece

    Science.gov (United States)

    Chorozoglou, D.; Kugiumtzis, D.; Papadimitriou, E.

    2018-06-01

The seismic hazard assessment in the area of Greece is attempted by studying the structure of the earthquake network, e.g. whether it is small-world or random. In this network, a node represents a seismic zone in the study area and a connection between two nodes is given by the correlation of the seismic activity of the two zones. To investigate the network structure, and particularly the small-world property, the earthquake correlation network is compared with randomized ones. Simulations on multivariate time series of different lengths and numbers of variables show that, for the construction of randomized networks, the method that randomizes the time series performs better than methods that directly randomize the original network connections. Based on the appropriate randomization method, the network approach is applied to time series of earthquakes that occurred between main shocks in the territory of Greece spanning the period 1999-2015. The characterization of networks on sliding time windows revealed that small-world structure emerges in the last time interval, shortly before the main shock.
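The contrast between the two randomization routes can be sketched with a circular-shift surrogate: rotating each zone's activity series by an independent random offset preserves per-zone statistics while destroying cross-zone alignment, after which the correlation network is rebuilt and compared with the original. The threshold, toy series, and shift scheme here are illustrative assumptions, not the paper's exact procedure.

```python
import random

def pearson(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def correlation_network(series, thr=0.5):
    """Zones are nodes; link two zones when their activity correlates above thr."""
    n = len(series)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if pearson(series[i], series[j]) > thr}

def circular_shift_surrogate(series, rng):
    """Randomize the time series themselves (not the network links):
    rotate each zone's series by an independent random offset."""
    out = []
    for s in series:
        k = rng.randrange(1, len(s))
        out.append(s[k:] + s[:k])
    return out

rng = random.Random(1)
base = [(7 * t) % 13 for t in range(52)]           # toy common driver
zones = [[v + z for v in base] for z in range(4)]  # 4 perfectly co-varying zones
real_edges = correlation_network(zones)
surrogate_edges = correlation_network(circular_shift_surrogate(zones, rng))
```

Network statistics (clustering, path length) computed on many such surrogates give the null distribution against which the small-world claim is tested.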

  12. Preferential attachment in evolutionary earthquake networks

    Science.gov (United States)

    Rezaei, Soghra; Moghaddasi, Hanieh; Darooneh, Amir Hossein

    2018-04-01

Earthquakes as spatio-temporal complex systems have recently been studied using complex network theory. Seismic networks are dynamical networks: the addition of new seismic events over time establishes new nodes and links in the network. Here we have constructed the Iran and Italy seismic networks based on the Hybrid Model and tested the preferential attachment hypothesis for the connection of new nodes, which states that newly added nodes are more likely to join highly connected nodes than less connected ones. We show that preferential attachment is present in earthquake networks and that the attachment rate has a linear relationship with node degree. We have also found the seismically passive points, the points most likely to be influenced by other seismic places, using their preferential attachment values.
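The hypothesis being tested can be illustrated with a minimal growing-network sketch: each newcomer attaches to an existing node with probability proportional to its degree, and the degree of the chosen target is recorded so the attachment rate can be plotted against degree. This is a generic linear-preferential-attachment toy, not the Hybrid Model of the paper.

```python
import random
from collections import Counter

def grow_pa_network(n, rng):
    """Grow a toy network one node at a time; each newcomer links to one
    existing node chosen with probability proportional to its degree
    (the degree-weighted pool makes the sampling linear in k)."""
    degree = {0: 1, 1: 1}        # seed: a single edge between nodes 0 and 1
    targets = [0, 1]             # each node appears once per unit of degree
    attach_deg = Counter()       # degree of the chosen target at attach time
    for new in range(2, n):
        t = rng.choice(targets)
        attach_deg[degree[t]] += 1
        degree[t] += 1
        degree[new] = 1
        targets += [t, new]
    return degree, attach_deg

rng = random.Random(42)
degree, attach_deg = grow_pa_network(3000, rng)
# normalizing attach_deg[k] by the number of degree-k nodes available at each
# step gives the empirical attachment rate, expected to grow linearly in k
```

Hubs (high-degree nodes) emerge naturally under this rule, which is the signature the study looks for in the real seismic networks.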

  13. Earthquake correlations and networks: A comparative study

    International Nuclear Information System (INIS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-01-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
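A Baiesi–Paczuski-style metric can be sketched as follows: for an earlier event i and a later event j, n_ij = c · t_ij · r_ij^d · 10^(−b·m_i) estimates how many events of magnitude ≥ m_i would be expected by chance in that space–time window, so a small n_ij marks a strongly correlated (causally connected) pair. The constants, threshold, and toy catalog below are illustrative, not the variation actually used in the paper.

```python
import math

def bp_metric(ei, ej, b=1.0, d=1.6, c=1.0):
    """Space-time-magnitude correlation metric between an earlier event ei
    and a later event ej; small values flag unlikely-by-chance pairs.
    b ~ Gutenberg-Richter slope, d ~ fractal dimension (both assumed here)."""
    t_ij = ej["t"] - ei["t"]                       # days
    r_ij = math.hypot(ej["x"] - ei["x"], ej["y"] - ei["y"])  # km
    return c * t_ij * max(r_ij, 1e-3) ** d * 10.0 ** (-b * ei["m"])

def correlated_pairs(catalog, threshold=1e-3):
    """Link each event to the earlier events whose metric is below threshold."""
    links = []
    for j, ej in enumerate(catalog):
        for i, ei in enumerate(catalog[:j]):
            if bp_metric(ei, ej) < threshold:
                links.append((i, j))
    return links

catalog = [
    {"t": 0.0, "x": 0.0,   "y": 0.0, "m": 6.0},  # mainshock
    {"t": 1.0, "x": 2.0,   "y": 0.0, "m": 3.0},  # nearby, soon after
    {"t": 2.0, "x": 300.0, "y": 0.0, "m": 3.0},  # distant, likely unrelated
]
links = correlated_pairs(catalog)
```

Thresholding these links per cluster is what yields the recurrence lists whose length and time distributions the paper analyzes.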

  14. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

Earthquakes are among the most destructive natural hazards on our planet. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw 9.0) and the 26 December 2004 Sumatra (Mw 9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and (possible) future significant earthquakes have been performed to understand the factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and the design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.

  15. Metrics for comparing dynamic earthquake rupture simulations

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2014-01-01

    Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation make it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
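Since no exact analytic solution exists to score against, one generic way to compare two codes' outputs quantitatively is a normalized RMS misfit between the time series they produce at the same station (e.g. slip rate). This is a simple sketch of the idea, not the specific metrics defined by Barall and Harris.

```python
import math

def rupture_misfit(reference, other):
    """RMS difference between two on-fault time series, normalized by the
    RMS of the reference; 0 means identical outputs, larger means the two
    codes disagree more on the same benchmark."""
    n = min(len(reference), len(other))
    rms_diff = math.sqrt(sum((reference[i] - other[i]) ** 2 for i in range(n)) / n)
    rms_ref = math.sqrt(sum(v * v for v in reference[:n]) / n) or 1.0
    return rms_diff / rms_ref

# toy slip-rate histories from two hypothetical codes on one benchmark
code_a = [0.0, 0.5, 1.0, 0.7, 0.2]
code_b = [0.0, 0.45, 1.05, 0.72, 0.18]
m = rupture_misfit(code_a, code_b)
```

Applied pairwise across all participating codes, such misfits give the consistent, quantitative comparison the benchmark exercise needs.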

  16. Earthquake correlations and networks: A comparative study

    Science.gov (United States)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-04-01

We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.

  17. Earthquake evaluation of a substation network

    International Nuclear Information System (INIS)

    Matsuda, E.N.; Savage, W.U.; Williams, K.K.; Laguens, G.C.

    1991-01-01

The impact of the occurrence of a large, damaging earthquake on a regional electric power system is a function of the geographical distribution of strong shaking, the vulnerability of the various types of electric equipment located within the affected region, and the operational resources available to maintain or restore electric system functionality. Experience from numerous worldwide earthquake occurrences has shown that seismic damage to high-voltage substation equipment is typically the reason for post-earthquake loss of electric service. In this paper, the authors develop and apply a methodology to analyze earthquake impacts on Pacific Gas and Electric Company's (PG&E's) high-voltage electric substation network in central and northern California. The authors' objectives are to identify and prioritize ways to reduce the potential impact of future earthquakes on the electric system, refine PG&E's earthquake preparedness and response plans to be more realistic, and optimize seismic criteria for future equipment purchases for the electric system.

  18. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures that generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms, together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response, have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids, as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to vary in time or to be constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra.
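The common synthesis model described above can be sketched directly: superpose sinusoids with random phase angles under an intensity envelope. The trapezoidal envelope, amplitudes, and frequencies below are invented for illustration; a SIMQKE-like code would additionally iterate the amplitudes until the motion's response spectrum matches the target design spectrum.

```python
import math
import random

def synthesize_accelerogram(amps, freqs, duration=10.0, dt=0.01, rng=None):
    """Superpose sinusoids with random phase angles and shape the result
    with a trapezoidal intensity envelope (ramp up, hold, ramp down)."""
    rng = rng or random.Random(0)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    n = int(duration / dt)
    motion = []
    for i in range(n):
        t = i * dt
        env = min(t / 2.0, 1.0, (duration - t) / 3.0)  # intensity vs. time
        a = sum(A * math.sin(2.0 * math.pi * f * t + p)
                for A, f, p in zip(amps, freqs, phases))
        motion.append(env * a)
    return motion

# toy Fourier amplitudes (g) at three frequencies (Hz)
acc = synthesize_accelerogram(amps=[1.0, 0.6, 0.3], freqs=[1.0, 3.0, 7.0])
```

In practice many more components are used, spaced to cover the frequency band of the target spectrum.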

  19. Evaluation and optimization of seismic networks and algorithms for earthquake early warning – the case of Istanbul (Turkey)

    OpenAIRE

    Oth, Adrien; Böse, Maren; Wenzel, Friedemann; Köhler, Nina; Erdik, Mustafa

    2010-01-01

Earthquake early warning (EEW) systems should provide reliable warnings as quickly as possible with a minimum number of false and missed alarms. Using the example of the megacity Istanbul and based on a set of simulated scenario earthquakes, we present a novel approach for evaluating and optimizing seismic networks for EEW, in particular in regions with few instrumentally recorded earthquakes. We show that, while the current station locations of the existing Istanbul EEW system...

  20. A decision support system for pre-earthquake planning of lifeline networks

    Energy Technology Data Exchange (ETDEWEB)

    Liang, J.W. [Tianjin Univ. (China). Dept. of Civil Engineering

    1996-12-01

This paper describes the framework of a decision support system for pre-earthquake planning of gas and water networks. The system is mainly based on earthquake experience and lessons from the 1976 Tangshan earthquake. The objective of the system is to offer countermeasures and help make decisions on the seismic strengthening, remaking, and upgrading of gas and water networks.

  1. Dynamic fracture network around faults: implications for earthquake ruptures, ground motion and energy budget

    Science.gov (United States)

    Okubo, K.; Bhat, H. S.; Rougier, E.; Lei, Z.; Knight, E. E.; Klinger, Y.

    2017-12-01

Numerous studies have suggested that spontaneous earthquake ruptures can dynamically induce failure in a secondary fracture network, regarded as the damage zone around faults. The feedback from such a fracture network plays a crucial role in earthquake rupture, its radiated wave field, and the total energy budget. A novel numerical modeling tool based on the combined finite-discrete element method (FDEM), which accounts for the main rupture propagation and the nucleation/propagation of secondary cracks, was used to quantify the evolution of the fracture network and evaluate its effects on the main rupture and its associated radiation. The simulations were performed with the FDEM-based software tool, Hybrid Optimization Software Suite (HOSSedu), developed by Los Alamos National Laboratory. We first modeled an earthquake rupture on a planar strike-slip fault surrounded by a brittle medium where secondary cracks can be nucleated/activated by the earthquake rupture. We show that the secondary cracks are generated dynamically, predominantly on the extensional side of the fault and mainly behind the rupture front, forming an intricate network of fractures in the damage zone. The rupture velocity thereby decreases significantly, by 10 to 20 percent, while the supershear transition length increases in comparison to the case of a purely elastic medium. It is also observed that the high-frequency component (10 to 100 Hz) of the near-field ground acceleration is enhanced by the dynamically activated fracture network, consistent with field observations. We then conducted case studies at depth, with various sets of initial stress states and friction properties, to investigate the evolution of the damage zone. We show that the width of the damage zone decreases with depth, forming a "flower-like" structure, when the characteristic slip distance in the linear slip-weakening law, or the fracture energy on the fault, is kept constant with depth. Finally, we compared the fracture energy on the fault to the energy

  2. A geographical and multi-criteria vulnerability assessment of transportation networks against extreme earthquakes

    International Nuclear Information System (INIS)

    Kermanshah, A.; Derrible, S.

    2016-01-01

The purpose of this study is to provide a geographical and multi-criteria vulnerability assessment method to quantify the impacts of extreme earthquakes on road networks. The method is applied to two US cities, Los Angeles and San Francisco, both of which are susceptible to severe seismic activity. Aided by the recent proliferation of data and the wide adoption of Geographic Information Systems (GIS), we use a data-driven approach based on USGS ShakeMaps to determine vulnerable locations in road networks. To simulate an extreme earthquake, we remove road sections within the "very strong" intensities provided by USGS. Subsequently, we measure vulnerability as a percentage drop in four families of metrics: overall properties (length of the remaining system); topological indicators (betweenness centrality); accessibility; and travel demand, using Longitudinal Employment Household Dynamics (LEHD) data. The various metrics are then plotted on a Vulnerability Surface (VS), whose area can be taken as an overall vulnerability indicator. This VS approach offers a simple and pertinent method to capture the impacts of extreme earthquakes. It can also be useful to planners for assessing the robustness of alternative scenarios in their plans, to ensure that cities located in seismic areas are better prepared to face severe earthquakes.
- Highlights:
  • Developed a geographical and multi-criteria vulnerability assessment method.
  • Quantified the impacts of extreme earthquakes on transportation networks.
  • Used a data-driven approach with USGS ShakeMaps to determine vulnerable locations.
  • Measured vulnerability as a percentage drop in four families of metrics: overall properties, topological indicators, accessibility, and travel demand (using LEHD data).
  • Developed the Vulnerability Surface (VS), a new pragmatic vulnerability indicator.
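The "percentage drop" idea generalizes across all four metric families; the sketch below applies it to one simple topological property, the size of the largest connected component, on an invented toy road graph with two hypothetical damaged links.

```python
def largest_component(nodes, edges):
    """Size of the largest connected component, via depth-first search."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for n in nodes:
        if n in seen:
            continue
        stack, size = [n], 0
        seen.add(n)
        while stack:
            cur = stack.pop()
            size += 1
            for m in adj[cur]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        best = max(best, size)
    return best

def vulnerability(nodes, edges, damaged_edges):
    """Percentage drop in the metric after removing road sections inside
    the strong-shaking zone; length- and demand-based metrics would be
    computed the same way with a different metric function."""
    before = largest_component(nodes, edges)
    after = largest_component(nodes, [e for e in edges if e not in damaged_edges])
    return 100.0 * (before - after) / before

nodes = list(range(6))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
drop = vulnerability(nodes, edges, damaged_edges={(2, 3), (1, 4)})
```

Plotting such drops for several metrics at once is what traces out the Vulnerability Surface.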

  3. MyShake: A smartphone seismic network for earthquake early warning and beyond.

    Science.gov (United States)

    Kong, Qingkai; Allen, Richard M; Schreier, Louis; Kwon, Young-Woo

    2016-02-01

    Large magnitude earthquakes in urban environments continue to kill and injure tens to hundreds of thousands of people, inflicting lasting societal and economic disasters. Earthquake early warning (EEW) provides seconds to minutes of warning, allowing people to move to safe zones and automated slowdown and shutdown of transit and other machinery. The handful of EEW systems operating around the world use traditional seismic and geodetic networks that exist only in a few nations. Smartphones are much more prevalent than traditional networks and contain accelerometers that can also be used to detect earthquakes. We report on the development of a new type of seismic system, MyShake, that harnesses personal/private smartphone sensors to collect data and analyze earthquakes. We show that smartphones can record magnitude 5 earthquakes at distances of 10 km or less and develop an on-phone detection capability to separate earthquakes from other everyday shakes. Our proof-of-concept system then collects earthquake data at a central site where a network detection algorithm confirms that an earthquake is under way and estimates the location and magnitude in real time. This information can then be used to issue an alert of forthcoming ground shaking. MyShake could be used to enhance EEW in regions with traditional networks and could provide the only EEW capability in regions without. In addition, the seismic waveforms recorded could be used to deliver rapid microseism maps, study impacts on buildings, and possibly image shallow earth structure and earthquake rupture kinematics.
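
    The on-phone step above must separate earthquake shaking from everyday phone motion; the paper's trained classifier is not reproduced here, but a classical STA/LTA (short-term over long-term average) trigger on acceleration amplitude illustrates the basic idea, with window lengths and threshold as assumed values:

```python
def sta_lta_trigger(samples, sta_len=50, lta_len=500, threshold=3.0):
    """Return True if the short-term average of |acceleration| exceeds
    `threshold` times the long-term average anywhere in the record."""
    absa = [abs(x) for x in samples]
    for i in range(lta_len, len(absa) - sta_len):
        lta = sum(absa[i - lta_len:i]) / lta_len   # background level
        sta = sum(absa[i:i + sta_len]) / sta_len   # current level
        if lta > 0.0 and sta / lta >= threshold:
            return True
    return False
```

    In the MyShake architecture, a phone-side trigger of this kind only nominates candidate events; the network detection algorithm at the central site confirms that many phones triggered coherently before declaring an earthquake.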

  4. Earthquake simulations with time-dependent nucleation and long-range interactions

    Directory of Open Access Journals (Sweden)

    J. H. Dieterich

    1995-01-01

    A model for rapid simulation of earthquake sequences is introduced which incorporates long-range elastic interactions among fault elements and time-dependent earthquake nucleation inferred from experimentally derived rate- and state-dependent fault constitutive properties. The model consists of a planar two-dimensional fault surface which is periodic in both the x- and y-directions. Elastic interactions among fault elements are represented by an array of elastic dislocations. Approximate solutions for earthquake nucleation and the dynamics of earthquake slip are introduced which permit computations to proceed in steps determined by the transitions from one sliding state to the next. The transition-driven time stepping and the avoidance of systems of simultaneous equations permit rapid simulation of large sequences of earthquake events on computers of modest capacity, while preserving characteristics of the nucleation and rupture propagation processes evident in more detailed models. Earthquakes simulated with this model reproduce many of the observed spatial and temporal characteristics of clustering phenomena, including foreshock and aftershock sequences. Clustering arises because the time dependence of the nucleation process is highly sensitive to stress perturbations caused by nearby earthquakes. The rate of earthquake activity following a prior earthquake decays according to Omori's aftershock decay law and falls off with distance.
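
    The Omori decay that the simulated sequences reproduce has a simple closed form; a sketch with illustrative productivity and c-value constants (not fitted to any catalog):

```python
def omori_rate(t_days, K=100.0, c=0.1, p=1.0):
    """Modified Omori law: aftershock rate K / (c + t)^p at time t after the
    mainshock. p = 1 recovers the original Omori law that the simulations
    reproduce; c regularizes the rate at t = 0."""
    return K / (c + t_days) ** p
```

    With p = 1 the rate falls by roughly a factor of ten per decade of elapsed time, which is why aftershock sequences remain detectable for years even after intermediate-magnitude events.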

  5. Artificial earthquake record generation using cascade neural network

    Directory of Open Access Journals (Sweden)

    Bani-Hani Khaldoon A.

    2017-01-01

    This paper presents the results of using artificial neural networks (ANN) in an inverse mapping problem for earthquake accelerogram generation. The study comprises two parts. First, a 1-D site response analysis is performed for the Dubai Emirate (UAE): eight earthquake records are selected and spectral matching is performed to match the Dubai response spectrum using the SeismoMatch software. Dubai soils are considered under two site classes, C and D, based on the shear-wave velocity of the soil profiles, and amplification factors are estimated to quantify the soil effect. Dubai's design response spectra are developed for site classes C and D according to the International Building Code (IBC 2012). In the second part, an ANN is employed to solve the inverse mapping problem of generating a time-history earthquake record. Thirty earthquake records and their design response spectra with 5% damping are used to train two cascade forward-backward neural networks (ANN1, ANN2). ANN1 is trained to map the design response spectrum to a time history, and ANN2 is trained to map time-history records to the design response spectrum. Generalized time-history earthquake records are generated using ANN1 for Dubai's site classes C and D, and ANN2 is used to evaluate the performance of ANN1.
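
    The forward mapping that the second network must learn, from a time history to its 5%-damped response spectrum, can also be computed directly by integrating a damped single-degree-of-freedom oscillator at each period. A hedged central-difference sketch (units and step size are the caller's responsibility; stability requires dt well below the period):

```python
import math

def pseudo_accel(accel, dt, period, zeta=0.05):
    """Peak pseudo-acceleration (same units as `accel`) of a damped SDOF
    oscillator, u'' + 2*zeta*wn*u' + wn^2*u = -a_g(t), integrated by
    central differences."""
    wn = 2.0 * math.pi / period
    u_prev, u, peak = 0.0, 0.0, 0.0
    for ag in accel:
        acc = -ag - 2.0 * zeta * wn * (u - u_prev) / dt - wn * wn * u
        u_next = 2.0 * u - u_prev + dt * dt * acc
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return wn * wn * peak  # pseudo-acceleration = wn^2 * peak displacement
```

    Evaluating this over a grid of periods yields one spectrum; thirty record/spectrum pairs of this kind form the training set described above.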

  6. US earthquake observatories: recommendations for a new national network

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    This report is the first attempt by the seismological community to rationalize and optimize the distribution of earthquake observatories across the United States. The main aim is to increase significantly our knowledge of earthquakes and the earth's dynamics by providing access to scientifically more valuable data. Other objectives are to provide a more efficient and cost-effective system of recording and distributing earthquake data and to make as uniform as possible the recording of earthquakes in all states. The central recommendation of the Panel is that the guiding concept be established of a rationalized and integrated seismograph system consisting of regional seismograph networks run for crucial regional research and monitoring purposes in tandem with a carefully designed, but sparser, nationwide network of technologically advanced observatories. Such a national system must be thought of not only in terms of instrumentation but equally in terms of data storage, computer processing, and record availability.

  7. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the simulator's outcomes; within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that, for some specific scientific objectives, a simple simulator may be more informative than a complex one because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is fault interaction modeling through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
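
    The interaction ingredient named above, the Coulomb Failure Function, reduces to a one-line stress-transfer rule; a sketch with an assumed effective friction coefficient and the common tension-positive sign convention:

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress on a receiver fault (e.g. MPa):
    d_shear is the shear-stress change resolved in the slip direction and
    d_normal the normal-stress change (tension positive, so unclamping
    promotes failure). Positive output moves the fault toward failure.
    mu_eff = 0.4 is an assumed effective friction coefficient."""
    return d_shear + mu_eff * d_normal
```

    In a simulator of this kind, each event's stress changes on every other fault are computed from dislocation theory, and positive values advance those faults' clocks toward their next earthquake.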

  8. The wireless networking system of Earthquake precursor mobile field observation

    Science.gov (United States)

    Wang, C.; Teng, Y.; Wang, X.; Fan, X.; Wang, X.

    2012-12-01

    A mobile field observation network can record and transmit large amounts of data in real time and reliably, strengthening physical-signal observations in specific regions and periods and thereby improving monitoring capacity and the ability to track anomalies. Because current earthquake precursor observation points are numerous and widely scattered, the networking technology is based on the McWILL broadband wireless access system. Through the connections between the instruments, the wireless access system, and the precursor mobile observation management center, the communication system reliably transmits large amounts of data in real time from the measuring points to the monitoring center, thereby implementing remote instrument monitoring and data transmission. At present, this networking technology has been applied to fluxgate magnetometer array geomagnetic observations at Tianzhu, Xichang, and Xinjiang, where it allows real-time monitoring of the working status of instruments deployed over a large area during the last two to three years of large-scale field operation. It can therefore provide geomagnetic field data for locally refined regions and high-quality observational data for impending-earthquake tracking and forecasting. Although wireless networking technology is well suited to mobile field observation, being simple and flexible to deploy, it can also suffer packet loss when transmitting large volumes of observational data, owing to the relatively weak wireless signal and narrow bandwidth. For high-sampling-rate instruments, this project uses data compression and effectively solves the problem of packet loss during data transmission; control commands, status data, and observational data are transmitted with different priorities and means, which control the packet loss rate within

  9. Earthquake Complex Network applied along the Chilean Subduction Zone.

    Science.gov (United States)

    Martin, F.; Pasten, D.; Comte, D.

    2017-12-01

    In recent years earthquake complex networks have been used as a useful tool to describe and characterize the behavior of seismicity. The earthquake complex network is built in space by dividing the three-dimensional volume into cubic cells; a cell that contains a hypocenter becomes a node, and connections between nodes follow the time sequence of the seismic events. In this sense, we obtain a spatio-temporal configuration of a specific region from its seismicity. In this work we apply complex networks to characterize the subduction zone along the coast of Chile using two networks: a directed and an undirected one. The directed network takes into account the time direction of the connections, which is very important for the connectivity of the network: we define the connectivity k_i of the i-th node as the number of connections going out of that node plus its self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network results from removing the directions of the connections and the self-connections from the directed network. These two networks were built using seismic events recorded by the CSN (Chilean Seismological Center). The analysis includes the latest large earthquakes, which occurred in Iquique (April 2014) and Illapel (September 2015). For the directed network, the results show a change in the value of the critical exponent along the Chilean coast. For the undirected network, the results show small-world behavior without important changes in the topology of the network. The complex network analysis therefore offers a new way to characterize the Chilean subduction zone with a simple method that can be compared with other methods to obtain more details about the behavior of seismicity in this region.
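
    The construction described, cubic cells as nodes and time-consecutive events as directed links (self-links included), is short enough to state in code; the cell size is an illustrative assumption:

```python
from collections import defaultdict

def build_directed_network(hypocenters, cell=10.0):
    """Map a time-ordered hypocenter sequence ((x, y, z) in km) to a directed
    earthquake network: the cubic cell holding each event is a node, and each
    pair of consecutive events contributes one directed link between their
    cells (a self-link if both fall in the same cell). Returns the edge list
    and the out-degree k_i of each node."""
    def node(p):
        return tuple(int(c // cell) for c in p)
    out_degree = defaultdict(int)
    edges = []
    for a, b in zip(hypocenters, hypocenters[1:]):
        edges.append((node(a), node(b)))
        out_degree[node(a)] += 1
    return edges, dict(out_degree)
```

    The undirected variant is obtained by dropping the link directions and discarding the self-links.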

  10. Hybrid Simulations of the Broadband Ground Motions for the 2008 MS8.0 Wenchuan, China, Earthquake

    Science.gov (United States)

    Yu, X.; Zhang, W.

    2012-12-01

    The Ms8.0 Wenchuan earthquake occurred on 12 May 2008 at 14:28 Beijing time. It is the largest event to have occurred in mainland China since the 1976 Mw7.6 Tangshan earthquake. Because it struck a mountainous area, this great earthquake and the following thousands of aftershocks also caused many other geological disasters, such as landslides, mud-rock flows, and "quake lakes" formed by landslide-dammed rivers. These resulted in tremendous losses of life and property: casualties numbered more than 80,000 people, and there were major economic losses. This earthquake is, however, the first Ms 8 intraplate earthquake with good close-fault strong-motion coverage. Over four hundred strong-motion stations of the National Strong Motion Observation Network System (NSMONS) recorded the mainshock; twelve of them are located within 20 km of the fault traces and another 33 within 100 km. These observations, together with hundreds of GPS vectors and multiple ALOS InSAR images, provide an unprecedented opportunity to study the rupture process of such a great intraplate earthquake. In this study, we calculate broadband near-field synthetic ground motions for this earthquake using a hybrid broadband ground-motion simulation methodology, which combines a deterministic approach at low frequencies (f < 1.0 Hz) with a theoretical Green's function approach at high frequencies (up to ~10.0 Hz). The fault rupture is represented kinematically and incorporates spatial heterogeneity in slip, rupture speed, and rise time obtained from an inverted kinematic source model. Based on the aftershock data, we also analyze site effects for the near-field stations: frequency-dependent site-amplification values for each station are calculated using genetic algorithms. For the calculation of the synthetic waveforms, we first carry out simulations using the hybrid methodology for frequencies up to 10.0 Hz. Then, we consider for

  11. Aftershocks of the India Republic Day Earthquake: the MAEC/ISTAR Temporary Seismograph Network

    Science.gov (United States)

    Bodin, P.; Horton, S.; Johnston, A.; Patterson, G.; Bollwerk, J.; Rydelek, P.; Steiner, G.; McGoldrick, C.; Budhbhatti, K. P.; Shah, R.; Macwan, N.

    2001-05-01

    The MW=7.7 Republic Day (26 January 2001) earthquake in the Kachchh region of western India initiated a strong sequence of small aftershocks. Seventeen days after the mainshock, we deployed a network of portable digital event recorders as a cooperative project of the Mid-America Earthquake Center in the US and the Institute for Scientific and Technological Advanced Research. Our network consisted of 8 event-triggered Kinemetrics K2 seismographs with 6 data channels (3 accelerometer, 3 Mark L-28/3d seismometer) sampled at 200 Hz, and one continuously recording Guralp CMG40TD broadband seismometer sampled at 220 Hz. This network was in place for 18 days. Underlying our deployment was the notion that, because of its tectonic and geologic setting, the Republic Day earthquake and its aftershocks might have source and/or propagation characteristics common to earthquakes in stable continental plate interiors rather than those on plate boundaries or within continental mobile belts. Thus, our goal was to provide data that could be used to compare the Republic Day earthquake with other earthquakes. In particular, the objectives of our deployment were: (1) to characterize the spatial distribution and occurrence rates of aftershocks, (2) to examine source characteristics of the aftershocks (stress drops, focal mechanisms), (3) to study the effect of deep unconsolidated sediment on wave propagation, and (4) to determine whether other faults (notably the Allah Bundh) were simultaneously active. Most of our sites were on Jurassic bedrock, and all were either free-field or on the floor of light structures built on rock or on a thin soil cover. However, one of our stations was on a section of unconsolidated sediments hundreds of meters thick, adjacent to a site that was subjected to shaking-induced sediment liquefaction during the mainshock. The largest aftershock reported by global networks was an MW=5.9 event on January 28, prior to our deployment. The largest

  12. Broadband Ground Motion Observation and Simulation for the 2016 Kumamoto Earthquake

    Science.gov (United States)

    Miyake, H.; Chimoto, K.; Yamanaka, H.; Tsuno, S.; Korenaga, M.; Yamada, N.; Matsushima, T.; Miyakawa, K.

    2016-12-01

    During the 2016 Kumamoto earthquake, strong-motion data were widely recorded by the permanent dense triggered strong-motion network K-NET/KiK-net and by seismic intensity meters installed by local governments and the JMA. Seismic intensities close to MMI 9-10 were recorded twice at Mashiki town, and once at Nishihara village and at KiK-net Mashiki (KMMH16, ground surface). Near-fault records indicate extreme ground motion exceeding 400 cm/s in 5%-damped pSv at a period of 1 s for Mashiki town and 3-4 s for Nishihara village. Fault-parallel velocity components are larger between Mashiki town and Nishihara village, whereas fault-normal velocity components are larger inside the caldera of the Aso volcano. The former indicates that the rupture passed through the along-strike stations; the latter that those stations lie in the forward rupture direction (e.g., Miyatake, 1999). In addition to the permanent observations, temporary continuous strong-motion stations were installed just after the earthquake in Kumamoto city, Mashiki town, Nishihara village, Minami-Aso village, and Aso town (e.g., Chimoto et al., 2016; Tsuno et al., 2016; Yamanaka et al., 2016). This study estimates strong-motion generation areas for the 2016 Kumamoto earthquake sequence using the empirical Green's function method, and then simulates broadband ground motions for both the permanent and temporary strong-motion stations. Currently the target period range is 0.1 s to 5-10 s, limited by the signal-to-noise ratio of the element earthquakes used as empirical Green's functions. We also keep the fault subdivision parameter N between 4 and 10 to avoid spectral sags and artificial periodicity. The simulated seismic intensities as well as the fault-normal and fault-parallel velocity components will be discussed.

  13. The ordered network structure and prediction summary for M ≥ 7 earthquakes in Xinjiang region of China

    International Nuclear Information System (INIS)

    Men, Ke-Pei; Zhao, Kai

    2014-01-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China, and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Guided by the information forecasting theory of Wen-Bo Weng and building on previous research results, we combine ordered network structure analysis with complex network technology, focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, and add new information to further optimize the network, thus constructing the 2D and 3D ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region during the past 210 years. On this basis, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction is presented: the next two M ≥ 7 earthquakes in this region will probably occur around 2019-2020 and 2025-2026. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.

  14. Data Files for Ground-Motion Simulations of the 1906 San Francisco Earthquake and Scenario Earthquakes on the Northern San Andreas Fault

    Science.gov (United States)

    Aagaard, Brad T.; Barall, Michael; Brocher, Thomas M.; Dolenc, David; Dreger, Douglas; Graves, Robert W.; Harmsen, Stephen; Hartzell, Stephen; Larsen, Shawn; McCandless, Kathleen; Nilsson, Stefan; Petersson, N. Anders; Rodgers, Arthur; Sjogreen, Bjorn; Zoback, Mary Lou

    2009-01-01

    This data set contains results from ground-motion simulations of the 1906 San Francisco earthquake, seven hypothetical earthquakes on the northern San Andreas Fault, and the 1989 Loma Prieta earthquake. The bulk of the data consists of synthetic velocity time-histories. Peak ground velocity on a 1/60th degree grid and geodetic displacements from the simulations are also included. Details of the ground-motion simulations and analysis of the results are discussed in Aagaard and others (2008a,b).

  15. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical-modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP, and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the parallel performance of the TS-AWP on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  16. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, owing to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to support proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used to decompose any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN, to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
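
    The RDA itself decomposes the system event into disjoint path and cut events; the non-simulation idea can be illustrated on a toy network by conditioning exactly on each edge state in turn (exponential in the number of edges, which is precisely the blow-up the RDA's decomposition avoids):

```python
from collections import defaultdict

def two_terminal_reliability(edges, probs, s, t):
    """Exact probability that s and t stay connected when edge i works
    independently with probability probs[i]. Brute-force conditioning;
    toy networks only."""
    def connected(up):
        adj = defaultdict(list)
        for u, v in up:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            n = stack.pop()
            if n == t:
                return True
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return s == t
    def rec(i, up):
        if i == len(edges):
            return 1.0 if connected(up) else 0.0
        p = probs[i]
        # Condition on edge i: works (keep it) or fails (drop it).
        return p * rec(i + 1, up + [edges[i]]) + (1.0 - p) * rec(i + 1, up)
    return rec(0, [])
```

    Extending this to the paper's setting means requiring several source-terminal pairs simultaneously (intersection) or alternatively (union), which is what the two RDA decomposition processes handle.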

  17. Assessing earthquake early warning using sparse networks in developing countries: Case study of the Kyrgyz Republic

    Science.gov (United States)

    Parolai, Stefano; Boxberger, Tobias; Pilz, Marco; Fleming, Kevin; Haas, Michael; Pittore, Massimiliano; Petrovic, Bojana; Moldobekov, Bolot; Zubovich, Alexander; Lauterjung, Joern

    2017-09-01

    The first real-time digital strong-motion network in Central Asia has been operating in the Kyrgyz Republic since 2014. Although this network consists of only 19 strong-motion stations, they are located in near-optimal locations for earthquake early warning and rapid response purposes. In fact, it is expected that this network, which utilizes the GFZ-Sentry software allowing decentralized event assessment calculations, will not only provide strong-motion data useful for improving future seismic hazard and risk assessment, but will also serve as the backbone for regional and on-site earthquake early warning operations. Based on the locations of these stations and travel-time estimates for P- and S-waves, we have determined potential lead times for several major urban areas in Kyrgyzstan (i.e., Bishkek, Osh, and Karakol) and Kazakhstan (Almaty), where we find that an efficient earthquake early warning system would provide lead times outside the blind zone ranging from several seconds up to several tens of seconds. This was confirmed by simulating the shaking (and intensity) that would arise under a series of scenarios based on historical and expected events, and how they affect the major urban centres. Such lead times would allow automatic mitigation procedures to be triggered, while the system as a whole would support prompt and efficient actions over large areas.
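
    The lead-time estimate described above is, at its core, an S-wave arrival time minus a detection-plus-alert latency. A sketch with hypothetical crustal velocities, focal depth, and latency (not the values used for the Kyrgyz network):

```python
import math

def lead_time(city_km, depth_km=15.0, vp=6.0, vs=3.5, latency_s=4.0):
    """Warning lead time (s) at a city `city_km` from the epicenter, assuming
    a station at the epicenter: S-wave arrival at the city minus (P-wave
    travel time to the station + processing/alert latency). Negative values
    mean the city lies inside the blind zone."""
    t_s_arrival = math.hypot(city_km, depth_km) / vs
    t_alert = depth_km / vp + latency_s
    return t_s_arrival - t_alert
```

    With these assumed numbers a city 100 km away gains roughly 20 s of warning, while one 10 km away sits inside the blind zone, mirroring the "several seconds up to several tens of seconds" range quoted above.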

  18. Connection with seismic networks and construction of real time earthquake monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Heon Cheol; Lee, H. I.; Shin, I. C.; Lim, I. S.; Park, J. H.; Lee, B. K.; Whee, K. H.; Cho, C. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2000-12-15

    It is natural to use the nuclear power plant seismic network that has been operated by KEPRI (Korea Electric Power Research Institute) together with the local seismic network operated by KIGAM (Korea Institute of Geology, Mining and Materials). The real-time earthquake monitoring system is composed of a monitoring module and a database module. The database module stores and classifies the seismic data, while the monitoring module displays the acceleration status in the nuclear power plant area. This research targeted, first, networking the KIN seismic monitoring system with the KIGAM and KEPRI seismic networks and, second, constructing KIN's independent earthquake monitoring system.

  19. Connection with seismic networks and construction of real time earthquake monitoring system

    International Nuclear Information System (INIS)

    Chi, Heon Cheol; Lee, H. I.; Shin, I. C.; Lim, I. S.; Park, J. H.; Lee, B. K.; Whee, K. H.; Cho, C. S.

    2000-12-01

    It is natural to use the nuclear power plant seismic network that has been operated by KEPRI (Korea Electric Power Research Institute) together with the local seismic network operated by KIGAM (Korea Institute of Geology, Mining and Materials). The real-time earthquake monitoring system is composed of a monitoring module and a database module. The database module stores and classifies the seismic data, while the monitoring module displays the acceleration status in the nuclear power plant area. This research targeted, first, networking the KIN seismic monitoring system with the KIGAM and KEPRI seismic networks and, second, constructing KIN's independent earthquake monitoring system

  20. Scale-free networks of earthquakes and aftershocks

    International Nuclear Information System (INIS)

    Baiesi, Marco; Paczuski, Maya

    2004-01-01

    We propose a metric to quantify correlations between earthquakes. The metric consists of a product involving the time interval and spatial distance between two events, as well as the magnitude of the first one. According to this metric, events typically are strongly correlated to only one or a few preceding ones. Thus a classification of events as foreshocks, main shocks, or aftershocks emerges automatically without imposing predetermined space-time windows. In the simplest network construction, each earthquake receives an incoming link from its most correlated predecessor. The number of aftershocks for any event, identified by its outgoing links, is found to be scale free with exponent γ=2.0(1). The original Omori law with p=1 emerges as a robust feature of seismicity, holding up to years even for aftershock sequences initiated by intermediate magnitude events. The broad distribution of distances between earthquakes and their linked aftershocks suggests that aftershock collection with fixed space windows is not appropriate
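
    The metric's product form, elapsed time, separation raised to a fractal dimension, and a Gutenberg-Richter factor 10^(-b m) for the earlier event, is easy to sketch; the b-value and fractal dimension below are illustrative assumptions:

```python
import math

def bp_distance(dt_sec, dr_km, mag_i, b=1.0, d_f=1.6):
    """Space-time-magnitude distance n_ij between event i and a later event j;
    a smaller value means j is more strongly correlated to i."""
    return dt_sec * dr_km ** d_f * 10.0 ** (-b * mag_i)

def strongest_predecessor(events, j):
    """Index of the predecessor most correlated with event j, i.e. the node
    that sends j its single incoming link in the simplest network
    construction. `events` are (t_sec, x_km, y_km, mag) tuples in time order."""
    tj, xj, yj, _ = events[j]
    best, best_n = None, float("inf")
    for i in range(j):
        ti, xi, yi, mi = events[i]
        n = bp_distance(tj - ti, math.hypot(xj - xi, yj - yi), mi)
        if n < best_n:
            best, best_n = i, n
    return best
```

    Counting, for each event, how many later events choose it as their strongest predecessor gives the out-link (aftershock) counts whose distribution the paper finds to be scale-free.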

  1. Dense Ocean Floor Network for Earthquakes and Tsunamis; DONET/ DONET2, Part2 -Development and data application for the mega thrust earthquakes around the Nankai trough-

    Science.gov (United States)

    Kaneda, Y.; Kawaguchi, K.; Araki, E.; Matsumoto, H.; Nakamura, T.; Nakano, M.; Kamiya, S.; Ariyoshi, K.; Baba, T.; Ohori, M.; Hori, T.; Takahashi, N.; Kaneko, S.; Donet Research; Development Group

    2010-12-01

    DONET (Dense Ocean Floor Network for Earthquakes and Tsunamis) is a real-time monitoring system for the Tonankai seismogenic zone around the Nankai trough, southwestern Japan. We are developing DONET to perform real-time monitoring of crustal activity there and to support an advanced early warning system. DONET will provide important and useful data for understanding the Nankai trough mega-thrust earthquake seismogenic zones and for improving the accuracy of earthquake recurrence cycle simulations. Details of the DONET concept are as follows. 1) Redundancy, extendability, and an advanced maintenance system using a looped cable system, junction boxes, and ROVs/AUVs: DONET has 20 observatories and incorporates a double land-station concept; we have also developed an ROV for 10 km cable extensions and heavy-weight operations. 2) Multiple kinds of sensors to observe broadband phenomena such as long-period tremors, very-low-frequency earthquakes, and strong motions of mega-thrust earthquakes over M8: sensors such as a broadband seismometer, an accelerometer, a hydrophone, a precise pressure gauge, a differential pressure gauge, and a thermometer are installed at each DONET observatory. 3) Speedy detection, evaluation, and notification of earthquakes and tsunamis: the DONET system will be deployed around the Tonankai seismogenic zone. 4) Provision of ocean-floor crustal deformation data derived from pressure sensors: simultaneously, the development of data

  2. Building Capacity for Earthquake Monitoring: Linking Regional Networks with the Global Community

    Science.gov (United States)

    Willemann, R. J.; Lerner-Lam, A.

    2006-12-01

    Installing or upgrading a seismic monitoring network is often among the mitigation efforts after earthquake disasters, and this is happening in response to the events both in Sumatra during December 2004 and in Pakistan during October 2005. These networks can yield improved hazard assessment, more resilient buildings where they are most needed, and emergency relief directed more quickly to the worst hit areas after the next large earthquake. Several commercial organizations are well prepared for the fleeting opportunity to provide the instruments that comprise a seismic network, including sensors, data loggers, telemetry stations, and the computers and software required for the network center. But seismic monitoring requires more than hardware and software, no matter how advanced. A well-trained staff is required to select appropriate and mutually compatible components, install and maintain telemetered stations, manage and archive data, and perform the analyses that actually yield the intended benefits. Monitoring is more effective when network operators cooperate with a larger community through free and open exchange of data, sharing information about working practices, and international collaboration in research. As an academic consortium, a facility operator and a founding member of the International Federation of Digital Seismographic Networks, IRIS has access to a broad range of expertise with the skills that are required to help design, install, and operate a seismic network and earthquake analysis center, and stimulate the core training for the professional teams required to establish and maintain these facilities. 
But delivering expertise quickly when and where it is unexpectedly in demand requires advance planning and coordination in order to respond to the needs of organizations that are building a seismic network, either with tight time constraints imposed by the budget cycles of aid agencies following a disastrous earthquake, or as part of more informed

  3. Interactive visualization to advance earthquake simulation

    Science.gov (United States)

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, to evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  4. Earthquake simulation, actual earthquake monitoring and analytical methods for soil-structure interaction investigation

    Energy Technology Data Exchange (ETDEWEB)

    Tang, H T [Seismic Center, Electric Power Research Institute, Palo Alto, CA (United States)

    1988-07-01

    Approaches for conducting in-situ soil-structure interaction experiments are discussed. High explosives detonated underground can generate strong ground motion that induces soil-structure interaction (SSI). The explosive-induced data are useful for studying the dynamic characteristics of the soil-structure system associated with the inertial aspect of the SSI problem. The plane waves generated by explosives cannot adequately address the kinematic interaction associated with actual earthquakes because of the difference in wave fields and their effects. Earthquake monitoring is ideal for obtaining SSI data that can address all aspects of the SSI problem; the only limitation is the level of excitation that can be obtained. Neither simulated earthquake experiments nor earthquake monitoring experiments can achieve exact similitude if reduced-scale test structures are used. If gravity effects are small, reasonable correlations between the scaled model and the prototype can be obtained, provided that the input motion can be scaled appropriately. The key product of the in-situ experiments is a database that can be used to qualify analytical methods for prototypical applications. (author)

  5. Hybrid Broadband Ground-Motion Simulation Using Scenario Earthquakes for the Istanbul Area

    KAUST Repository

    Reshi, Owais A.

    2016-04-13

    Seismic design, analysis and retrofitting of structures demand an intensive assessment of potential ground motions in seismically active regions. Peak ground motions and the frequency content of seismic excitations strongly influence the behavior of structures. In regions with sparse ground-motion records, ground-motion simulations provide synthetic seismic records, which not only give insight into the mechanisms of earthquakes but also help improve some aspects of earthquake engineering. Broadband ground-motion simulation methods typically utilize physics-based modeling of source and path effects at low frequencies coupled with semi-stochastic methods at high frequencies. I apply the hybrid simulation method by Mai et al. (2010) to model several scenario earthquakes in the Marmara Sea, an area of high seismic hazard. Simulated ground motions were generated at 75 stations using systematically calibrated model parameters. The region-specific source, path and site model parameters were calibrated by simulating an Mw 4.1 Marmara Sea earthquake that occurred on November 16, 2015 on the fault segment in the vicinity of Istanbul. The calibrated parameters were then used to simulate scenario earthquakes with magnitudes Mw 6.0, 6.25, 6.5 and 6.75 on the Marmara Sea fault. The effects of fault geometry, hypocenter location, slip distribution and rupture propagation were thoroughly studied to understand variability in ground motions. A rigorous analysis of the waveforms reveals that these parameters are critical in determining the behavior of ground motions, especially in the near-field. Comparison of simulated ground-motion intensities with ground-motion prediction equations indicates the need to develop a region-specific ground-motion prediction equation for the Istanbul area. Peak ground motion maps are presented to illustrate the shaking in the Istanbul area due to the scenario earthquakes. 
The southern part of Istanbul, including the Princes' Islands, shows high amplitudes

  6. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    Science.gov (United States)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, simulation methods of earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost associated with obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response function as in the previous approach. In stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results in a normative three-dimensional problem, where a circular-shaped velocity-weakening area is set in a square-shaped fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. 
Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number
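The rate- and state-dependent friction law at the core of this kind of simulation has a standard closed form. As a minimal illustration (pure Python, not the authors' finite-element solver, and with illustrative parameter values), the sketch below integrates the Dieterich aging law at a constant imposed slip rate and confirms that friction relaxes to the steady-state value mu_ss = mu0 + (a - b) ln(V/V0):

```python
import math

def friction(V, theta, mu0=0.60, a=0.010, b=0.015, V0=1e-6, Dc=1e-4):
    """Rate- and state-dependent friction coefficient (Dieterich form)."""
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

def evolve_state(V, theta0, dt=0.01, steps=100_000, Dc=1e-4):
    """Explicit-Euler integration of the aging law d(theta)/dt = 1 - V*theta/Dc."""
    theta = theta0
    for _ in range(steps):
        theta += dt * (1.0 - V * theta / Dc)
    return theta

V = 1e-5                       # imposed slip rate, 10x the reference rate V0
theta = evolve_state(V, theta0=1.0)
mu = friction(V, theta)
mu_ss = 0.60 + (0.010 - 0.015) * math.log(V / 1e-6)  # steady-state prediction
print(mu, mu_ss)               # the two values agree once theta -> Dc/V
```

Because a < b here, steady-state friction decreases with slip rate (velocity weakening), which is the condition a velocity-weakening patch like the one in the abstract needs in order to nucleate events.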

  7. Damage Level Prediction of Reinforced Concrete Building Based on Earthquake Time History Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Suryanita Reni

    2017-01-01

    Full Text Available Strong earthquake motions can damage a building if they are not accounted for in the building's earthquake-resistant design. This study aims to predict the damage level of a building due to an earthquake using the Artificial Neural Network method. The building model is a reinforced concrete building with ten floors and a story height of 3.6 m. The model building was subjected to earthquake loads based on nine earthquake time-history records, each scaled to 0.5 g, 0.75 g, and 1.0 g. The Artificial Neural Networks were designed in four architectural models using the MATLAB program. Model 1 used displacement, velocity, and acceleration as inputs; Model 2 used only displacement; Model 3 used only velocity; and Model 4 used only acceleration. The output of the Neural Networks is the damage level of the building, categorized as Safe (1), Immediate Occupancy (2), Life Safety (3), or Collapse Prevention (4). According to the results, the Neural Network models predict the damage level with an accuracy of 85%-95%. Therefore, using an Artificial Neural Network is one solution for analyzing structural responses and the damage level promptly and efficiently when an earthquake occurs.
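The trained network itself cannot be reconstructed from the abstract, but its four output classes correspond to standard structural performance levels, which are often keyed to peak interstory drift. As a stand-in illustration only, the sketch below assigns the same four categories from drift ratio; the threshold values are assumptions loosely inspired by FEMA-style drift limits, not the paper's trained mapping:

```python
def damage_level(drift_ratio):
    """Map peak interstory drift ratio to the four output categories used by
    the networks. Threshold values are illustrative assumptions, not the
    paper's trained mapping."""
    if drift_ratio < 0.002:
        return 1  # Safe
    if drift_ratio < 0.010:
        return 2  # Immediate Occupancy
    if drift_ratio < 0.025:
        return 3  # Life Safety
    return 4      # Collapse Prevention

print([damage_level(d) for d in (0.001, 0.005, 0.02, 0.05)])  # [1, 2, 3, 4]
```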

  8. Discrimination between earthquakes and chemical explosions using artificial neural networks

    International Nuclear Information System (INIS)

    Kundu, Ajit; Bhadauria, Y.S.; Roy, Falguni

    2012-05-01

    An Artificial Neural Network (ANN) for discriminating between earthquakes and chemical explosions located at epicentral distances Δ < 5 deg from the Gauribidanur Array (GBA) has been developed using short-period digital seismograms recorded at GBA. For training the ANN, spectral amplitude ratios between P and Lg phases, computed at 13 different frequencies in the 2-8 Hz range for 20 earthquakes and 23 chemical explosions, were used along with other parameters such as magnitude, epicentral distance, and the amplitude ratios Rg/P and Rg/Lg. After training and development, the ANN correctly identified a set of 21 test events, comprising 6 earthquakes and 15 chemical explosions. (author)
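The discriminant features here are spectral amplitude ratios between phase windows at fixed frequencies. A minimal sketch of that feature computation, demonstrated on synthetic sinusoids rather than GBA seismograms:

```python
import cmath
import math

def spectral_amplitude(window, freq_hz, fs):
    """Single-frequency DFT amplitude of a windowed seismogram segment."""
    n = len(window)
    k = freq_hz * n / fs  # frequency bin index
    return abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i, x in enumerate(window)))

def p_lg_ratio(p_window, lg_window, freq_hz, fs):
    """Spectral amplitude ratio between a P window and an Lg window."""
    return (spectral_amplitude(p_window, freq_hz, fs)
            / spectral_amplitude(lg_window, freq_hz, fs))

fs = 100.0                          # samples per second
t = [i / fs for i in range(100)]    # 1-s windows
p_win = [2.0 * math.sin(2 * math.pi * 4 * x) for x in t]   # synthetic "P" window
lg_win = [1.0 * math.sin(2 * math.pi * 4 * x) for x in t]  # synthetic "Lg" window
ratio = p_lg_ratio(p_win, lg_win, 4.0, fs)
print(ratio)  # amplitude ratio of 2.0 at 4 Hz
```

In the paper, ratios like this evaluated at 13 frequencies form the input vector fed to the ANN.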

  9. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  10. A numerical simulation strategy on occupant evacuation behaviors and casualty prediction in a building during earthquakes

    Science.gov (United States)

    Li, Shuang; Yu, Xiaohui; Zhang, Yanjuan; Zhai, Changhai

    2018-01-01

    Casualty prediction in a building during earthquakes supports economic loss estimation in the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties during earthquakes, few current studies consider occupant movements in the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The simulation of occupant evacuation is verified against a recorded evacuation process from a school classroom in the real-life 2013 Ya'an earthquake in China. The occupant casualties in a building under earthquakes are evaluated by coupling the simulation of the building collapse process by the finite element method, the occupant evacuation simulation, and casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake demonstrates the effect of occupant movements on casualty prediction.
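The evacuation component of such a model can be reduced to a toy example. The sketch below is a deliberately minimal 1-D corridor automaton, an illustration of the update-rule idea only, not the paper's refined 2-D model:

```python
def evacuate(corridor):
    """Minimal 1-D cellular automaton: True marks an occupied cell; the exit is
    at index 0. Each step, the agent at the exit leaves and every other agent
    moves one cell toward the exit if that cell is free. Sweeping from the exit
    outward lets a just-vacated cell be refilled within the same step (one of
    several common CA update conventions). Returns the number of steps until
    the corridor is empty."""
    cells = list(corridor)
    steps = 0
    while any(cells):
        if cells[0]:
            cells[0] = False  # agent at the exit leaves the building
        for i in range(1, len(cells)):
            if cells[i] and not cells[i - 1]:
                cells[i - 1], cells[i] = True, False
        steps += 1
    return steps

print(evacuate([True, True, True, False, True]))  # 5
```

A refined model in the paper's spirit would replace the 1-D move rule with 2-D floor-field movement, variable walking speeds, and coupling to the collapsing-structure simulation.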

  11. Simulation analysis of earthquake response of nuclear power plant to the 2003 Miyagi-Oki earthquake

    International Nuclear Information System (INIS)

    Yoshihiro Ogata; Kiyoshi Hirotani; Masayuki Higuchi; Shingo Nakayama

    2005-01-01

    On May 26, 2003, an earthquake of magnitude 7.1 (Japan Meteorological Agency scale) occurred just offshore of Miyagi Prefecture. This was the largest earthquake ever experienced by the nuclear power plant of Tohoku Electric Power Co. in Onagawa (hereafter the Onagawa Nuclear Power Plant) during the 19 years since it started operations in 1984. In this report, we review the vibration characteristics of the reactor building of Onagawa Nuclear Power Plant Unit 1 based on acceleration records observed at the building, and give an account of a simulation analysis of the earthquake response carried out to ascertain the appropriateness of the design procedure and the seismic safety of the building. (authors)

  12. Simulation and monitoring tools to protect disaster management facilities against earthquakes

    Science.gov (United States)

    Saito, Taiki

    2017-10-01

    The earthquakes that hit Kumamoto Prefecture in Japan on April 14 and 16, 2016 severely damaged over 180,000 houses, including over 8,000 that were completely destroyed and others that were partially damaged, according to the Cabinet Office's report as of November 14, 2016 [1]. Following these earthquakes, other parts of the world have been struck by earthquakes, including Italy and New Zealand as well as the central part of Tottori Prefecture in October, where the earthquake-induced collapse of buildings has led to severe damage and casualties. The earthquakes in Kumamoto Prefecture, in fact, damaged various disaster management facilities including Uto City Hall, which significantly hindered the city's evacuation and recovery operations. One of the most crucial issues in times of disaster is securing the functions of disaster management facilities such as city halls, hospitals and fire stations. To address this issue, seismic simulations were conducted on the East and West buildings of Toyohashi City Hall using the analysis tool developed by the author, STERA_3D, with ground motion waveform predictions for the Nankai Trough earthquake provided by the Ministry of Land, Infrastructure, Transport and Tourism. As a result, it was found that the buildings have sufficient earthquake resistance. It turned out, however, that the west building is at risk of wall cracks or ceiling panel collapse, while in the east building people would not be able to stand through strong shaking of intensity 7 on the seismic intensity scale, and cabinets not secured to the floors or walls would fall over. Additionally, three IT strong-motion seismometers were installed in the city hall to continuously monitor vibrations. Every five minutes, the vibration data obtained by the seismometers are sent to computers at Toyohashi University of Technology via the Internet, where the analysis tools run simulations in the cloud. 
If an earthquake strikes, it is able to use the results

  13. Tsunamigenic earthquake simulations using experimentally derived friction laws

    Science.gov (United States)

    Murphy, S.; Di Toro, G.; Romano, F.; Scala, A.; Lorito, S.; Spagnuolo, E.; Aretusini, S.; Festa, G.; Piatanesi, A.; Nielsen, S.

    2018-03-01

    Seismological, tsunami and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth as a result of different pre-stress and frictional conditions. A wealth of earthquakes of different sizes and different source features (e.g. rupture duration) can be generated in subduction zones, including tsunami earthquakes, some of which can produce extreme tsunamigenic events. Here, we offer a geological perspective principally accounting for depth-dependent frictional conditions, while adopting a simplified distribution of on-fault tectonic pre-stress. We combine a lithology-controlled, depth-dependent experimental friction law with 2D elastodynamic rupture simulations for a Tohoku-like subduction zone cross-section. Subduction zone fault rocks are dominantly incohesive and clay-rich near the surface, transitioning to cohesive and more crystalline at depth. By randomly shifting along fault dip the location of the high shear stress regions ("asperities"), moderate to great thrust earthquakes and tsunami earthquakes are produced that are quite consistent with seismological, geodetic, and tsunami observations. As an effect of depth-dependent friction in our model, slip is confined to the high stress asperity at depth; near the surface rupture is impeded by the rock-clay transition constraining slip to the clay-rich layer. However, when the high stress asperity is located in the clay-to-crystalline rock transition, great thrust earthquakes can be generated similar to the Mw 9 Tohoku (2011) earthquake.

  14. Bayesian probabilistic network approach for managing earthquake risks of cities

    DEFF Research Database (Denmark)

    Bayraktarli, Yahya; Faber, Michael

    2011-01-01

    This paper considers the application of Bayesian probabilistic networks (BPNs) to large-scale risk based decision making in regard to earthquake risks. A recently developed risk management framework is outlined which utilises Bayesian probabilistic modelling, generic indicator based risk models...... and a fourth module on the consequences of an earthquake. Each of these modules is integrated into a BPN. Special attention is given to aggregated risk, i.e. the risk contribution from assets at multiple locations in a city subjected to the same earthquake. The application of the methodology is illustrated...... on an example considering a portfolio of reinforced concrete structures in a city located close to the western part of the North Anatolian Fault in Turkey....

  15. Incorporating Low-Cost Seismometers into the Central Weather Bureau Seismic Network for Earthquake Early Warning in Taiwan

    Directory of Open Access Journals (Sweden)

    Da-Yi Chen

    2015-01-01

    Full Text Available A dense seismic network can increase the capability of an Earthquake Early Warning (EEW) system to estimate earthquake information with higher accuracy. It is also critical for generating fast, robust earthquake alarms before strong ground shaking hits the target area. However, building a dense seismic network with traditional seismometers is too expensive and may not be practical. Using low-cost Micro-Electro-Mechanical System (MEMS) accelerometers is a potential solution for quickly deploying a large number of sensors around the monitored region. An EEW system constructed using a dense seismic network with 543 MEMS sensors in Taiwan is presented. The system also incorporates the official seismic network of the Central Weather Bureau (CWB). The real-time data streams generated by the two networks are integrated using the Earthworm software. This paper illustrates the methods used by the integrated system for estimating earthquake information and evaluates the system performance. We applied the Earthworm picker to the seismograms recorded by the MEMS sensors (Chen et al. 2015), following new picking constraints to accurately detect P-wave arrivals, and use a new regression equation for estimating earthquake magnitudes. An off-line test was implemented using 46 earthquakes with magnitudes ranging from ML 4.5 to 6.5 to calibrate the system. The experimental results show that the integrated system produces stable source parameter estimates and issues alarms much faster than the current system run by the CWB seismic network (CWBSN).
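EEW magnitude regressions of this kind are commonly built on the initial P-wave peak displacement (Pd) and hypocentral distance. The sketch below shows the functional form only; the coefficients are illustrative placeholders, not the calibrated values of the new regression equation mentioned above:

```python
import math

def estimate_magnitude(pd_cm, dist_km, c1=5.0, c2=1.3, c3=1.4):
    """Pd-based magnitude estimate, M = c1 + c2*log10(Pd) + c3*log10(R).
    The coefficients are illustrative placeholders, not CWB-calibrated values."""
    return c1 + c2 * math.log10(pd_cm) + c3 * math.log10(dist_km)

# A larger initial P-wave displacement at the same distance implies a larger event.
print(estimate_magnitude(0.1, 50.0), estimate_magnitude(1.0, 50.0))
```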

  16. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu; Duan, Benchun; Taylor, Valerie

    2011-01-01

    , such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over past several decades. In particular

  17. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN; Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D; Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake was chosen as the target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We followed the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation on the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  18. Earthquake location determination using data from DOMERAPI and BMKG seismic networks: A preliminary result of DOMERAPI project

    Energy Technology Data Exchange (ETDEWEB)

    Ramdhan, Mohamad [Study Program of Earth Science, Institut Teknologi Bandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia); Agency for Meteorology, Climatology and Geophysics of Indonesia (BMKG) Jl. Angkasa 1 No. 2 Kemayoran, Jakarta Pusat, 10720 (Indonesia); Nugraha, Andri Dian; Widiyantoro, Sri [Global Geophysics Research Group, Faculty of Mining and Petroleum Engineering, Institut TeknologiBandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia); Métaxian, Jean-Philippe [Institut de Recherche pour le Développement (IRD) (France); Valencia, Ayunda Aulia, E-mail: mohamad.ramdhan@bmkg.go.id [Study Program of Geophysical Engineering, Institut Teknologi Bandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia)

    2015-04-24

    The DOMERAPI project has been conducted to comprehensively study the internal structure of Merapi volcano, especially the deep structural features beneath the volcano. The DOMERAPI earthquake monitoring network consists of 46 broadband seismometers installed around the Merapi volcano. Earthquake hypocenter determination is a very important step for further studies, such as hypocenter relocation and seismic tomographic imaging. Ray paths from earthquake events occurring outside the Merapi region can be utilized to delineate the deep magma structure. Earthquakes occurring outside the DOMERAPI seismic network will produce an azimuthal gap greater than 180°. Owing to this situation, stations from the BMKG seismic network can be used jointly to minimize the azimuthal gap. We identified earthquake events manually and carefully, and then picked arrival times of P and S waves. The data from the DOMERAPI seismic network were combined with the BMKG data catalogue to determine earthquake events outside the Merapi region. For future work, we will also use the BPPTKG (Center for Research and Development of Geological Disaster Technology) data catalogue in order to study shallow structures beneath the Merapi volcano. The application of all data catalogues will provide good information as input for further advanced studies and volcano hazard mitigation.
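The azimuthal-gap criterion used above is straightforward to compute: sort the station azimuths seen from the epicenter and take the largest gap between neighbors, including the wraparound through north. A minimal sketch:

```python
def azimuthal_gap(azimuths_deg):
    """Largest angular gap (degrees) between consecutive station azimuths as
    seen from the epicenter, including the wraparound across 360/0 degrees."""
    az = sorted(a % 360.0 for a in azimuths_deg)
    gaps = [az[i + 1] - az[i] for i in range(len(az) - 1)]
    gaps.append(360.0 - az[-1] + az[0])  # wraparound gap
    return max(gaps)

# An event outside the network leaves all stations on one side: gap > 180 deg.
print(azimuthal_gap([10, 100, 190]))     # 180.0
print(azimuthal_gap([0, 90, 180, 270]))  # 90.0
```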

  19. Social Media as Seismic Networks for the Earthquake Damage Assessment

    Science.gov (United States)

    Meletti, C.; Cresci, S.; La Polla, M. N.; Marchetti, A.; Tesconi, M.

    2014-12-01

    The growing popularity of online platforms based on user-generated content is gradually creating a digital world that mirrors the physical world. In the paradigm of crowdsensing, the crowd becomes a distributed network of sensors that allows us to understand real-life events at a quasi-real-time rate. The SoS-Social Sensing project [http://socialsensing.it/] exploits opportunistic crowdsensing, involving users in the sensing process in a minimal way, for social media emergency management, in order to obtain a very fast but still reliable assessment of the dimension of the emergency to face. First, we designed and implemented a decision support system for the detection and damage assessment of earthquakes. Our system exploits the messages shared in real time on Twitter. In the detection phase, data mining and natural language processing techniques are first adopted to select meaningful and comprehensive sets of tweets. We then applied a burst detection algorithm in order to promptly identify outbreaking seismic events. Using georeferenced tweets and reported locality names, a rough epicentral determination is also possible. The results, compared to official Italian INGV reports, show that the system is able to detect, within seconds, events of a magnitude in the region of 3.5 with a precision of 75% and a recall of 81.82%. We then focused our attention on the damage assessment phase. We investigated the possibility of exploiting social media data to estimate earthquake intensity. We designed a set of predictive linear models and evaluated their ability to map the intensity of worldwide earthquakes. The models build on a dataset of almost 5 million tweets exploited to compute our earthquake features, and more than 7,000 globally distributed earthquakes, acquired in a semi-automatic way from USGS, serving as ground truth. We extracted 45 distinct features falling into four categories: profile, tweet, time and linguistic. We run diagnostic tests and
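A burst detection step of the kind used in the detection phase can be sketched generically. The rule below, a simple trailing-window z-score test, is an assumption for illustration, not the SoS project's actual algorithm:

```python
import statistics

def detect_bursts(counts, window=10, k=3.0):
    """Flag indices whose count exceeds mean + k*stdev of the trailing window."""
    bursts = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history) or 1.0  # guard against a flat history
        if counts[i] > mu + k * sigma:
            bursts.append(i)
    return bursts

# Flat background of ~5 tweets per minute with a spike at minute 15.
series = [5, 6, 5, 4, 5, 6, 5, 5, 4, 6, 5, 5, 6, 4, 5, 80, 6, 5]
print(detect_bursts(series))  # [15]
```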

  20. A study on generation of simulated earthquake ground motion for seismic design of nuclear power plant

    International Nuclear Information System (INIS)

    Ichiki, Tadaharu; Matsumoto, Takuji; Kitada, Yoshio; Osaki, Yorihiko; Kanda, Jun; Masao, Toru.

    1985-01-01

    The aseismatic design of nuclear power generation facilities in Japan must at present conform to the ''Guideline for aseismatic design examination regarding power reactor facilities'' decided by the Atomic Energy Commission in 1978. In this guideline, the earthquake motion used for the analysis of dynamic earthquake response is to be given in the form of a magnitude, determined on the basis of the investigation of historical earthquakes and active faults around construction sites, and response spectra corresponding to the distance from epicenters. Accordingly, when the analysis of dynamic earthquake response is actually carried out, simulated earthquake motion generated in conformity with these specified response spectra is used as the design input earthquake motion. This research was carried out to establish techniques for generating simulated earthquake motion that are more appropriate and rational from an engineering viewpoint, and the results are summarized in this paper. The techniques for generating simulated earthquake motion, the response of buildings and the response spectra of floors are described. (Kako, I.)
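A simulated motion of this kind is judged by how well its response spectrum matches the specified target. As a self-contained sketch (illustrative parameters, not the guideline's procedure), the code below computes the pseudo-spectral acceleration of a single-degree-of-freedom oscillator with the central-difference method and checks the short-period limit, where Sa approaches the peak ground acceleration:

```python
import math

def pseudo_sa(accel, dt, period, zeta=0.05):
    """Pseudo-spectral acceleration of an elastic SDOF oscillator, integrated
    with the central-difference method (relative-displacement formulation)."""
    w = 2.0 * math.pi / period
    k, c = w * w, 2.0 * zeta * w          # unit mass: stiffness and damping
    a_plus = 1.0 / dt**2 + c / (2.0 * dt)
    a_minus = 1.0 / dt**2 - c / (2.0 * dt)
    u_prev = u = u_max = 0.0
    for ag in accel:                      # ground acceleration samples
        u_next = (-ag - (k - 2.0 / dt**2) * u - a_minus * u_prev) / a_plus
        u_max = max(u_max, abs(u_next))
        u_prev, u = u, u_next
    return k * u_max                      # Sa = omega^2 * max|u|

dt = 0.001
ag = [math.sin(2.0 * math.pi * n * dt) for n in range(2000)]  # 1 Hz motion, PGA = 1
sa_stiff = pseudo_sa(ag, dt, period=0.02)
print(sa_stiff)  # approaches the PGA (1.0) in the short-period limit
```

Evaluating this over a grid of periods yields the response spectrum that a simulated motion must match to the target spectrum.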

  1. A comparison among observations and earthquake simulator results for the allcal2 California fault model

    Science.gov (United States)

    Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak

    2012-01-01

    In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately the ∼100‐year instrumental, several-100‐year historical, and few-1000‐year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators' realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault

  2. Data quality of seismic records from the Tohoku, Japan earthquake as recorded across the Albuquerque Seismological Laboratory networks

    Science.gov (United States)

    Ringler, A.T.; Gee, L.S.; Marshall, B.; Hutt, C.R.; Storm, T.

    2012-01-01

    Great earthquakes recorded across modern digital seismographic networks, such as the recent Tohoku, Japan, earthquake on 11 March 2011 (Mw = 9.0), provide unique datasets that ultimately lead to a better understanding of the Earth's structure (e.g., Pesicek et al. 2008) and earthquake sources (e.g., Ammon et al. 2011). For network operators, such events provide the opportunity to look at the performance across their entire network using a single event, as the ground motion records from the event will be well above every station's noise floor.

  3. Numerical simulation of the 1976 Ms7.8 Tangshan Earthquake

    Science.gov (United States)

    Li, Zhengbo; Chen, Xiaofei

    2017-04-01

An Ms 7.8 earthquake struck Tangshan in 1976, causing more than 240,000 deaths and destroying almost the whole city. Numerous studies indicated that the surface rupture zone extends 8 to 11 km south of Tangshan City. The fault system is composed of more than ten NE-trending, right-lateral strike-slip, left-stepping echelon faults, with a general strike of N30°E. However, recent work has proposed that the surface ruptures appeared over a larger area. To simulate a rupture process closer to the real situation, the curvilinear-grid finite-difference method of Zhang et al. (2006, 2014), which can handle the free surface and complex geometry, was implemented to investigate the dynamic rupture and ground motion of the Tangshan earthquake. With data from field surveys, seismic sections, boreholes, and trenching results given by different studies, several fault geometry models were established. The intensity, seismic waveforms, and displacements resulting from the simulation of different models were compared with the observed data. The comparison of these models reveals details of the rupture process of the Tangshan earthquake and implies that super-shear rupture may have occurred, which is important for a better understanding of this complicated rupture process and of the seismic hazard distribution of this earthquake.

  4. ConvNetQuake: Convolutional Neural Network for Earthquake Detection and Location

    Science.gov (United States)

    Denolle, M.; Perol, T.; Gharbi, M.

    2017-12-01

Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. In this work, we leverage recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for probabilistic earthquake detection and location from single stations. We apply our technique to study two years of induced seismicity in Oklahoma (USA). We detect 20 times more earthquakes than previously cataloged by the Oklahoma Geological Survey, and our algorithm is at least one order of magnitude faster than other established detection methods.
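The single-station classification idea can be sketched with a toy, untrained stand-in. This is not the authors' ConvNetQuake architecture; the filter count, filter length, window size, and class count below are made up for illustration. The structure is the familiar one: a 1-D convolution over a waveform window, ReLU, global average pooling, and a softmax over classes (class 0 = noise, remaining classes = geographic source clusters).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    # Valid 1-D cross-correlation of a single-channel waveform with each kernel
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)
    return windows @ kernels.T  # shape: (len(x) - k + 1, n_kernels)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_window(x, kernels, w, b):
    """Toy detector head: conv -> ReLU -> global average pool -> linear -> softmax."""
    h = np.maximum(conv1d(x, kernels), 0.0)  # ReLU feature maps
    pooled = h.mean(axis=0)                  # global average pooling over time
    return softmax(pooled @ w + b)           # class probabilities

# Random (untrained) parameters, purely illustrative
kernels = rng.standard_normal((8, 16)) * 0.1  # 8 filters of length 16
w = rng.standard_normal((8, 4)) * 0.1         # 4 output classes
b = np.zeros(4)

probs = classify_window(rng.standard_normal(1000), kernels, w, b)
print(probs)  # probabilities over {noise, cluster 1, cluster 2, cluster 3}
```

A trained version would slide this window over continuous records and flag windows whose non-noise probability exceeds a threshold.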

  5. Earthquake disaster simulation of civil infrastructures from tall buildings to urban areas

    CERN Document Server

    Lu, Xinzheng

    2017-01-01

Based on more than 12 years of systematic investigation of earthquake disaster simulation of civil infrastructures, this book covers the major research outcomes, including a number of novel computational models, high-performance computing methods, and realistic visualization techniques for tall buildings and urban areas, with particular emphasis on collapse prevention and mitigation in extreme earthquakes, earthquake loss evaluation, and seismic resilience. Typical engineering applications to several of the tallest buildings in the world (e.g., the 632 m tall Shanghai Tower and the 528 m tall Z15 Tower) and selected large cities in China (the Beijing Central Business District, Xi'an City, Taiyuan City and Tangshan City) are also introduced to demonstrate the advantages of the proposed computational models and techniques. The high-fidelity computational model developed in this book has proven to be the only feasible option to date for earthquake-induced collapse simulation of supertall buildings that are higher than 50...

  6. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro; Mai, Paul Martin; Yasuda, Tomohiro; Mori, Nobuhito

    2014-01-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  8. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep-drilling, and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude, rare events) are manifestations of the complex behavior of the lithosphere, which is structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of these extreme events comes from observations, measurements, and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity; allow studying extreme events and the influence of fault-network properties on seismic patterns and seismic cycles; and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for mitigating earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

  9. Recognition of underground nuclear explosion and natural earthquake based on neural network

    International Nuclear Information System (INIS)

    Yang Hong; Jia Weimin

    2000-01-01

Many features can be extracted to improve the identification rate and reliability of discriminating underground nuclear explosions from natural earthquakes, but how to combine these features is the key problem in pattern recognition. Based on an improved Delta algorithm, features of underground nuclear explosions and natural earthquakes are fed into a BP neural network, and membership functions are constructed to interpret the output values. The identification rate reaches 92.0%, which shows that the approach is feasible.
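The feature-fusion step can be illustrated with a minimal stand-in: this is not the paper's improved Delta algorithm or its BP network, and the two features and their cluster means below are invented. A single-layer classifier trained by gradient descent learns to separate synthetic "explosion" and "earthquake" feature vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D feature vectors (e.g. a spectral ratio and an mb:Ms measure);
# the cluster centers and spread are illustrative, not from the paper.
explosions  = rng.normal([1.5, 2.0], 0.3, size=(100, 2))
earthquakes = rng.normal([0.5, 0.8], 0.3, size=(100, 2))
X = np.vstack([explosions, earthquakes])
y = np.array([1] * 100 + [0] * 100)  # 1 = explosion, 0 = earthquake

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a single-layer classifier by gradient descent (a stand-in for BP training)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2%}")
```

A BP network adds one or more hidden layers to the same pipeline, which is what lets it fuse features with non-linear decision boundaries.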

  10. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  11. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    Science.gov (United States)

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  13. Application of geostatistical simulation to compile seismotectonic provinces based on earthquake databases (case study: Iran)

    Science.gov (United States)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-04-01

This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning-bands simulation (TBSIM), was applied to generate synthetic data that augment incomplete earthquake catalogues. The synthetic data were then added to the traditional information to study seismicity homogeneity and to classify areas according to tectonic and seismic properties in order to update the seismotectonic provinces. In this paper, (i) the different magnitude types in the studied catalogues were homogenized to moment magnitude (Mw), and earthquake declustering was carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression from 0.4 to 1.4 decimal degrees, with the maximum range identified at an azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied to generate simulated events, yielding 68,800 synthetic events according to the spatial regression found in several directions; and (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes introduced as very high, high, moderate, and low

  14. Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.

    2017-12-01

SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high-resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: considering the full complexity of subduction zone geometries inevitably leads to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved megathrust and a system of splay faults, as well as the seismic wave field and seafloor displacement, with frequency content up to 2.2 Hz. We validate the scenario against geodetic, seismological, and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.

  15. Building Infrastructure for Preservation and Publication of Earthquake Engineering Research Data

    Directory of Open Access Journals (Sweden)

    Stanislav Pejša

    2014-10-01

The objective of this paper is to showcase the progress of the earthquake engineering community during a decade-long effort supported by the National Science Foundation in the George E. Brown Jr. Network for Earthquake Engineering Simulation (NEES). During the four years that NEES network operations have been headquartered at Purdue University, the NEEScomm management team has facilitated an unprecedented cultural change in the ways research is performed in earthquake engineering. NEES has not only played a major role in advancing the cyberinfrastructure required for transformative engineering research, but NEES research outcomes are making an impact by contributing to safer structures throughout the USA and abroad. This paper reflects on some of the developments and initiatives that helped instil change in the ways that the earthquake engineering and tsunami community share and reuse data and collaborate in general.

  16. Insights into earthquake hazard map performance from shaking history simulations

    Science.gov (United States)

    Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.

    2017-12-01

    Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher
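The simulation idea can be sketched as follows: draw many synthetic shaking histories from a truncated Gutenberg-Richter magnitude distribution and count exceedances of a mapped threshold at a site. The event rate, the magnitude-to-shaking proxy, and the threshold below are invented placeholders, not the authors' model.

```python
import math
import random

random.seed(0)

def gr_magnitude(b=1.0, m_min=4.0, m_max=8.0):
    """Draw a magnitude from a truncated Gutenberg-Richter distribution
    via inverse-CDF sampling."""
    u = random.random()
    beta = b * math.log(10)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * c) / beta

def simulate_history(years, rate_per_year, mapped_threshold):
    """Count simulated events whose shaking at a site exceeds the mapped
    threshold, using a crude magnitude->shaking proxy with aleatory scatter."""
    exceedances = 0
    n_events = sum(1 for _ in range(years) if random.random() < rate_per_year)
    for _ in range(n_events):
        shaking = gr_magnitude() + random.gauss(0, 0.5)
        if shaking > mapped_threshold:
            exceedances += 1
    return exceedances

# Many 50-year histories: the mean exceedance count is stable,
# but individual histories scatter widely, as the abstract describes.
histories = [simulate_history(50, 0.2, 6.5) for _ in range(1000)]
print(min(histories), sum(histories) / len(histories), max(histories))
```

Repeating the experiment with longer observation windows shrinks the scatter between histories, which is the verification behavior described above.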

  17. Stochastic strong ground motion simulations for the intermediate-depth earthquakes of the south Aegean subduction zone

    Science.gov (United States)

    Kkallas, Harris; Papazachos, Konstantinos; Boore, David; Margaris, Vasilis

    2015-04-01

We have employed the stochastic finite-fault modelling approach of Motazedian and Atkinson (2005), as described by Boore (2009), to simulate Fourier spectra of intermediate-depth earthquakes of the south Aegean subduction zone. The stochastic finite-fault method is a practical tool for simulating ground motions of future earthquakes, and it requires region-specific source, path, and site characterizations as input model parameters. For this reason we have used data from both acceleration-sensor and broadband velocity-sensor instruments from intermediate-depth earthquakes with magnitudes of M 4.5-6.7 that occurred in the south Aegean subduction zone. Source mechanisms for intermediate-depth events of the south Aegean subduction zone are either collected from published information or constrained using the main faulting types from Kkallas et al. (2013). The attenuation parameters for the simulations were adopted from Skarlatoudis et al. (2013) and are based on regression analysis of a response-spectra database. The site amplification functions for each soil class were adopted from Klimis et al. (1999), while the kappa values were constrained from the analysis of the EGELADOS network data by Ventouzi et al. (2013). The investigation of stress-drop values was based on simulations performed with the EXSIM code for several ranges of stress-drop values and on comparing the results with the available Fourier spectra of intermediate-depth earthquakes. Significant differences in strong-motion duration, determined from Husid plots (Husid, 1969), have been identified between the fore-arc and along-arc stations due to the effect of the low-velocity/low-Q mantle wedge on seismic wave propagation. In order to estimate appropriate values for the duration of P waves, we have automatically picked P-S durations on the available seismograms. For the S-wave durations we have used the part of the seismograms starting from the S arrivals and ending at the
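The core of the stochastic method, in its simplest point-source form (after Boore), is to shape windowed Gaussian noise by a target Fourier amplitude spectrum. The sketch below uses a Brune omega-squared source spectrum with invented moment, stress-drop, and duration values, and omits the path, site, and kappa terms discussed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

def brune_spectrum(freqs, m0=1e16, stress_drop=3e6, beta=3500.0):
    """Omega-squared (Brune) source acceleration spectrum in SI units;
    fc = 0.49 * beta * (stress_drop / m0)**(1/3). Parameter values are
    illustrative only."""
    fc = 0.49 * beta * (stress_drop / m0) ** (1.0 / 3.0)  # corner frequency (Hz)
    moment_spec = m0 / (1.0 + (freqs / fc) ** 2)          # displacement spectrum
    return (2 * np.pi * freqs) ** 2 * moment_spec         # -> acceleration

def stochastic_acceleration(duration=20.0, dt=0.01):
    """Point-source stochastic simulation: window white noise in the time
    domain, then shape its Fourier amplitude by the target spectrum."""
    n = int(duration / dt)
    t = np.arange(n) * dt
    window = t * np.exp(-t / (0.2 * duration))  # simple Saragoni-Hart-like shape
    noise = rng.standard_normal(n) * window
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, dt)
    shaped = spec * brune_spectrum(np.maximum(freqs, freqs[1]))  # avoid f = 0
    acc = np.fft.irfft(shaped, n)
    return t, acc / np.abs(acc).max()  # normalized trace

t, acc = stochastic_acceleration()
```

The finite-fault extension (EXSIM) sums such subsource contributions over a discretized fault with appropriate delays and scaling.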

  18. Swedish National Seismic Network (SNSN). A short report on recorded earthquakes during the fourth quarter of the year 2010

    Energy Technology Data Exchange (ETDEWEB)

    Boedvarsson, Reynir (Uppsala Univ. (Sweden), Dept. of Earth Sciences)

    2011-01-15

According to an agreement between the Swedish Nuclear Fuel and Waste Management Company (SKB) and Uppsala University, the Department of Earth Sciences has continued to carry out observations of seismic events at seismic stations within the Swedish National Seismic Network (SNSN). This short report gives brief information about the recorded seismicity during October through December 2010. The Swedish National Seismic Network consists of 62 stations. During October through December, 2,241 events were located, of which 158 are estimated to be real earthquakes, 1,457 are estimated to be explosions, 444 are induced earthquakes in the vicinity of the mines in Kiruna and Malmberget, and 182 events are still considered uncertain; these are most likely explosions and are mainly located outside the network. One earthquake had a magnitude above M_L = 2.0 during the period. In November an earthquake was located 13 km SW of Haernoesand with a magnitude of M_L = 2.1. The largest earthquake in October had a magnitude of M_L = 1.7 and was located 12 km NE of Eksjoe, and in December an earthquake with a magnitude of M_L = 1.8 was located 19 km north of Motala

  19. The Quake-Catcher Network: Improving Earthquake Strong Motion Observations Through Community Engagement

    Science.gov (United States)

    Cochran, E. S.; Lawrence, J. F.; Christensen, C. M.; Chung, A. I.; Neighbors, C.; Saltzman, J.

    2010-12-01

The Quake-Catcher Network (QCN) involves the community in strong-motion data collection by utilizing volunteer computing techniques and low-cost MEMS accelerometers. Volunteer computing provides a mechanism to expand strong-motion seismology with minimal infrastructure costs, while promoting community participation in science. Micro-Electro-Mechanical Systems (MEMS) triaxial accelerometers can be attached to a desktop computer via USB and are internal to many laptops. Preliminary shake-table tests show that the MEMS accelerometers can record high-quality seismic data with instrument response similar to research-grade strong-motion sensors. QCN began distributing sensors and software to K-12 schools and the general public in April 2008 and has grown to roughly 1500 stations worldwide. We also recently tested whether sensors could be quickly deployed as part of a Rapid Aftershock Mobilization Program (RAMP) following the 2010 M8.8 Maule, Chile earthquake. Volunteers are recruited through media reports, web-based sensor request forms, and social networking sites. Using data collected to date, we examine whether a distributed sensing network can provide valuable seismic data for earthquake detection and characterization while promoting community participation in earthquake science. We utilize client-side triggering algorithms to determine when significant ground shaking occurs, and this metadata is sent to the main QCN server. On average, trigger metadata are received within 1-10 seconds of the observation of a trigger; the larger data latencies are correlated with greater server-station distances. When triggers are detected, we determine whether they correlate with others in the network using spatial and temporal clustering of incoming trigger information. If a minimum number of triggers is detected, a QCN event is declared and an initial earthquake location and magnitude are estimated. Initial analysis suggests that the estimated locations and magnitudes are
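The trigger-association step can be sketched in a few lines; the time window, distance box, and minimum station count below are illustrative placeholders, not QCN's actual thresholds:

```python
from dataclasses import dataclass

@dataclass
class Trigger:
    station_id: int
    time: float  # seconds
    lat: float
    lon: float

def declare_event(triggers, window_s=10.0, max_deg=1.0, min_stations=4):
    """Toy spatiotemporal association: if triggers from enough distinct
    stations fall within a short time window and a small geographic box,
    declare a candidate event located at the trigger centroid."""
    triggers = sorted(triggers, key=lambda tr: tr.time)
    for i, anchor in enumerate(triggers):
        cluster = [tr for tr in triggers[i:]
                   if tr.time - anchor.time <= window_s
                   and abs(tr.lat - anchor.lat) <= max_deg
                   and abs(tr.lon - anchor.lon) <= max_deg]
        stations = {tr.station_id for tr in cluster}
        if len(stations) >= min_stations:
            lat = sum(tr.lat for tr in cluster) / len(cluster)
            lon = sum(tr.lon for tr in cluster) / len(cluster)
            return {"time": anchor.time, "lat": lat, "lon": lon,
                    "n_stations": len(stations)}
    return None

# Four nearby stations triggering within a few seconds -> event declared
demo = [Trigger(k, 100.0 + 0.5 * k, 35.0 + 0.01 * k, -120.0) for k in range(4)]
event = declare_event(demo)
print(event["n_stations"])  # 4
```

A magnitude estimate would then be derived from the peak amplitudes reported alongside each trigger.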

  20. The Great Maule earthquake: seismicity prior to and after the main shock from amphibious seismic networks

    Science.gov (United States)

    Lieser, K.; Arroyo, I. G.; Grevemeyer, I.; Flueh, E. R.; Lange, D.; Tilmann, F. J.

    2013-12-01

The Chilean subduction zone is among the most seismically active plate boundaries in the world, and its coastal ranges suffer a magnitude 8 or larger megathrust earthquake every 10-20 years. The Constitución-Concepción or Maule segment in central Chile between ~35.5°S and 37°S was considered a mature seismic gap, having last ruptured in 1835 and being seismically quiet, with no magnitude 4.5 or larger earthquakes reported in global catalogues. It is located to the north of the nucleation area of the 1960 magnitude 9.5 Valdivia earthquake and to the south of the 1928 magnitude 8 Talca earthquake. On 27 February 2010 this segment ruptured in a Mw = 8.8 earthquake, nucleating near 36°S and affecting a 500-600 km long segment of the margin between 34°S and 38.5°S. Aftershocks occurred along a roughly 600 km long portion of the central Chilean margin, most of them offshore. Therefore, a network of 30 ocean-bottom seismometers was deployed in the northern portion of the rupture area for a three-month period, recording local offshore aftershocks between 20 September 2010 and 25 December 2010. In addition, data from a network of 33 land stations of the GeoForschungsZentrum Potsdam were included, providing ideal coverage of both the rupture plane and the areas affected by post-seismic slip as deduced from geodetic data. Aftershock locations are based on automatically detected P-wave onsets and a 2.5D velocity model of the combined on- and offshore network. Aftershock seismicity analysis in the northern part of the survey area reveals a well-resolved, seismically active splay fault in the accretionary prism of the Chilean forearc. Our findings imply that in the northernmost part of the rupture zone, co-seismic slip most likely propagated along the splay fault and not the subduction thrust fault. In addition, the updip limit of aftershocks along the plate interface can be verified to about 40 km landwards of the deformation front. Prior to

  1. On the reliability of Quake-Catcher Network earthquake detections

    Science.gov (United States)

    Yildirim, Battalgazi; Cochran, Elizabeth S.; Chung, Angela I.; Christensen, Carl M.; Lawrence, Jesse F.

    2015-01-01

Over the past two decades, there have been several initiatives to create volunteer-based seismic networks. The Personal Seismic Network, proposed around 1990, used a short-period seismograph to record earthquake waveforms over existing phone lines (Cranswick and Banfill, 1990; Cranswick et al., 1993). NetQuakes (Luetgert et al., 2010) deploys triaxial Micro-Electromechanical Systems (MEMS) sensors in private homes, businesses, and public buildings where there is an Internet connection. Other seismic networks using a dense array of low-cost MEMS sensors are the Community Seismic Network (Clayton et al., 2012; Kohler et al., 2013) and the Home Seismometer Network (Horiuchi et al., 2009). One main advantage of combining low-cost MEMS sensors with existing Internet connections in public and private buildings, compared with traditional networks, is the reduction in installation and maintenance costs (Koide et al., 2006). In doing so, it is possible to create a dense seismic network for a fraction of the cost of traditional seismic networks (D'Alessandro and D'Anna, 2013; D'Alessandro, 2014; D'Alessandro et al., 2014).

  2. e-Science on Earthquake Disaster Mitigation by EUAsiaGrid

    Science.gov (United States)

    Yen, Eric; Lin, Simon; Chen, Hsin-Yen; Chao, Li; Huang, Bor-Shoh; Liang, Wen-Tzong

    2010-05-01

Although earthquakes are not predictable at this moment, with the aid of accurate seismic wave propagation analysis we can simulate the potential hazards at all distances from possible fault sources by understanding the source rupture process during large earthquakes. With the integration of a strong ground-motion sensor network, an earthquake data center, and seismic wave propagation analysis over the gLite e-Science infrastructure, we can gain much better knowledge of the impact and vulnerability associated with potential earthquake hazards. This application also demonstrates an e-Science way to investigate unknown Earth structure. Regional integration of earthquake sensor networks can aid fast event reporting and accurate event data collection. Federation of earthquake data centers entails consolidation and sharing of seismological and geological knowledge. Capability building in seismic wave propagation analysis implies the predictability of potential hazard impacts. With the gLite infrastructure and the EUAsiaGrid collaboration framework, earth scientists from Taiwan, Vietnam, the Philippines, and Thailand are working together to alleviate potential seismic threats by making use of Grid technologies, and to support seismology research by e-Science. A cross-continental e-infrastructure, based on EGEE and EUAsiaGrid, has been established for seismic wave forward simulation and risk estimation. Both the computing challenge of seismic wave analysis among 5 European and Asian partners and the data challenge of data center federation have been exercised and verified. A Seismogram-on-Demand service has also been developed for the automatic generation of a seismogram at any sensor point for a specific epicenter. To ease access to all the services based on user workflows and to retain maximal flexibility, a Seismology Science Gateway integrating data, computation, workflows, services, and user communities will be implemented based on typical use cases. In the future, extension of the

  3. Slip reactivation model for the 2011 Mw9 Tohoku earthquake: Dynamic rupture, sea floor displacements and tsunami simulations

    Science.gov (United States)

    Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.

    2014-12-01

    The 2011 Mw 9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-NET and KiK-net) revealed complex ground motion patterns attributed to source effects, allowing detailed information about the rupture process to be captured. The seismic stations surrounding the Miyagi region (MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong motion data by Lee et al. (2011). In this model, two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid pressurization effects. Kanamori & Heaton (2002) argued that during faulting in large earthquakes the temperature rises high enough to cause melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on slip-weakening friction with two sudden, sequential stress drops. Our model starts like an M7-8 earthquake that barely breaks the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea floor displacement reaches 8-10 meters of

  4. Comparison of SISEC code simulations with earthquake data of ordinary and base-isolated buildings

    International Nuclear Information System (INIS)

    Wang, C.Y.; Gvildys, J.

    1991-01-01

    At Argonne National Laboratory (ANL), a 3-D computer program, SISEC (Seismic Isolation System Evaluation Code), is being developed for simulating the system response of isolated and ordinary structures (Wang et al. 1991). This paper describes comparisons of SISEC code simulations with building response data from actual earthquakes. To ensure the accuracy of the analytical simulations, recorded data from full-size reinforced concrete structures located in Sendai, Japan are used in this benchmark comparison. The test structures consist of two three-story buildings, one base-isolated and the other conventionally founded. They were constructed side by side to investigate the effect of base isolation on the acceleration response. Among the 20 earthquakes observed since April 1989, complete records of three representative earthquakes, no. 2, no. 6, and no. 17, are used for the code validation presented in this paper. Correlations of observed and calculated accelerations at all instrument locations are made. The relative response characteristics of the ordinary and isolated building structures are also investigated. (J.P.N.)

  5. Packet Tracer network simulator

    CERN Document Server

    Jesin, A

    2014-01-01

    A practical, fast-paced guide that gives you all the information you need to successfully create networks and simulate them using Packet Tracer. Packet Tracer Network Simulator is aimed at students, instructors, and network administrators who wish to use this simulator to learn how to perform networking instead of investing in expensive, specialized hardware. This book assumes that you have a good amount of Cisco networking knowledge, and it focuses more on Packet Tracer than on networking.

  6. Simulation of scenario earthquake influenced field by using GIS

    Science.gov (United States)

    Zuo, Hui-Qiang; Xie, Li-Li; Borcherdt, R. D.

    1999-07-01

    The method for estimating site effects on ground motion specified by Borcherdt (1994a, 1994b) is briefly introduced in this paper. This method, together with detailed geological data and site classification data for the San Francisco Bay area of California, United States, is applied to simulate the influenced field of a scenario earthquake using GIS technology, and software for the simulation has been developed. The paper is a partial result of a cooperative research project between the China Seismological Bureau and the US Geological Survey.
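
    Borcherdt-type site factors have a simple closed form: amplification scales as the ratio of a reference shear-wave velocity to the site velocity, raised to an empirical exponent. A minimal sketch, assuming illustrative weak-motion values (v_ref = 1050 m/s, ma = 0.35, mv = 0.65) rather than the full set of published, shaking-level-dependent coefficients:

```python
def borcherdt_amplification(vs30, v_ref=1050.0, ma=0.35, mv=0.65):
    """Short- and mid-period amplification factors relative to a firm
    reference site, following the form F = (v_ref / vs30) ** m of
    Borcherdt (1994).  The reference velocity and exponents here are
    illustrative weak-motion values, not the published coefficients
    for every input shaking level."""
    fa = (v_ref / vs30) ** ma   # short-period amplification factor
    fv = (v_ref / vs30) ** mv   # mid-period amplification factor
    return fa, fv

# A soft-soil site amplifies motion; the reference-velocity site does not.
fa_soft, fv_soft = borcherdt_amplification(200.0)    # Vs30 = 200 m/s
fa_rock, fv_rock = borcherdt_amplification(1050.0)   # reference site
```

    With these assumed exponents, a Vs30 = 200 m/s soft-soil site comes out at roughly 1.8x short-period and 2.9x mid-period amplification relative to the 1050 m/s reference.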

  7. Earthquake Monitoring: SeisComp3 at the Swiss National Seismic Network

    Science.gov (United States)

    Clinton, J. F.; Diehl, T.; Cauzzi, C.; Kaestli, P.

    2011-12-01

    The Swiss Seismological Service (SED) has an ongoing responsibility to improve seismicity monitoring capability for Switzerland. This is a crucial issue for a country with low background seismicity but where a large M6+ earthquake is expected in the coming decades. With over 30 stations at a spacing of ~25 km, the SED operates one of the densest broadband networks in the world, complemented by ~50 real-time strong motion stations. The strong motion network is expected to grow by an additional ~80 stations over the next few years. Furthermore, the backbone of the network is complemented by broadband data from surrounding countries and by temporary sub-networks for local monitoring of microseismicity (e.g. at geothermal sites). The variety of seismic monitoring responsibilities, as well as the anticipated densification of the network, demands highly flexible processing software. We are transitioning all software to the SeisComP3 (SC3) framework. SC3 is a fully featured automated real-time earthquake monitoring package developed by GeoForschungsZentrum Potsdam in collaboration with its commercial partner, gempa GmbH. At its core it is open source, and it is becoming a community-standard package for earthquake detection and waveform processing for regional and global networks across the globe. SC3 was originally developed for regional and global rapid monitoring of potentially tsunamigenic earthquakes. In order to fulfill the requirements of a local network recording moderate seismicity, the SED has tuned configurations and added several modules. In this contribution, we present our SC3 implementation strategy, focusing on the detection and identification of seismicity on different scales. We operate several parallel processing "pipelines" to detect and locate local, regional, and global seismicity. Additional pipelines with lower detection thresholds can be defined to monitor seismicity within dense subnets of the network. To be consistent with existing processing

  8. Thermomechanical earthquake cycle simulations with rate-and-state friction and nonlinear viscoelasticity

    Science.gov (United States)

    Allison, K. L.; Dunham, E. M.

    2017-12-01

    We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismic fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence intervals, nucleation depths, and down-dip coseismic slip limits. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature we also consider the effect of heat generation. 
We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the
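
    The power-law rheology described above implies an effective viscosity that depends on both stress and temperature. A sketch of that relation, with generic placeholder flow-law parameters (A, n, Q below are illustrative, not the quartz-diorite or olivine values used in the study):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def effective_viscosity(sigma, T, A=1e-20, n=3.0, Q=2.6e5):
    """Effective viscosity eta = sigma / (2 * strain_rate) for power-law
    creep with strain_rate = A * sigma**n * exp(-Q / (R*T)).
    sigma in Pa, T in K; A [Pa^-n s^-1], n, and Q [J/mol] are generic
    illustrative values, not a calibrated flow law."""
    strain_rate = A * sigma**n * math.exp(-Q / (R * T))
    return sigma / (2.0 * strain_rate)

# Viscosity falls steeply with temperature at fixed stress (50 MPa):
eta_cool = effective_viscosity(50e6, 600.0)
eta_warm = effective_viscosity(50e6, 800.0)
```

    The exponential temperature dependence is what makes the warm-geotherm models so sensitive to interseismic heating: a few hundred kelvin changes the effective viscosity by many orders of magnitude.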

  9. Crowd-Sourced Global Earthquake Early Warning

    Science.gov (United States)

    Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.

    2014-12-01

    Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
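
    The Kalman-filter fusion of raw GNSS displacement with accelerometer data can be illustrated with a 1-D toy filter: the state is [displacement, velocity], acceleration enters as a control input, and GNSS observes displacement. The noise levels below are illustrative, not calibrated to any real consumer sensor:

```python
import numpy as np

def fuse_gnss_accel(gnss_disp, accel, dt, sigma_gnss=0.5, sigma_acc=0.1):
    """Minimal 1-D Kalman filter combining noisy GNSS displacement
    (measurement) with accelerometer data (control input), in the
    spirit of the fusion approach described above."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    B = np.array([0.5 * dt**2, dt])           # acceleration input mapping
    H = np.array([[1.0, 0.0]])                # GNSS observes displacement only
    Q = sigma_acc**2 * np.outer(B, B)         # process noise from accel error
    Rm = np.array([[sigma_gnss**2]])          # GNSS measurement noise
    x = np.zeros(2)                           # [displacement, velocity]
    P = np.eye(2)
    out = []
    for z, a in zip(gnss_disp, accel):
        x = F @ x + B * a                     # predict with measured accel
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)   # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()   # measurement update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

    The accelerometer anchors the short-term motion while the GNSS measurements remove the drift that pure double integration would accumulate, which is how noisy 1 m observations can be turned into cm-level displacement estimates.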

  10. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    Science.gov (United States)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

    Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular for tracking the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broadband stations in an east-west oriented geometry, located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using teleseismic USArray data reveal few details of the rupture process. To overcome the classical resolution limit, we explored the Multiple Signal Classification (MUSIC) method, a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance matrix, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experimented with various synthetic earthquake scenarios to test the resolution and find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interference of coherent Green's functions, which leads to a potential quantification
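
    In its generic narrowband direction-of-arrival form for a uniform linear array, MUSIC can be sketched as follows; the geometry and signal model here are a textbook stand-in, not the authors' seismic implementation:

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for a uniform linear array with sensor
    spacing d in wavelengths.  X is (n_sensors, n_snapshots) complex
    data.  Peaks of the spectrum locate the source directions."""
    n_sensors = X.shape[0]
    Rxx = X @ X.conj().T / X.shape[1]            # sample covariance
    eigval, eigvec = np.linalg.eigh(Rxx)         # eigenvalues ascending
    En = eigvec[:, : n_sensors - n_sources]      # noise subspace
    k = np.arange(n_sensors)
    spec = []
    for th in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * k * np.sin(th))      # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles, np.array(spec)
```

    Because the pseudospectrum diverges wherever a steering vector is orthogonal to the noise subspace, MUSIC separates sources that a conventional beamformer would blur together, which is the property exploited above for the compact Haiti rupture.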

  11. CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking

    Science.gov (United States)

    Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.

    2017-12-01

    We developed an automatic local earthquake detection and phase picking algorithm based on a fully convolutional neural network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded one month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equivalent in scale to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm shows significantly improved performance in terms of detection rate and precision in comparison with the STA/LTA and template matching algorithms.
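
    The STA/LTA detector used to build the training set is simple to sketch: the characteristic function is the ratio of the short-term to long-term average of signal energy, and a pick is declared where it crosses a threshold. The window lengths below are typical local-network choices, not the authors' settings:

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
    """Classic STA/LTA characteristic function on a 1-D trace.
    fs is the sampling rate in Hz; window lengths are in seconds.
    Returns the ratio, defined once a full LTA window is available."""
    sta_n = int(sta_win * fs)
    lta_n = int(lta_win * fs)
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))   # prefix sums of energy
    ratio = np.zeros(len(energy))
    for i in range(lta_n, len(energy)):
        sta = (csum[i + 1] - csum[i + 1 - sta_n]) / sta_n   # short-term mean
        lta = (csum[i + 1] - csum[i + 1 - lta_n]) / lta_n   # long-term mean
        ratio[i] = sta / max(lta, 1e-20)
    return ratio

# Samples where the ratio exceeds a threshold (e.g. 4-5) are candidate triggers.
```

    Running this over continuous data and pairing triggers with template-matching confirmations is one plausible way to assemble labeled P/S windows of the kind the FCN is trained on.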

  12. Tsunami Numerical Simulation for Hypothetical Giant or Great Earthquakes along the Izu-Bonin Trench

    Science.gov (United States)

    Harada, T.; Ishibashi, K.; Satake, K.

    2013-12-01

    We performed tsunami numerical simulations for various giant/great fault models along the Izu-Bonin trench in order to examine the behavior of tsunamis originating in this region and the recurrence pattern of great interplate earthquakes along the Nankai trough off southwest Japan. As a result, large tsunami heights are expected in the Ryukyu Islands and on the Pacific coasts of Kyushu, Shikoku, and western Honshu. The computed large tsunami heights support the hypothesis that the 1605 Keicho Nankai earthquake was not a tsunami earthquake along the Nankai trough but a giant or great earthquake along the Izu-Bonin trench (Ishibashi and Harada, 2013, SSJ Fall Meeting abstract). The Izu-Bonin subduction zone has been regarded as a so-called 'Mariana-type' subduction zone, where M>7 interplate earthquakes do not inherently occur. However, since several M>7 outer-rise earthquakes have occurred in this region, and since the largest slip of the 2011 Tohoku earthquake (M9.0) took place on the shallow plate interface where strain accumulation had been considered small, the possibility of M>8.5 earthquakes in this region may not be negligible. The latest M7.4 outer-rise earthquake off the Bonin Islands on Dec. 22, 2010 produced small tsunamis on the Pacific coast of Japan outside the Tohoku and Hokkaido districts, and a zone of abnormal seismic intensity in the Kanto and Tohoku districts. Ishibashi and Harada (2013) proposed a working hypothesis that the 1605 Keicho earthquake, generally considered a great tsunami earthquake along the Nankai trough, was a giant/great earthquake along the Izu-Bonin trench, based on the similarity of the distributions of ground shaking and tsunami between this event and the 2010 Bonin earthquake. In this study, in order to examine the behavior of tsunamis from giant/great earthquakes along the Izu-Bonin trench and to check Ishibashi and Harada's hypothesis, we performed tsunami numerical simulations from fault models along the Izu-Bonin trench
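
    Tsunami propagation codes of this kind solve the shallow-water equations over bathymetry. A minimal 1-D linear version on a staggered grid illustrates the core mechanics; a real simulation adds 2-D geometry, variable bathymetry, and nonlinear and boundary terms:

```python
import numpy as np

def shallow_water_1d(eta0, depth, dx, dt, n_steps, g=9.81):
    """Linear 1-D shallow-water solver on a staggered grid (surface
    elevation eta at cell centers, velocity u at cell faces), the
    textbook core of tsunami propagation codes.  Closed boundaries
    (u = 0 at both ends); requires sqrt(g*depth)*dt/dx < 1."""
    eta = eta0.copy()
    u = np.zeros(len(eta) + 1)               # face velocities, ends stay 0
    for _ in range(n_steps):
        # momentum: du/dt = -g * d(eta)/dx on interior faces
        u[1:-1] -= g * dt / dx * np.diff(eta)
        # continuity: d(eta)/dt = -depth * du/dx
        eta -= depth * dt / dx * np.diff(u)
    return eta
```

    An initial sea-surface hump splits into two waves traveling at sqrt(g*h); over 4000 m depth that is about 200 m/s, which is why trans-Pacific arrival times from an Izu-Bonin source can be estimated directly from bathymetry.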

  13. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    Science.gov (United States)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As research in quantum key distribution (QKD) technology grows larger and more complex, highly accurate and scalable simulation technologies become important for assessing practical feasibility and foreseeing difficulties in the practical implementation of theoretical achievements. Because a QKD link requires both optical and Internet connections between network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a given network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control, and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of a QKD network simulation module developed for the network simulator 3 (NS-3). The module supports simulation of a QKD network in an overlay mode or in a single TCP/IP mode, and it can therefore also be used to simulate other network technologies independently of QKD.

  14. Remotely Triggered Earthquakes Recorded by EarthScope's Transportable Array and Regional Seismic Networks: A Case Study Of Four Large Earthquakes

    Science.gov (United States)

    Velasco, A. A.; Cerda, I.; Linville, L.; Kilb, D. L.; Pankow, K. L.

    2013-05-01

    Changes in field stress required to trigger earthquakes have been classified in two basic ways: static and dynamic triggering. Static triggering occurs when an earthquake that releases accumulated strain along a fault loads stress onto a nearby fault. Dynamic triggering occurs when an earthquake is induced by the passing of seismic waves from a large mainshock located at least two or more fault lengths away. We investigate details of dynamic triggering using data collected from EarthScope's USArray and regional seismic networks located in the United States. Triggered events are identified using an optimized automated detector based on the ratio of the short-term to long-term average (Antelope software). Following the automated processing, the flagged waveforms are individually analyzed, in both the time and frequency domains, to determine whether the increased detection rates correspond to local earthquakes (i.e., potentially remotely triggered aftershocks). Here, we show results from this automated scheme applied to data from four large but characteristically different earthquakes: Chile (Mw 8.8, 2010), Tohoku-Oki (Mw 9.0, 2011), Baja California (Mw 7.2, 2010), and Wells, Nevada (Mw 6.0, 2008). For each of our four mainshocks, the number of detections within the 10-hour time windows spans a large range (1 to over 200), and statistically >20% of the waveforms show evidence of anomalous signals following the mainshock. The results will help provide a better understanding of the physical mechanisms involved in dynamic earthquake triggering and will help identify zones in the continental U.S. that may be more susceptible to dynamic earthquake triggering.

  15. Effects of earthquake rupture shallowness and local soil conditions on simulated ground motions

    International Nuclear Information System (INIS)

    Apsel, Randy J.; Hadley, David M.; Hart, Robert S.

    1983-03-01

    The paucity of strong ground motion data in the Eastern U.S. (EUS), combined with well-recognized differences in earthquake source depths and wave propagation characteristics between the Eastern and Western U.S. (WUS), suggests that simulation studies will play a key role in assessing earthquake hazard in the East. This report summarizes an extensive simulation study of 5460 components of ground motion, representing a model parameter study over magnitude, distance, source orientation, source depth, and near-surface site conditions for a generic EUS crustal model. The simulation methodology is a hybrid approach to modeling strong ground motion: wave propagation is modeled with an efficient frequency-wavenumber integration algorithm, while the source time function used for each grid element of a modeled fault is empirical, scaled from near-field accelerograms. This study finds that each model parameter has a significant influence on both the shape and amplitude of the simulated response spectra. The combined effect of all parameters predicts a dispersion of response spectral values that is consistent with strong ground motion observations. This study provides guidelines for scaling WUS data from shallow earthquakes to the source depth conditions more typical of the EUS. The modeled site conditions range from very soft soil to hard rock. To the extent that these general site conditions model a specific site, the simulated response spectral information can be used either to correct spectra to a site-specific environment or to compare expected ground motions at different sites. (author)

  16. Long-period ground motions at near-regional distances caused by the PL wave from inland earthquakes: Observation and numerical simulation of the 2004 Mid-Niigata, Japan, Mw6.6 earthquake

    Science.gov (United States)

    Furumura, T.; Kennett, B. L. N.

    2017-12-01

    We examine the development of large, long-period ground motions at near-regional distances (D = 50-200 km) generated by the PL wave from large, shallow inland earthquakes, based on the analysis of strong motion records and finite-difference method (FDM) simulations of seismic wave propagation. The PL wave can be represented as leaking modes of the crustal waveguide and is commonly observed at regional distances of 300 to 1000 km as a dispersed, long-period signal with a dominant period of about 20 s. However, observations of recent earthquakes at the dense K-NET and KiK-net strong motion networks in Japan demonstrate the dominance of the PL wave at near-regional (D = 50-200 km) distances, e.g., for the 2004 Mid-Niigata, Japan, earthquake (Mw 6.6; h = 13 km). The observed PL wave signal between the P and S waves shows a large, dispersed wave packet with a dominant period of about T = 4-10 s and an amplitude comparable to or larger than the later arrivals of the S and surface waves. Thus, the early arrival of the long-period PL wave immediately after the P wave can enhance resonance with large-scale constructions such as high-rise buildings and large oil-storage tanks, with potential for disaster. Such strong effects often occurred during the 2004 Mid-Niigata earthquakes and other large earthquakes near the Kanto (Tokyo) basin. FDM simulation of seismic wave propagation employing realistic 3-D sedimentary structure models demonstrates the process by which the PL wave develops at near-regional distances from shallow crustal earthquakes through constructive interference of the P wave in the long-period band. The amplitude of the PL wave is very sensitive to low-velocity structure near the surface. Lowered velocities help to develop a large SV-to-P conversion and weaken the P-to-SV conversion at the free surface. Both effects enhance the multiple P reflections in the crustal waveguide and prevent the leakage of seismic energy into the mantle. However, a very

  17. Quasi-dynamic versus fully dynamic simulations of earthquakes and aseismic slip with and without enhanced coseismic weakening

    Science.gov (United States)

    Thomas, Marion Y.; Lapusta, Nadia; Noda, Hiroyuki; Avouac, Jean-Philippe

    2014-03-01

    Physics-based numerical simulations of earthquakes and slow slip, coupled with field observations and laboratory experiments, can, in principle, be used to determine fault properties and potential fault behaviors. Because of the computational cost of simulating inertial wave-mediated effects, their representation is often simplified. The quasi-dynamic (QD) approach approximately accounts for inertial effects through a radiation damping term. We compare QD and fully dynamic (FD) simulations by exploring the long-term behavior of rate-and-state fault models with and without additional weakening during seismic slip. The models incorporate a velocity-strengthening (VS) patch in a velocity-weakening (VW) zone, to consider rupture interaction with a slip-inhibiting heterogeneity. Without additional weakening, the QD and FD approaches generate qualitatively similar slip patterns with quantitative differences, such as slower slip velocities and rupture speeds during earthquakes and more propensity for rupture arrest at the VS patch in the QD cases. Simulations with additional coseismic weakening produce qualitatively different patterns of earthquakes, with near-periodic pulse-like events in the FD simulations and much larger crack-like events accompanied by smaller events in the QD simulations. This is because the FD simulations with additional weakening allow earthquake rupture to propagate at a much lower level of prestress than the QD simulations. The resulting much larger ruptures in the QD simulations are more likely to propagate through the VS patch, unlike for the cases with no additional weakening. Overall, the QD approach should be used with caution, as the QD simulation results could drastically differ from the true response of the physical model considered.

  18. Physics-Based Simulations of Natural Hazards

    Science.gov (United States)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear, multi-scale nature of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales: from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs, allowing us to measure otherwise unobservable statistics. In this dissertation I discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. 
This section then focuses on a resulting collaboration using Virtual Quake for a detailed
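
    The frequency-magnitude relation mentioned above is the Gutenberg-Richter law, log10 N(>=M) = a - b*M, and its b-value can be estimated from a catalog with the Aki maximum-likelihood formula. A sketch on a synthetic catalog (the b = 1 and completeness magnitude below are illustrative choices):

```python
import numpy as np

def gr_counts(mags, m_bins):
    """Cumulative counts N(>=M) entering the Gutenberg-Richter relation
    log10 N = a - b*M."""
    return np.array([(mags >= m).sum() for m in m_bins])

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood b-value for continuous magnitudes
    above the completeness magnitude m_c (no binning correction)."""
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalog drawn from G-R with b = 1 by inverse-transform sampling:
rng = np.random.default_rng(42)
mags = 2.0 - np.log10(rng.random(100000)) / 1.0
```

    Because simulated catalogs from tools like Virtual Quake span thousands of years, this kind of estimate becomes far more stable than one made from a few hundred years of instrumental records.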

  19. Network Structure and Community Evolution on Twitter: Human Behavior Change in Response to the 2011 Japanese Earthquake and Tsunami

    Science.gov (United States)

    Lu, Xin; Brelsford, Christa

    2014-10-01

    To investigate the dynamics of social networks and the formation and evolution of online communities in response to extreme events, we collected three datasets from Twitter shortly before and after the 2011 earthquake and tsunami in Japan. We find that while almost all users increased their online activity after the earthquake, Japanese speakers, who are assumed to be more directly affected by the event, expanded the network of people they interact with to a much higher degree than English speakers or the global average. By investigating the evolution of communities, we find that the behavior of joining or quitting a community is far from random: users tend to remain in their current status, being less likely to join new communities from a solitary state or to shift from their current community to another. While non-Japanese speakers did not change their conversation topics significantly after the earthquake, nearly all Japanese users changed their conversations to earthquake-related content. This study builds a systematic framework for investigating human behavior under extreme events with online social network data, and our findings on the dynamics of networks and communities may provide useful insight into how patterns of social interaction are influenced by extreme events.

  19. Time-history simulation of civil architecture earthquake disaster relief based on the three-dimensional dynamic finite element method

    Directory of Open Access Journals (Sweden)

    Liu Bing

    2014-10-01

    Earthquake action is the main external factor influencing the long-term safe operation of civil construction, especially high-rise buildings. Applying the time-history method to simulate the earthquake response of the surrounding rock of a civil construction foundation is an effective approach to the seismic-resistance study of civil buildings. This paper therefore develops a three-dimensional dynamic finite element numerical simulation system for civil building earthquake disasters. The system adopts the explicit central difference method. The strengthening characteristics of materials under high strain rates and the damage characteristics of surrounding rock under cyclic loading are considered, and a dynamic constitutive model of rock mass suitable for aseismic analysis of civil buildings is put forward. Through a time-history simulation of the earthquake disaster at the Shenzhen Children's Palace, the reliability and practicality of the system program in the analysis of practical engineering problems are verified.
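
    The explicit central difference method named above advances M*a + C*v + K*u = f(t) with u_{n+1} = 2*u_n - u_{n-1} + dt^2 * M^{-1} * (f_n - C*v_n - K*u_n). A small dense-matrix sketch (a production FE code would instead use a lumped, diagonal mass matrix and element-level operations):

```python
import numpy as np

def central_difference(M, C, K, f, u0, v0, dt, n_steps):
    """Explicit central-difference integration of M*a + C*v + K*u = f(t).
    Conditionally stable: dt must stay below 2/omega_max of the model.
    f is a function of time returning the load vector."""
    Minv = np.linalg.inv(M)
    a0 = Minv @ (f(0.0) - C @ v0 - K @ u0)
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious step u_{-1}
    u = u0.copy()
    history = [u0.copy()]
    for n in range(n_steps):
        v = (u - u_prev) / dt                  # backward-difference velocity
        a = Minv @ (f(n * dt) - C @ v - K @ u)
        u_next = 2.0 * u - u_prev + dt**2 * a
        u_prev, u = u, u_next
        history.append(u.copy())
    return np.array(history)
```

    The explicit update avoids factorizing the stiffness matrix at each step, which is why the scheme suits the large, nonlinear rock-mass models described in the abstract, at the cost of a stability-limited time step.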

  1. Using an Earthquake Simulator to Model Tremor Along a Strike Slip Fault

    Science.gov (United States)

    Cochran, E. S.; Richards-Dinger, K. B.; Kroll, K.; Harrington, R. M.; Dieterich, J. H.

    2013-12-01

    We employ the earthquake simulator RSQSim to investigate the conditions under which tremor occurs in the transition zone of the San Andreas fault. RSQSim is a computationally efficient method that uses rate- and state-dependent friction to simulate a wide range of event sizes over long slip histories [Dieterich and Richards-Dinger, 2010; Richards-Dinger and Dieterich, 2012]. RSQSim has previously been used to investigate slow slip events in Cascadia [Colella et al., 2011; 2012]. Earthquake, tremor, slow slip, and creep occurrence are primarily controlled by the rate-and-state constants a and b and the slip speed. We will report preliminary results of using RSQSim to vary fault frictional properties in order to better understand rupture dynamics in the transition zone, using observed characteristics of tremor along the San Andreas fault. Recent studies of tremor along the San Andreas fault provide information on tremor characteristics including precise locations, peak amplitudes, durations of tremor episodes, and tremor migration. We use these observations to constrain numerical simulations that examine the slip conditions in the transition zone of the San Andreas fault. Here, we use RSQSim to conduct multi-event simulations of tremor for a strike-slip fault modeled on the Cholame section of the San Andreas fault. Tremor was first observed on the San Andreas fault near Cholame, California, near the southern edge of the 2004 Parkfield rupture [Nadeau and Dolenc, 2005]. Since then, tremor has been observed across a 150 km section of the San Andreas at depths between 16-28 km, with peak amplitudes that vary by a factor of 7 [Shelly and Hardebeck, 2010]. 
Tremor episodes, comprised of multiple low frequency earthquakes (LFEs), tend to be relatively short, lasting tens of seconds to as long as 1-2 hours [Horstmann et al., in review, 2013]; tremor occurs regularly, with some tremor observed almost daily [Shelly and Hardebeck, 2010; Horstmann

  2. Finite element simulation of earthquake cycle dynamics for continental listric fault system

    Science.gov (United States)

    Wei, T.; Shen, Z. K.

    2017-12-01

    We simulate stress/strain evolution through earthquake cycles for a continental listric fault system using the finite element method. A 2-D lithosphere model is developed, with the upper crust composed of plasto-elastic materials and the lower crust/upper mantle composed of visco-elastic materials respectively. The media is sliced by a listric fault, which is soled into the visco-elastic lower crust at its downdip end. The system is driven laterally by constant tectonic loading. Slip on fault is controlled by rate-state friction. We start with a simple static/dynamic friction law, and drive the system through multiple earthquake cycles. Our preliminary results show that: (a) periodicity of the earthquake cycles is strongly modulated by the static/dynamic friction, with longer period correlated with higher static friction and lower dynamic friction; (b) periodicity of earthquake is a function of fault depth, with less frequent events of greater magnitudes occurring at shallower depth; and (c) rupture on fault cannot release all the tectonic stress in the system, residual stress is accumulated in the hanging wall block at shallow depth close to the fault, which has to be released either by conjugate faulting or inelastic folding. We are in a process of exploring different rheologic structure and friction laws and examining their effects on earthquake behavior and deformation pattern. The results will be applied to specific earthquakes and fault zones such as the 2008 great Wenchuan earthquake on the Longmen Shan fault system.

  3. Natural gas network resiliency to a "shakeout scenario" earthquake.

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, James F.; Corbet, Thomas Frank,; Brooks, Robert E.

    2013-06-01

    A natural gas network model was used to assess the likely impact of a scenario San Andreas Fault earthquake on the natural gas network. Two disruption scenarios were examined. The more extensive damage scenario assumes the disruption of all three major corridors bringing gas into southern California. If withdrawals from the Aliso Canyon storage facility are limited to keep the amount of stored gas within historical levels, the disruption reduces Los Angeles Basin gas supplies by 50%. If Aliso Canyon withdrawals are only constrained by the physical capacity of the storage system to withdraw gas, the shortfall is reduced to 25%. This result suggests that it is important for stakeholders to put agreements in place facilitating the withdrawal of Aliso Canyon gas in the event of an emergency.

  4. Evaluation of Seismic Rupture Models for the 2011 Tohoku-Oki Earthquake Using Tsunami Simulation

    Directory of Open Access Journals (Sweden)

    Ming-Da Chiou

    2013-01-01

    Full Text Available Developing a realistic, three-dimensional rupture model of the large offshore earthquake is difficult to accomplish directly through band-limited ground-motion observations. A potential indirect method is using a tsunami simulation to verify the rupture model in reverse because the initial conditions of the associated tsunamis are caused by a coseismic seafloor displacement correlating to the rupture pattern along the main faulting. In this study, five well-developed rupture models for the 2011 Tohoku-Oki earthquake were adopted to evaluate differences in simulated tsunamis and various rupture asperities. The leading wave of the simulated tsunamis triggered by the seafloor displacement in Yamazaki et al. (2011 model resulted in the smallest root-mean-squared difference (~0.082 m on average from the records of the eight DART (Deep-ocean Assessment and Reporting of Tsunamis stations. This indicates that the main seismic rupture during the 2011 Tohoku earthquake should occur in a large shallow slip in a narrow range adjacent to the Japan trench. This study also quantified the influences of ocean stratification and tides which are normally overlooked in tsunami simulations. The discrepancy between the simulations with and without stratification was less than 5% of the first peak wave height at the eight DART stations. The simulations, run with and without the presence of tides, resulted in a ~1% discrepancy in the height of the leading wave. Because simulations accounting for tides and stratification are time-consuming and their influences are negligible, particularly in the first tsunami wave, the two factors can be ignored in a tsunami prediction for practical purposes.

  5. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    Energy Technology Data Exchange (ETDEWEB)

    Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com [Master Program of Geophysical Engineering, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia); Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id [Global Geophysical Research Group, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia)

    2015-04-24

    Observation of earthquakes is routinely used widely in tectonic activity observation, and also in local scale such as volcano tectonic and geothermal activity observation. It is necessary for determining the location of precise hypocenter which the process involves finding a hypocenter location that has minimum error between the observed and the calculated travel times. When solving this nonlinear inverse problem, simulated annealing inversion method can be applied to such global optimization problems, which the convergence of its solution is independent of the initial model. In this study, we developed own program codeby applying adaptive simulated annealing inversion in Matlab environment. We applied this method to determine earthquake hypocenter using several data cases which are regional tectonic, volcano tectonic, and geothermal field. The travel times were calculated using ray tracing shooting method. We then compared its results with the results using Geiger’s method to analyze its reliability. Our results show hypocenter location has smaller RMS error compared to the Geiger’s result that can be statistically associated with better solution. The hypocenter of earthquakes also well correlated with geological structure in the study area. Werecommend using adaptive simulated annealing inversion to relocate hypocenter location in purpose to get precise and accurate earthquake location.

  6. Far-field tsunami of 2017 Mw 8.1 Tehuantepec, Mexico earthquake recorded by Chilean tide gauge network: Implications for tsunami warning systems

    Science.gov (United States)

    González-Carrasco, J. F.; Benavente, R. F.; Zelaya, C.; Núñez, C.; Gonzalez, G.

    2017-12-01

    The 2017 Mw 8.1, Tehuantepec earthquake generated a moderated tsunami, which was registered in near-field tide gauges network activating a tsunami threat state for Mexico issued by PTWC. In the case of Chile, the forecast of tsunami waves indicate amplitudes less than 0.3 meters above the tide level, advising an informative state of threat, without activation of evacuation procedures. Nevertheless, during sea level monitoring of network we detect wave amplitudes (> 0.3 m) indicating a possible change of threat state. Finally, NTWS maintains informative level of threat based on mathematical filtering analysis of sea level records. After 2010 Mw 8.8, Maule earthquake, the Chilean National Tsunami Warning System (NTWS) has increased its observational capabilities to improve early response. Most important operational efforts have focused on strengthening tide gauge network for national area of responsibility. Furthermore, technological initiatives as Integrated Tsunami Prediction and Warning System (SIPAT) has segmented the area of responsibility in blocks to focus early warning and evacuation procedures on most affected coastal areas, while maintaining an informative state for distant areas of near-field earthquake. In the case of far-field events, NTWS follow the recommendations proposed by Pacific Tsunami Warning Center (PTWC), including a comprehensive monitoring of sea level records, such as tide gauges and DART (Deep-Ocean Assessment and Reporting of Tsunami) buoys, to evaluate the state of tsunami threat in the area of responsibility. The main objective of this work is to analyze the first-order physical processes involved in the far-field propagation and coastal impact of tsunami, including implications for decision-making of NTWS. To explore our main question, we construct a finite-fault model of the 2017, Mw 8.1 Tehuantepec earthquake. We employ the rupture model to simulate a transoceanic tsunami modeled by Neowave2D. We generate synthetic time series at

  7. CAISSON: Interconnect Network Simulator

    Science.gov (United States)

    Springer, Paul L.

    2006-01-01

    Cray response to HPCS initiative. Model future petaflop computer interconnect. Parallel discrete event simulation techniques for large scale network simulation. Built on WarpIV engine. Run on laptop and Altix 3000. Can be sized up to 1000 simulated nodes per host node. Good parallel scaling characteristics. Flexible: multiple injectors, arbitration strategies, queue iterators, network topologies.

  8. On the agreement between small-world-like OFC model and real earthquakes

    International Nuclear Information System (INIS)

    Ferreira, Douglas S.R.; Papa, Andrés R.R.; Menezes, Ronaldo

    2015-01-01

    In this article we implemented simulations of the OFC model for earthquakes for two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we also have characterized the influence that the probability p produces in certain characteristics of the lattice and in the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, that allowed us to analyze the distribution of connectivities of each one. In our results distributions arise belonging to a family of non-traditional distributions functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth is in a critical self-organized state and furthermore point towards temporal and spatial correlations between earthquakes in different places. - Highlights: • OFC model simulations for regular and small-world topologies. • For small-world topology distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a critical self-organized state for the Earth's crust. • Point towards temporal and spatial correlations between far earthquakes in far places

  9. On the agreement between small-world-like OFC model and real earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Douglas S.R., E-mail: douglas.ferreira@ifrj.edu.br [Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro, Paracambi, RJ (Brazil); Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Papa, Andrés R.R., E-mail: papa@on.br [Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Instituto de Física, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, RJ (Brazil); Menezes, Ronaldo, E-mail: rmenezes@cs.fit.edu [BioComplex Laboratory, Computer Sciences, Florida Institute of Technology, Melbourne (United States)

    2015-03-20

    In this article we implemented simulations of the OFC model for earthquakes for two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we also have characterized the influence that the probability p produces in certain characteristics of the lattice and in the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, that allowed us to analyze the distribution of connectivities of each one. In our results distributions arise belonging to a family of non-traditional distributions functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth is in a critical self-organized state and furthermore point towards temporal and spatial correlations between earthquakes in different places. - Highlights: • OFC model simulations for regular and small-world topologies. • For small-world topology distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a critical self-organized state for the Earth's crust. • Point towards temporal and spatial correlations between far earthquakes in far places.

  10. Message network simulation

    OpenAIRE

    Shih, Kuo-Tung

    1990-01-01

    Approved for public release, distribution is unlimited This thesis presents a computer simulation of a multinode data communication network using a virtual network model to determine the effects of various system parameters on overall network performance. Lieutenant Commander, Republic of China (Taiwan) Navy

  11. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    Science.gov (United States)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring

  12. Earthquake source parameters along the Hellenic subduction zone and numerical simulations of historical tsunamis in the Eastern Mediterranean

    Science.gov (United States)

    Yolsal-Çevikbilen, Seda; Taymaz, Tuncay

    2012-04-01

    We studied source mechanism parameters and slip distributions of earthquakes with Mw ≥ 5.0 occurred during 2000-2008 along the Hellenic subduction zone by using teleseismic P- and SH-waveform inversion methods. In addition, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of tsunami waves, propagations and effects of coastal topography. The analogy of current plate boundaries, earthquake source mechanisms, various earthquake moment tensor catalogues and several empirical self-similarity equations, valid for global or local scales, were used to assume conceivable source parameters which constitute the initial and boundary conditions in simulations. Teleseismic inversion results showed that earthquakes along the Hellenic subduction zone can be classified into three major categories: [1] focal mechanisms of the earthquakes exhibiting E-W extension within the overriding Aegean plate; [2] earthquakes related to the African-Aegean convergence; and [3] focal mechanisms of earthquakes lying within the subducting African plate. Normal faulting mechanisms with left-lateral strike slip components were observed at the eastern part of the Hellenic subduction zone, and we suggest that they were probably concerned with the overriding Aegean plate. However, earthquakes involved in the convergence between the Aegean and the Eastern Mediterranean lithospheres indicated thrust faulting mechanisms with strike slip components, and they had shallow focal depths (h < 45 km). Deeper earthquakes mainly occurred in the subducting African plate, and they presented dominantly strike slip faulting mechanisms. Slip distributions on fault planes showed both complex and simple rupture propagations with respect to the variation of source mechanism and faulting geometry. We calculated low stress drop

  13. DEVELOPMENT OF USER-FRIENDLY SIMULATION SYSTEM OF EARTHQUAKE INDUCED URBAN SPREADING FIRE

    Science.gov (United States)

    Tsujihara, Osamu; Gawa, Hidemi; Hayashi, Hirofumi

    In the simulation of earthquake induced urban spreading fire, the produce of the analytical model of the target area is required as well as the analysis of spreading fire and the presentati on of the results. In order to promote the use of the simulation, it is important that the simulation system is non-intrusive and the analysis results can be demonstrated by the realistic presentation. In this study, the simulation system is developed based on the Petri-net algorithm, in which the easy operation can be realized in the modeling of the target area of the simulation through the presentation of analytical results by realistic 3-D animation.

  14. Websim3d: A Web-based System for Generation, Storage and Dissemination of Earthquake Ground Motion Simulations.

    Science.gov (United States)

    Olsen, K. B.

    2003-12-01

    Synthetic time histories from large-scale 3D ground motion simulations generally constitute large 'data' sets which typically require 100's of Mbytes or Gbytes of storage capacity. For the same reason, getting access to a researchers simulation output, for example for an earthquake engineer to perform site analysis, or a seismologist to perform seismic hazard analysis, can be a tedious procedure. To circumvent this problem we have developed a web-based ``community model'' (websim3D) for the generation, storage, and dissemination of ground motion simulation results. Websim3D allows user-friendly and fast access to view and download such simulation results for an earthquake-prone area. The user selects an earthquake scenario from a map of the region, which brings up a map of the area where simulation data is available. Now, by clicking on an arbitrary site location, synthetic seismograms and/or soil parameters for the site can be displayed at fixed or variable scaling and/or downloaded. Websim3D relies on PHP scripts for the dynamic plots of synthetic seismograms and soil profiles. Although not limited to a specific area, we illustrate the community model for simulation results from the Los Angeles basin, Wellington (New Zealand), and Mexico.

  15. The Central and Eastern European Earthquake Research Network - CE3RN

    Science.gov (United States)

    Bragato, Pier Luigi; Costa, Giovanni; Gallo, Antonella; Gosar, Andrej; Horn, Nikolaus; Lenhardt, Wolfgang; Mucciarelli, Marco; Pesaresi, Damiano; Steiner, Rudolf; Suhadolc, Peter; Tiberi, Lara; Živčić, Mladen; Zoppé, Giuliana

    2014-05-01

    The region of the Central and Eastern Europe is an area characterised by a relatively high seismicity. The active seismogenic structures and the related potentially destructive events are located in the proximity of the political boundaries between several countries existing in the area. An example is the seismic region between the NE Italy (FVG, Trentino-Alto Adige and Veneto), Austria (Tyrol, Carinthia) and Slovenia. So when a destructive earthquake occurs in the area, all the three countries are involved. In the year 2001 the Agencija Republike Slovenije za Okolje (ARSO) in Slovenia, the Department of Mathematics and Geoscience of the University of Trieste (DMG), the OGS (Istituto Nazionale di Oceanografia e di Geofisica Sperimentale) in Italy and the Zentralanstalt für Meteorologie und Geodynamik (ZAMG) in Austria signed an agreement for the real-time seismological data exchange in the Southeastern Alps region. Soon after the Interreg IIIa Italia-Austria projects "Trans-National Seismological Networks in the South-Eastern Alps" and "FASTLINK" started. The main goal of these projects was the creation of a transfrontier network for the common seismic monitoring of the region for scientific and civil defense purposes. During these years the high quality data recorded by the transfrontier network has been used, by the involved institutions, for their scientific research, for institutional activities and for the civil defense services. Several common international projects have been realized with success. The instrumentation has been continuously upgraded, the installations quality improved as well as the data transmission efficiency. In the 2013 ARSO, DMG, OGS and ZAMG decided to name the cooperative network "Central and Eastern European Earthquake Research Network - CE3RN". The national/regional seismic networks actually involved in the CE3RN network are: • Austrian national BB network (ZAMG - OE) • Friuli Veneto SP network (OGS - FV) • Friuli VG

  16. Wireless network simulation - Your window on future network performance

    NARCIS (Netherlands)

    Fledderus, E.

    2005-01-01

    The paper describes three relevant perspectives on current wireless simulation practices. In order to obtain the key challenges for future network simulations, the characteristics of "beyond 3G" networks are described, including their impact on simulation.

  17. Rapid earthquake characterization using MEMS accelerometers and volunteer hosts following the M 7.2 Darfield, New Zealand, Earthquake

    Science.gov (United States)

    Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.

    2014-01-01

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.

  18. Simulating synchronization in neuronal networks

    Science.gov (United States)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.

  19. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  20. S-net : Construction of large scale seafloor observatory network for tsunamis and earthquakes along the Japan Trench

    Science.gov (United States)

    Mochizuki, M.; Uehira, K.; Kanazawa, T.; Shiomi, K.; Kunugi, T.; Aoi, S.; Matsumoto, T.; Sekiguchi, S.; Yamamoto, N.; Takahashi, N.; Nakamura, T.; Shinohara, M.; Yamada, T.

    2017-12-01

    NIED has launched the project of constructing a seafloor observatory network for tsunamis and earthquakes after the occurrence of the 2011 Tohoku Earthquake to enhance reliability of early warnings of tsunamis and earthquakes. The observatory network was named "S-net". The S-net project has been financially supported by MEXT.The S-net consists of 150 seafloor observatories which are connected in line with submarine optical cables. The total length of submarine optical cable is about 5,500 km. The S-net covers the focal region of the 2011 Tohoku Earthquake and its vicinity regions. Each observatory equips two units of a high sensitive pressure gauges as a tsunami meter and four sets of three-component seismometers. The S-net is composed of six segment networks. Five of six segment networks had been already installed. Installation of the last segment network covering the outer rise area have been finally finished by the end of FY2016. The outer rise segment has special features like no other five segments of the S-net. Those features are deep water and long distance. Most of 25 observatories on the outer rise segment are located at the depth of deeper than 6,000m WD. Especially, three observatories are set on the seafloor of deeper than about 7.000m WD, and then the pressure gauges capable of being used even at 8,000m WD are equipped on those three observatories. Total length of the submarine cables of the outer rise segment is about two times longer than those of the other segments. The longer the cable system is, the higher voltage supply is needed, and thus the observatories on the outer rise segment have high withstanding voltage characteristics. 
We employ a dispersion management line of a low loss formed by combining a plurality of optical fibers for the outer rise segment cable, in order to achieve long-distance, high-speed and large-capacity data transmission Installation of the outer rise segment was finished and then full-scale operation of S-net has started

  1. The 2008 West Bohemia earthquake swarm in the light of the WEBNET network

    Czech Academy of Sciences Publication Activity Database

    Fischer, T.; Horálek, Josef; Michálek, Jan; Boušková, Alena

    2010-01-01

    Roč. 14, č. 4 (2010), s. 665-682 ISSN 1383-4649 Grant - others:GA MŠk(CZ) specifický-výzkum; Norway Grants(NO) A/CZ0046/2/0015 Institutional research plan: CEZ:AV0Z30120515 Keywords : earthquake swarm * seismic network * seismicity Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 1.274, year: 2010

  2. Visualization of strong around motion calculated from the numerical simulation of Hyogo-ken Nanbu earthquake; Suchi simulation de miru Hyogoken nanbu jishin no kyoshindo

    Energy Technology Data Exchange (ETDEWEB)

    Furumura, T [Hokkaido Univ. of Education, Sapporo (Japan); Koketsu, K [The University of Tokyo, Tokyo (Japan). Earthquake Research Institute

    1996-10-01

    Hyogo-ken Nanbu earthquake with a focus in the Akashi straits has given huge earthquake damages in and around Awaji Island and Kobe City in 1995. It is clear that the basement structure, which is steeply deepened at Kobe City from Rokko Mountains towards the coast, and the focus under this related closely to the local generation of strong ground motion. Generation process of the strong ground motion was discussed using 2D and 3D numerical simulation methods. The 3D pseudospectral method was used for the calculation. Space of 51.2km{times}25.6km{times}25.6km was selected for the calculation. This space was discretized with the lattice interval of 200m. Consequently, it was found that the basement structure with a steeply deepened basement, soft and weak geological structure thickly deposited on the basement, and earthquake faults running under the boundary of base rock and sediments related greatly to the generation of strong ground motion. Numerical simulation can be expected to predict the strong ground motion by shallow earthquakes. 9 refs., 7 figs.

  3. Three dimensional viscoelastic simulation on dynamic evolution of stress field in North China induced by the 1966 Xingtai earthquake

    Science.gov (United States)

    Chen, Lian-Wang; Lu, Yuan-Zhong; Liu, Jie; Guo, Ruo-Mei

    2001-09-01

    Using three dimensional (3D) viscoelastic finite element method (FEM) we study the dynamic evolution pattern of the coseismic change of Coulomb failure stress and postseismic change, on time scale of hundreds years, of rheological effect induced by the M S=7.2 Xingtai earthquake on March 22, 1966. Then, we simulate the coseismic disturbance in stress field in North China and dynamic change rate on one-year scale caused by the Xingtai earthquake and Tangshan earthquake during 15 years from 1966 to 1980. Finally, we discuss the triggering of a strong earthquake to another future strong earthquake.

  4. The results of the pilot project in Georgia to install a network of electromagnetic radiation before the earthquake

    Science.gov (United States)

    Machavariani, Kakhaber; Khazaradze, Giorgi; Turazashvili, Ioseb; Kachakhidze, Nino; Kachakhidze, Manana; Gogoberidze, Vitali

    2016-04-01

    The world's scientific literature recently published many very important and interesting works of VLF / LF electromagnetic emissions, which is observed in the process of earthquake preparation. This works reliable earthquake prediction in terms of trends. Because, Georgia is located in Trans Asian earthquake zone, VLF / LF electromagnetic emissions network are essential. In this regard, it was possible to take first steps. It is true that our university has Shota Rustaveli National Science Foundation № DI / 21 / 9-140 / 13 grant, which included the installation of a receiver in Georgia, but failed due to lack of funds to buy this device. However, European friends helped us (Prof. Dr. PF Biagi and Prof. Dr. Aydın BÜYÜKSARAÇ) and made possible the installation of a receiver. Turkish scientists expedition in Georgia was organized in August 2015. They brought with them VLF / LF electromagnetic emissions receiver and together with Georgian scientists install near Tbilisi. The station was named GEO-TUR. It should be noted that Georgia was involved in the work of the European network. It is possible to completely control the earthquake in Georgia in terms of electromagnetic radiation. This enables scientists to obtain the relevant information not only on the territory of our country, but also on seismically active European countries as well. In order to maintain and develop our country in this new direction, it is necessary to keep independent group of scientists who will learn electromagnetic radiation ahead of an earthquake in Georgia. At this stage, we need to remedy this shortcoming, it is necessary and appropriate specialists to Georgia to engage in a joint international research. The work is carried out in the frame of grant (DI/21/9-140/13 „Pilot project of before earthquake detected Very Low Frequency/Low Frequency electromagnetic emission network installation in Georgia") by financial support of Shota Rustaveli National Science Foundation.

  5. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystemTM (SPWTM), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  6. Insuring against earthquakes: simulating the cost-effectiveness of disaster preparedness.

    Science.gov (United States)

    de Hoop, Thomas; Ruben, Ruerd

    2010-04-01

    Ex-ante measures to improve risk preparedness for natural disasters are generally considered to be more effective than ex-post measures. Nevertheless, most resources are allocated after an event in geographical areas that are vulnerable to natural disasters. This paper analyses the cost-effectiveness of ex-ante adaptation measures in the wake of earthquakes and provides an assessment of the future role of private and public agencies in disaster risk management. The study uses a simulation model approach to evaluate consumption losses after earthquakes under different scenarios of intervention. Particular attention is given to the role of activity diversification measures in enhancing disaster preparedness and the contributions of (targeted) microcredit and education programmes for reconstruction following a disaster. Whereas the former measures are far more cost-effective, missing markets and perverse incentives tend to make ex-post measures a preferred option, thus occasioning underinvestment in ex-ante adaptation initiatives.

  7. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    Science.gov (United States)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

    The almost-real time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infra-structures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships.

  8. The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network

    Science.gov (United States)

    Chen, M.; Wang, X.; Dou, A.; Wu, X.

    2018-04-01

    The seismic damage information on buildings extracted from remote sensing (RS) imagery is valuable for supporting relief efforts and effectively reducing the losses caused by an earthquake. Both traditional pixel-based and object-oriented methods have shortcomings in extracting such information. Pixel-based methods cannot make full use of the contextual information of objects, while object-oriented methods face the problems that the segmentation of the image is rarely ideal and the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with image segmentation to extract building damage information from remote sensing imagery. The key idea of this method comprises two steps: first, use the CNN to predict the damage probability of each pixel, and then integrate the probabilities within each segmentation spot. The method is tested by extracting the collapsed and uncollapsed buildings from an aerial image acquired over Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province earthquake. The results demonstrate the effectiveness of the proposed method in extracting building damage information after an earthquake.
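    The two-step idea above can be sketched as follows; the per-pixel probabilities and segment labels below are synthetic stand-ins for real CNN output and image segmentation (a hedged illustration, not the paper's implementation):

```python
import numpy as np

def label_segments(prob_map, segments, threshold=0.5):
    """Label each segmentation spot by the mean per-pixel damage probability."""
    labels = {}
    for seg_id in np.unique(segments):
        mean_p = prob_map[segments == seg_id].mean()
        labels[seg_id] = "collapsed" if mean_p >= threshold else "intact"
    return labels

# 4x4 toy probability map: left half high collapse probability, right half low.
prob_map = np.array([[0.90, 0.80, 0.10, 0.20],
                     [0.85, 0.90, 0.15, 0.10],
                     [0.80, 0.95, 0.20, 0.05],
                     [0.90, 0.85, 0.10, 0.10]])
segments = np.array([[0, 0, 1, 1]] * 4)   # two segmentation spots

print(label_segments(prob_map, segments))  # → {0: 'collapsed', 1: 'intact'}
```

Integrating probabilities per segment smooths out isolated pixel misclassifications, which is the motivation the abstract gives for combining the two approaches.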

  9. Estimation of slip scenarios of mega-thrust earthquakes and strong motion simulations for Central Andes, Peru

    Science.gov (United States)

    Pulido, N.; Tavera, H.; Aguilar, Z.; Chlieh, M.; Calderon, D.; Sekiguchi, T.; Nakai, S.; Yamazaki, F.

    2012-12-01

    We have developed a methodology for the estimation of slip scenarios for megathrust earthquakes based on a model of interseismic coupling (ISC) distribution in subduction margins obtained from geodetic data, as well as information on the recurrence of historical earthquakes. This geodetic slip model (GSM) delineates the long-wavelength asperities within the megathrust. For the simulation of strong ground motion it becomes necessary to introduce short-wavelength heterogeneities into the source slip in order to efficiently simulate high-frequency ground motions. To achieve this purpose we elaborate "broadband" source models constructed by combining the GSM with several short-wavelength slip distributions obtained from a von Karman PSD function with random phases. Our application of the method to the Central Andes in Peru shows that this region presently has the potential to generate an earthquake with moment magnitude 8.9, with a peak slip of 17 m and a source area of approximately 500 km along strike and 165 km along dip. For the strong motion simulations we constructed 12 broadband slip models and considered 9 possible hypocenter locations for each model. We performed strong motion simulations for the whole Central Andes region (Peru), spanning the area from the Nazca Ridge (16°S) to the Mendaña fracture zone (9°S). For this purpose we use the hybrid strong motion simulation method of Pulido et al. (2004), improved to handle a general slip distribution. Our simulated PGA and PGV distributions indicate that a region of at least 500 km along the coast of the Central Andes is subjected to an MMI intensity of approximately 8, for the slip model that yielded the largest ground motions among the 12 slip models considered, averaged over all assumed hypocenter locations. This result is in agreement with the macroseismic intensity distribution estimated for the great 1746 earthquake (M~9) in the Central Andes (Dorbath et al. 1990). Our results indicate that the simulated PGA and PGV for
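    The short-wavelength randomization step can be illustrated in one dimension: shape a random-phase spectrum with a von Karman-like amplitude decay and invert it to obtain a heterogeneous slip profile. The exponent, correlation length, and normalization below are illustrative, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)

def von_karman_slip(n=256, corr_len=20.0, hurst=0.75):
    """1-D random slip profile with an approximately von Karman power spectrum."""
    k = np.fft.rfftfreq(n)                               # spatial wavenumbers
    # amplitude = sqrt(PSD), with PSD ~ (1 + (k*a)^2)^-(H + 0.5); illustrative
    amp = (1.0 + (k * corr_len) ** 2) ** (-(hurst + 0.5) / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, k.size)       # random phases
    phases[0] = 0.0                                      # keep the mean term real
    slip = np.fft.irfft(amp * np.exp(1j * phases), n)
    return slip - slip.min()                             # shift to non-negative slip

slip = von_karman_slip()
print(slip.shape, float(slip.max()) > 0.0)
```

In a broadband source model this short-wavelength field would be superimposed on the smooth geodetic slip model rather than used on its own.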

  10. Strong Motion Network of Medellín and Aburrá Valley: technical advances, seismicity records and micro-earthquake monitoring

    Science.gov (United States)

    Posada, G.; Trujillo, J. C., Sr.; Hoyos, C.; Monsalve, G.

    2017-12-01

    The tectonic setting of Colombia is determined by the interaction of the Nazca, Caribbean and South American plates, together with the collision of the Panama-Choco block, which makes it a seismically active region. Regional seismic monitoring is carried out by the National Seismological Network of Colombia and the Accelerometer National Network of Colombia. Both networks calculate locations, magnitudes, depths, accelerations and other seismic parameters. The Medellín - Aburrá Valley is located in the northern segment of the Central Cordillera of Colombia and, according to the Colombian technical seismic norm (NSR-10), is a region of intermediate hazard because of its proximity to the seismic sources of the Valley. Seismic monitoring in the Aburrá Valley began in 1996 with an accelerometer network that consisted of 38 instruments. Currently the network consists of 26 stations and is run by the Early Warning System of Medellín and the Aburrá Valley (SIATA). Technical advances have allowed real-time communication since a year ago, currently with 10 stations; post-earthquake data are processed in near-real time, yielding quick results in terms of location, acceleration, response spectra and Fourier analysis; this information is displayed on the SIATA web site. The strong motion database is composed of 280 earthquakes; this information is the basis for the estimation of seismic hazard and risk for the region. A basic statistical analysis of the main information was carried out, including the total recorded events per station, natural frequency, maximum accelerations, depths and magnitudes, which allowed us to identify the main seismic sources and some seismic site parameters. With a view to more complete seismic monitoring, and in order to identify seismic sources beneath the Valley, we are in the process of installing 10 low-cost shake seismometers for micro-earthquake monitoring. There is no historical record of earthquakes with a magnitude

  11. NEAR REAL-TIME DETERMINATION OF EARTHQUAKE SOURCE PARAMETERS FOR TSUNAMI EARLY WARNING FROM GEODETIC OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    S. Manneela

    2016-06-01

    Full Text Available Characterizing the tsunami source immediately after an earthquake is the most critical component of tsunami early warning, as not every earthquake generates a tsunami. After a major undersea earthquake, it is very important to determine whether or not it has actually triggered the deadly wave. Near real-time observations from near-field networks, such as strong motion and Global Positioning System (GPS) networks, allow rapid determination of the fault geometry. Here we present the complete processing chain of the Indian Tsunami Early Warning System (ITEWS), starting from the acquisition of raw geodetic data, through processing and inversion, to simulating the situation as it would unfold at the warning center during any major earthquake. We determine the earthquake moment magnitude and generate the centroid moment tensor solution using a novel approach; these are the key elements for tsunami early warning. Although the well-established seismic monitoring network, numerical modeling and dissemination system are currently capable of providing tsunami warnings to most of the countries in and around the Indian Ocean, the study highlights the critical role of geodetic observations in the determination of the tsunami source for high-quality forecasting.
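    The moment magnitude computed from a geodetically derived fault model follows the standard relation Mw = (2/3)(log10 M0 - 9.1) with scalar moment M0 = μAD; the fault dimensions, slip, and rigidity below are illustrative numbers, not values from ITEWS:

```python
import math

def moment_magnitude(length_m, width_m, slip_m, rigidity=3.0e10):
    """Mw from the scalar moment M0 = mu * A * D (M0 in N*m)."""
    m0 = rigidity * length_m * width_m * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A 200 km x 100 km rupture with 5 m average slip (a great-earthquake scale):
mw = moment_magnitude(200e3, 100e3, 5.0)
print(round(mw, 2))  # → 8.25
```

Because the geodetic slip estimate does not saturate the way band-limited seismic magnitudes can, this simple relation remains usable for the largest events.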

  12. Neural network based tomographic approach to detect earthquake-related ionospheric anomalies

    Directory of Open Access Journals (Sweden)

    S. Hirooka

    2011-08-01

    Full Text Available A tomographic approach is used to investigate the fine structure of electron density in the ionosphere. In the present paper, the Residual Minimization Training Neural Network (RMTNN method is selected as the ionospheric tomography with which to investigate the detailed structure that may be associated with earthquakes. The 2007 Southern Sumatra earthquake (M = 8.5 was selected because significant decreases in the Total Electron Content (TEC have been confirmed by GPS and global ionosphere map (GIM analyses. The results of the RMTNN approach are consistent with those of TEC approaches. With respect to the analyzed earthquake, we observed significant decreases at heights of 250–400 km, especially at 330 km. However, the height that yields the maximum electron density does not change. In the obtained structures, the regions of decrease are located on the southwest and southeast sides of the Integrated Electron Content (IEC (altitudes in the range of 400–550 km and on the southern side of the IEC (altitudes in the range of 250–400 km. The global tendency is that the decreased region expands to the east with increasing altitude and concentrates in the Southern hemisphere over the epicenter. These results indicate that the RMTNN method is applicable to the estimation of ionospheric electron density.

  13. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    Science.gov (United States)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

    Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems in which the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (the Australian Computational Earth Systems Simulator) at the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: first, the quasi-static loading phase which gradually increases stress in the system (~100 years), and second, the dynamic rupture process which rapidly redistributes stress in the system (~100 s). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
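    escript itself is not sketched here; instead, a minimal 1-D explicit finite-difference solver of the scalar wave equation illustrates the kind of PDE time stepping the abstract describes (a toy stand-in for the Finite Element treatment):

```python
import numpy as np

# 1-D scalar wave equation u_tt = c^2 u_xx, explicit leapfrog time stepping,
# fixed (zero-displacement) ends. A Gaussian pulse splits into two traveling
# half-amplitude pulses, a basic sanity check of the scheme.

c, dx, dt, n = 1.0, 0.01, 0.005, 201         # CFL number c*dt/dx = 0.5
x = np.linspace(0.0, 2.0, n)
u_prev = np.exp(-(((x - 1.0) / 0.05) ** 2))  # initial Gaussian displacement
u = u_prev.copy()                            # zero initial velocity (first order)

for _ in range(100):                         # advance to t = 0.5
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(round(float(u.max()), 2))              # roughly half the initial amplitude
```

A production fault-system model would of course need unstructured meshes, elasticity rather than a scalar field, and a frictional fault condition, which is what motivates a Finite Element package such as escript.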

  14. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2012-01-01

    "Introduction to Network Simulator NS2" is a primer providing materials for NS2 beginners, whether students, professors, or researchers for understanding the architecture of Network Simulator 2 (NS2) and for incorporating simulation modules into NS2. The authors discuss the simulation architecture and the key components of NS2 including simulation-related objects, network objects, packet-related objects, and helper objects. The NS2 modules included within are nodes, links, SimpleLink objects, packets, agents, and applications. Further, the book covers three helper modules: timers, ra

  15. Numerical Simulation of Stress evolution and earthquake sequence of the Tibetan Plateau

    Science.gov (United States)

    Dong, Peiyu; Hu, Caibo; Shi, Yaolin

    2015-04-01

    The India-Eurasia collision produces N-S compression and results in large thrust faults at the southern edge of the Tibetan Plateau, while differential eastward flow of the lower crust of the plateau leads to large strike-slip and normal faults within the plateau. From 1904 to 2014, more than 30 earthquakes of Mw > 6.5 occurred sequentially in this distinctive tectonic environment. How did the stresses evolve during the last 110 years, and how did the earthquakes interact with each other? Can this knowledge help us forecast future seismic hazards? In this study, we simulated the evolution of the stress field and the earthquake sequence in the Tibetan Plateau over the last 110 years with a 2-D finite element model. Given an initial state of stress, the boundary condition was constrained by present-day GPS observations, assumed to act at a constant rate during the 110 years. We calculated the stress evolution year by year, with an earthquake occurring whenever the stress exceeded the crustal strength. The stress change due to each large earthquake in the sequence was calculated and contributed to the stress evolution. A key issue is the choice of the initial stress state of the model, which is actually unknown. Usually, in studies of earthquake triggering, the initial stress is assumed to be zero and only the stress changes caused by large earthquakes - the Coulomb failure stress changes (ΔCFS) - are calculated. To some extent this simplified method is a powerful tool, because it can reveal which fault, or which part of a fault, becomes relatively more risky or safer. Nonetheless, it does not utilize all the information available to us. The earthquake sequence reveals, though far from completely, some information about the stress state in the region. If the entire region is close to a self-organized critical or subcritical state, the earthquake stress drop provides an estimate of the lower limit of the initial state. For locations where no earthquakes occurred during the period, the initial stress has to be
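    The year-by-year scheme can be reduced to a one-element sketch: constant tectonic loading raises stress until it exceeds the strength, an earthquake occurs, and the stress drops. All rates and strengths below are arbitrary illustrative values, not the model's calibrated parameters:

```python
# One fault element under constant loading; an event fires when stress
# reaches the strength and relieves a fixed stress drop. Units arbitrary.

def evolve(n_years, loading_rate, strength, stress_drop, s0=0.0):
    stress, quake_years = s0, []
    for year in range(n_years):
        stress += loading_rate               # tectonic loading, one year
        if stress >= strength:               # failure criterion
            quake_years.append(year)
            stress -= stress_drop            # coseismic stress release
    return stress, quake_years

final, events = evolve(n_years=110, loading_rate=1.0, strength=50.0, stress_drop=40.0)
print(events, final)  # → [49, 89] 30.0  (recurrence = stress_drop / loading_rate)
```

The sketch also makes the abstract's point about initial conditions concrete: changing `s0` shifts every event time, which is why the unknown initial stress state matters.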

  16. Object-Oriented Analysis of Satellite Images Using Artificial Neural Networks for Post-Earthquake Buildings Change Detection

    Science.gov (United States)

    Khodaverdi zahraee, N.; Rastiveis, H.

    2017-09-01

    An earthquake is one of the most devastating natural events that have threatened human life throughout history. After an earthquake, information about the damaged area and the amount and type of damage can be a great help to disaster managers in relief and reconstruction. It is very important that these measures are taken immediately after the earthquake, because any delay can mean greater losses. The purpose of this paper is to propose and implement an automatic approach for mapping destroyed buildings after an earthquake using pre- and post-event high resolution satellite images. In the proposed method, after preprocessing, segmentation of both images is performed using the multi-resolution segmentation technique. Then, the segmentation results are intersected in ArcGIS to obtain equal image objects on both images. After that, appropriate textural features, which better discriminate between changed and unchanged areas, are calculated for all the image objects. Finally, after subtracting the textural features extracted from the pre- and post-event images, the obtained values are applied as an input feature vector to an artificial neural network that classifies the area into two classes, changed and unchanged. The proposed method was evaluated using WorldView-2 satellite images acquired before and after the 2010 Haiti earthquake. The reported overall accuracy of 93% proved the ability of the proposed method for post-earthquake building change detection.
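    The per-object feature-differencing step can be sketched with simple statistics (mean and variance standing in for the paper's textural features); the images and the single image object below are synthetic:

```python
import numpy as np

def object_features(image, mask):
    """Mean and variance of the pixels belonging to one image object."""
    pixels = image[mask]
    return np.array([pixels.mean(), pixels.var()])

rng = np.random.default_rng(1)
pre = rng.normal(100.0, 5.0, (32, 32))        # pre-event intensities
post = pre.copy()
post[:16] = rng.normal(60.0, 30.0, (16, 32))  # upper half "damaged": texture changes

obj_mask = np.zeros((32, 32), dtype=bool)
obj_mask[:16] = True                          # one image object (upper half)
feature_vec = object_features(post, obj_mask) - object_features(pre, obj_mask)
print(feature_vec.round(1))                   # large shifts flag a changed object
```

In the paper this difference vector, computed for richer textural features, is what the artificial neural network receives to decide changed versus unchanged.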

  17. Simulating subduction zone earthquakes using discrete element method: a window into elusive source processes

    Science.gov (United States)

    Blank, D. G.; Morgan, J.

    2017-12-01

    Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrate both their complexity, and our limited understanding of fault processes and their controls. Numerical modeling provides us with a useful tool that we can use to simulate earthquakes and related slip events, and to make direct observations and correlations among properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.

  18. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    Science.gov (United States)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  19. Efficient Allocation of Resources for Defense of Spatially Distributed Networks Using Agent-Based Simulation.

    Science.gov (United States)

    Kroshl, William M; Sarkani, Shahram; Mazzuchi, Thomas A

    2015-09-01

    This article presents ongoing research that focuses on efficient allocation of defense resources to minimize the damage inflicted on a spatially distributed physical network such as a pipeline, water system, or power distribution system from an attack by an active adversary, recognizing the fundamental difference between preparing for natural disasters such as hurricanes, earthquakes, or even accidental systems failures and the problem of allocating resources to defend against an opponent who is aware of, and anticipating, the defender's efforts to mitigate the threat. Our approach is to utilize a combination of integer programming and agent-based modeling to allocate the defensive resources. We conceptualize the problem as a Stackelberg "leader follower" game where the defender first places his assets to defend key areas of the network, and the attacker then seeks to inflict the maximum damage possible within the constraints of resources and network structure. The criticality of arcs in the network is estimated by a deterministic network interdiction formulation, which then informs an evolutionary agent-based simulation. The evolutionary agent-based simulation is used to determine the allocation of resources for attackers and defenders that results in evolutionary stable strategies, where actions by either side alone cannot increase its share of victories. We demonstrate these techniques on an example network, comparing the evolutionary agent-based results to a more traditional, probabilistic risk analysis (PRA) approach. Our results show that the agent-based approach results in a greater percentage of defender victories than does the PRA-based approach. © 2015 Society for Risk Analysis.
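    A toy minimax version of the leader-follower game conveys the structure: the defender hardens k arcs first, and the attacker then destroys the most damaging unprotected arc. The arcs and damage values are invented for illustration; the article's actual method uses network interdiction plus evolutionary agent-based simulation:

```python
from itertools import combinations

damage = {"pump_a": 10, "pump_b": 8, "valve": 6, "pipe_1": 4, "pipe_2": 2}

def best_defense(damage, k):
    """Defender picks k arcs to protect so as to minimize the attacker's
    best achievable damage (attacker hits the worst unprotected arc)."""
    best_payoff, best_set = None, None
    for protected in combinations(damage, k):
        exposed = [v for arc, v in damage.items() if arc not in protected]
        attacker_payoff = max(exposed, default=0)
        if best_payoff is None or attacker_payoff < best_payoff:
            best_payoff, best_set = attacker_payoff, set(protected)
    return best_set, best_payoff

protected, payoff = best_defense(damage, k=2)
print(sorted(protected), payoff)  # → ['pump_a', 'pump_b'] 6
```

This captures the key difference the article stresses between an adaptive adversary and a natural hazard: the defender must plan against the attacker's best response, not against a fixed probability of failure.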

  20. Accessing northern California earthquake data via Internet

    Science.gov (United States)

    Romanowicz, Barbara; Neuhauser, Douglas; Bogaert, Barbara; Oppenheimer, David

    The Northern California Earthquake Data Center (NCEDC) provides easy access to central and northern California digital earthquake data. It is located at the University of California, Berkeley, is operated jointly with the U.S. Geological Survey (USGS) in Menlo Park, Calif., and is funded by the University of California and the National Earthquake Hazard Reduction Program. It has been accessible to users in the scientific community through the Internet since mid-1992. The data center provides an on-line archive for parametric and waveform data from two regional networks: the Northern California Seismic Network (NCSN) operated by the USGS and the Berkeley Digital Seismic Network (BDSN) operated by the Seismographic Station at the University of California, Berkeley.

  1. Crowdsourced earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  2. A minimalist model of characteristic earthquakes

    DEFF Research Database (Denmark)

    Vázquez-Prada, M.; González, Á.; Gómez, J.B.

    2002-01-01

    In a spirit akin to the sandpile model of self-organized criticality, we present a simple statistical model of the cellular-automaton type which simulates the role of an asperity in the dynamics of a one-dimensional fault. This model produces an earthquake spectrum similar to the characteristic-earthquake behaviour of some seismic faults. The model, which has no parameters, is amenable to an algebraic description as a Markov chain. This possibility illuminates some important results, obtained by Monte Carlo simulations, such as the earthquake size-frequency relation and the recurrence time of the characteristic earthquake.
  3. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations

  4. Using Smartphones to Detect Earthquakes

    Science.gov (United States)

    Kong, Q.; Allen, R. M.

    2012-12-01

    We are using the accelerometers in smartphones to record earthquakes. In the future, these smartphones may serve as a supplement to the current traditional networks for scientific research and real-time applications. Given the potential number of smartphones and the small separation of sensors, this new type of seismic dataset has significant potential, provided that the signal can be separated from the noise. We developed an application for Android phones to record the acceleration in real time. These records can be saved on the phone or transmitted back to a server in real time. The accelerometers in the phones were evaluated by comparing their performance with a high-quality accelerometer while mounted on controlled shake tables during a variety of tests. The results show that the accelerometer in a smartphone can reproduce the characteristics of the shaking very well, even when the phone is left to lie freely on the shake table. The nature of these datasets is also quite different from that of traditional networks, because smartphones move around with their owners; therefore, we must distinguish earthquake signals from those of daily use. In addition to the shake table tests that accumulated earthquake records, we also recorded different human activities such as running, walking and driving. An artificial neural network based approach was developed to distinguish between these different records; it achieves a 99.7% success rate in distinguishing earthquakes from the other typical human activities in our database. We are now ready to develop the basic infrastructure for a smartphone seismic network.
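    The record-classification idea can be sketched with a single spectral feature; the signals and the frequency threshold below are toy stand-ins for the recorded dataset and the artificial neural network used in the study:

```python
import numpy as np

def dominant_freq(sig, dt=0.01):
    """Frequency (Hz) of the largest non-DC spectral peak."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, dt)
    return freqs[spec[1:].argmax() + 1]

t = np.arange(0, 10, 0.01)
quake = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)   # ~1 Hz decaying shaking
walking = 0.3 * np.sign(np.sin(2 * np.pi * 2.0 * t))     # square-ish 2 Hz footsteps

for name, sig in [("quake", quake), ("walking", walking)]:
    f = dominant_freq(sig)
    label = "earthquake" if f < 1.5 else "human activity"
    print(name, round(f, 1), label)
```

A real classifier would combine many such features over three axes and learn the decision boundary from data, which is where the neural network earns its 99.7% success rate.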

  5. Wideband simulation of earthquake ground motion by a spectrum-matching, multiple-pulse technique

    International Nuclear Information System (INIS)

    Gusev, A.; Pavlov, V.

    2006-04-01

    To simulate earthquake ground motion, we combine a multiple-point stochastic earthquake fault model with a suite of Green functions. Conceptually, our source model generalizes the classic one of Haskell (1966). At any time instant, slip occurs over a narrow strip that sweeps the fault area at a (spatially variable) velocity. This behavior defines the seismic signal at lower frequencies (LF) and describes directivity effects. The high-frequency (HF) behavior of the source signal is defined by the local slip history, assumed to be a short segment of pulsed noise. For calculations, this model is discretized as a grid of point subsources. Subsource moment rate time histories, in their LF part, are smooth pulses whose duration equals the rise time; in their HF part, they are segments of non-Gaussian noise of similar duration. The spectral content of the subsource time histories is adjusted so that the summary far-field signal follows a certain predetermined spectral scaling law. The results of simulation depend on random seeds and on the particular values of such parameters as: stress drop; the average and dispersion parameter of rupture velocity; the rupture nucleation point; slip zone width/rise time; the wavenumber-spectrum parameter defining the final slip function; the degrees of non-Gaussianity of the random slip rate in time and of the random final slip in space; and more. To calculate ground motion at a site, Green functions are calculated for each subsource-site pair, convolved with the subsource time functions, and finally summed over subsources. The original Green function calculator for a layered, weakly inelastic medium is of the discrete-wavenumber kind, with no intrinsic limitations with respect to layer thickness or bandwidth. The simulation package can generate example motions, or be used to study uncertainties of the predicted motion.
As a test, realistic analogues of recorded motions in the epicentral zone of the 1994 Northridge, California earthquake were synthesized, and related uncertainties were
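The convolve-and-sum step described in this record can be sketched in a few lines: the motion at a site is the sum, over point subsources, of each subsource time function convolved with its own Green function. The function names and toy input series below are hypothetical; a real implementation would use discrete-wavenumber Green functions for the layered medium.

```python
# Sketch of the subsource summation: site motion = sum over subsources of
# conv(Green function, subsource moment-rate time history).

def convolve(a, b):
    """Discrete linear convolution of two equal-step time series."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def site_motion(green_functions, source_time_functions):
    """Sum the convolved contributions of all subsources at one site."""
    traces = [convolve(g, s) for g, s in zip(green_functions, source_time_functions)]
    motion = [0.0] * max(len(t) for t in traces)
    for t in traces:
        for i, v in enumerate(t):
            motion[i] += v
    return motion

# Toy example: two subsources with short invented Green functions.
greens = [[1.0, 0.5, 0.25], [0.8, 0.4]]
stfs = [[0.0, 1.0, 0.0], [1.0, 1.0]]
print(site_motion(greens, stfs))
```

The same loop structure extends directly to many subsources and long time series; production codes replace the nested-loop convolution with an FFT-based one.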

  6. Probabilistic neural network algorithm for using radon emanations as an earthquake precursor

    International Nuclear Information System (INIS)

    Gupta, Dhawal; Shahani, D.T.

    2014-01-01

Investigations throughout the world over the past two decades provide evidence that significant variations of radon and other soil gases occur in association with major geophysical events such as earthquakes. The traditional statistical algorithm includes regression to remove the effect of the meteorological parameters from the raw radon data; anomalies are then calculated either by taking the periodicity of seasonal variations or by a periodicity computed using the Fast Fourier Transform. With neural networks the regression step is avoided: a neural network model can be found which learns the behavior of radon with respect to the meteorological parameters, so that changing emission patterns are adapted to by the model on its own. The output of this neural model is the estimated radon value, which is used to decide whether anomalous behavior of radon has occurred and a valid precursor may be identified. A neural network model developed using a Radial Basis Function (RBF) network gave a prediction rate of 87.7%, but was accompanied by a large number of false alarms. The present paper deals with an improved neural network algorithm using Probabilistic Neural Networks that requires neither an explicit regression step nor the use of any specific period. This neural network model reduces the false alarms to zero while giving the same prediction rate as the RBF network. (author)
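A Probabilistic Neural Network is essentially a Parzen-window (Gaussian-kernel) classifier: each class's density at a new pattern is estimated from stored training exemplars, and the class with the highest density wins. The sketch below illustrates that idea on invented, normalized (radon, temperature, pressure) patterns; it is not the paper's trained model.

```python
# Minimal PNN (Parzen-window) classifier sketch: classify a daily feature
# vector as "normal" or "anomalous" radon behavior. Exemplars are invented.
import math

def pnn_classify(x, classes, sigma=0.5):
    """Return the class whose average Gaussian-kernel density at x is largest."""
    best, best_score = None, -1.0
    for label, exemplars in classes.items():
        score = sum(
            math.exp(-sum((xi - ei) ** 2 for xi, ei in zip(x, e)) / (2 * sigma ** 2))
            for e in exemplars
        ) / len(exemplars)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical normalized (radon, temperature, pressure) training patterns.
training = {
    "normal":    [(0.2, 0.5, 0.5), (0.3, 0.6, 0.4), (0.25, 0.4, 0.55)],
    "anomalous": [(0.9, 0.5, 0.5), (0.85, 0.55, 0.45)],
}
print(pnn_classify((0.88, 0.5, 0.5), training))  # → anomalous
```

Because the exemplars themselves encode the radon-meteorology relationship, no explicit regression step is needed, which matches the motivation given in the abstract.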

  7. Seismogeodesy for rapid earthquake and tsunami characterization

    Science.gov (United States)

    Bock, Y.

    2016-12-01

Rapid estimation of earthquake magnitude and fault mechanism is critical for earthquake and tsunami warning systems. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and on tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of earthquake onset time. Direct measurements of displacements by GPS networks at subduction zones allow rapid magnitude and slip estimation in the near-source region that is not affected by the instrumental limitations and magnitude saturation experienced by local seismic networks. However, GPS displacements by themselves are too noisy for strict earthquake early warning (P-wave detection). Optimally combining high-rate GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude scaling relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), finite-source CMT solutions, and fault slip models as input for tsunami warnings and models. For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of

  8. ASSESSING URBAN STREETS NETWORK VULNERABILITY AGAINST EARTHQUAKE USING GIS – CASE STUDY: 6TH ZONE OF TEHRAN

    OpenAIRE

    A. Rastegar

    2017-01-01

Great earthquakes cause huge damage to human life. Street-network vulnerability causes rescue operations to encounter serious difficulties, especially during the first 72 hours after the incident. Today, the physical expansion and high density of great cities, with their narrow access roads, large distances from medical care centers, and location in areas of high seismic risk, will lead to a perilous and unpredictable situation in the event of an earthquake. Zone # 6 of Tehran, with 229,980 population ...

  9. Bitcoin network simulator data explotation

    OpenAIRE

    Berini Sarrias, Martí

    2015-01-01

This project starts with a brief introduction to the concepts of Bitcoin and blockchain, followed by a description of the different known attacks on the Bitcoin network. Once this point is reached, the basic structure of the Bitcoin network simulator is presented. The main objective of this project is to help in the security assessment of the Bitcoin network. To accomplish that, we try to identify useful metrics, explain them and implement them in the corresponding simulator modules, aiming to ...

  10. Real Time Earthquake Information System in Japan

    Science.gov (United States)

    Doi, K.; Kato, T.

    2003-12-01

An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public, locating an earthquake and estimating its magnitude as quickly as possible. Years later, a system for the prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed so that concerned governmental organizations could decide whether they needed to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of the occurrence of a large earthquake and the major seismic intensities caused by the earthquake, about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established: - An advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments; - Data telemetry networks via landlines and partly via a satellite communication link; - Real-time data processing techniques, for example the automatic calculation of earthquake location and magnitude, and the database-driven method for quantitative tsunami estimation; and - Dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  11. Investigation of Ionospheric Anomalies related to moderate Romanian earthquakes occurred during last decade using VLF/LF INFREP and GNSS Global Networks

    Science.gov (United States)

    Moldovan, Iren-Adelina; Oikonomou, Christina; Haralambous, Haris; Nastase, Eduard; Emilian Toader, Victorin; Biagi, Pier Francesco; Colella, Roberto; Toma-Danila, Dragos

    2017-04-01

Ionospheric TEC (Total Electron Content) variations and Low Frequency (LF) signal amplitude data prior to five moderate earthquakes (Mw≥5) that occurred in Romania, in the Vrancea crustal and subcrustal seismic zones, during the last decade were analyzed using observations from the Global Navigation Satellite System (GNSS) and the European INFREP (International Network for Frontier Research on Earthquake Precursors) networks, respectively, aiming to detect potential ionospheric anomalies related to these events and describe their characteristics. For this, spectral analysis was applied to the TEC data and the terminator time method to the VLF/LF data. It was found that TEC perturbations appeared a few days (1-7) up to a few hours before the events, lasting around 2-3 hours, with periods of 20 and 3-5 minutes, which could be associated with the impending earthquakes. In addition, in all three events the sunrise terminator times were delayed by approximately 20-40 min a few days prior to and during the earthquake day. Acknowledgments: This work was partially supported by the Partnership in Priority Areas Program - PNII, under MEN-UEFISCDI, DARING Project no. 69/2014 and the Nucleu Program - PN 16-35, Project no. 03 01

  12. STUDY ON SUPPORTING FOR DRAWING UP THE BCP FOR URBAN EXPRESSWAY NETWORK USING BY TRAFFIC SIMULATION SYSTEM

    Science.gov (United States)

    Yamawaki, Masashi; Shiraki, Wataru; Inomo, Hitoshi; Yasuda, Keiichi

The urban expressway network is an important infrastructure for executing disaster restoration. It is therefore necessary to draw up a BCP (Business Continuity Plan) that enables securing road users' safety, restoration of facilities, etc. It is important that each urban expressway manager decide on and improve effective BCP countermeasures for when disaster occurs, by assuming various disaster situations. In this study, we develop a traffic simulation system that can reproduce various disaster situations and traffic actions, and examine methods to support drawing up the BCP for an urban expressway network. For disasters beyond the design assumptions, such as a tsunami generated by a huge earthquake, we examine approaches to securing the safety of users and cars on the Hanshin Expressway network as well as on general roads, and we aim to propose a tsunami countermeasure not considered in the current urban expressway BCP.

  13. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    Directory of Open Access Journals (Sweden)

    A. Muhammad

    2017-12-01

Full Text Available This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis, with maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan – including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal–vertical evacuation time maps – has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from stochastic tsunami simulation for future tsunamigenic events.
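The scenario-generation step described above amounts to a Monte Carlo loop over probabilistic source-parameter models. The sketch below mirrors the magnitude set and the count of 300 models from the abstract, but the scaling coefficients, scatter values, and field names are invented placeholders, not the study's actual probabilistic models.

```python
# Monte Carlo sketch of stochastic earthquake-source scenario generation.
import random

def sample_source(rng):
    """Draw one hypothetical source model from placeholder distributions."""
    mw = rng.choice([8.5, 8.75, 9.0])
    # Hypothetical log-linear scaling of rupture dimensions with magnitude,
    # perturbed by lognormal scatter to mimic source-parameter uncertainty.
    length_km = 10 ** (-2.9 + 0.63 * mw + rng.gauss(0.0, 0.15))
    width_km = 10 ** (-1.9 + 0.40 * mw + rng.gauss(0.0, 0.15))
    return {"Mw": mw, "length_km": length_km, "width_km": width_km}

rng = random.Random(42)  # fixed seed for reproducible scenario suites
scenarios = [sample_source(rng) for _ in range(300)]
print(len(scenarios))  # → 300
```

In the actual workflow each sampled source would additionally receive a stochastically synthesized slip distribution before being fed to the tsunami propagation and inundation model.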

  14. GIS BASED SYSTEM FOR POST-EARTHQUAKE CRISIS MANAGMENT USING CELLULAR NETWORK

    OpenAIRE

    Raeesi, M.; Sadeghi-Niaraki, A.

    2013-01-01

Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after disasters such as earthquakes decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed an area, several teams are sent to find the location of the d...

  15. QuakeUp: An advanced tool for a network-based Earthquake Early Warning system

    Science.gov (United States)

    Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo

    2017-04-01

The currently developed and operational regional Earthquake Early Warning systems are grounded on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method which allows an alert to be issued based upon real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently gemmated from the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanded P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, and also of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM) and by mapping the measured and
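The characteristic P-wave period τ_c mentioned in this record is commonly defined (e.g. Kanamori, 2005) from the vertical displacement u(t) over the initial P-wave window as τ_c = 2π / sqrt(∫u̇²dt / ∫u²dt). The sketch below is a generic implementation of that standard definition, not the QuakeUp code.

```python
# Characteristic period tau_c from a P-wave displacement window.
import math

def tau_c(u, dt):
    """tau_c = 2*pi / sqrt( integral(u_dot^2) / integral(u^2) ),
    with the velocity obtained by central finite differences."""
    udot = [(u[i + 1] - u[i - 1]) / (2 * dt) for i in range(1, len(u) - 1)]
    num = sum(v * v for v in udot) * dt
    den = sum(x * x for x in u[1:-1]) * dt
    return 2 * math.pi / math.sqrt(num / den)

# Sanity check: a pure sinusoid of period T should give tau_c close to T.
T, dt = 1.0, 0.001
u = [math.sin(2 * math.pi * t * dt / T) for t in range(3000)]
print(round(tau_c(u, dt), 2))  # → 1.0
```

In an early-warning setting the window is expanded progressively from the P pick, so τ_c (and P_d) evolve as more of the waveform arrives.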

  16. Smartphone-Based Earthquake and Tsunami Early Warning in Chile

    Science.gov (United States)

    Brooks, B. A.; Baez, J. C.; Ericksen, T.; Barrientos, S. E.; Minson, S. E.; Duncan, C.; Guillemot, C.; Smith, D.; Boese, M.; Cochran, E. S.; Murray, J. R.; Langbein, J. O.; Glennie, C. L.; Dueitt, J.; Parra, H.

    2016-12-01

Many locations around the world face high seismic hazard but do not have the resources required to establish traditional earthquake and tsunami warning systems (E/TEW) that utilize scientific-grade seismological sensors. MEMS accelerometers and GPS chips embedded in, or added inexpensively to, smartphones are sensitive enough to provide robust E/TEW if they are deployed in sufficient numbers. We report on a pilot project in Chile, one of the most productive earthquake regions worldwide. There, magnitude 7.5+ earthquakes occurring roughly every 1.5 years and larger tsunamigenic events pose significant local and trans-Pacific hazard. The smartphone-based network described here is being deployed in parallel to the build-out of a scientific-grade network for E/TEW. Our sensor package comprises a smartphone with internal MEMS and an external GPS chipset that provides satellite-based augmented positioning and phase smoothing. Each station is independent of local infrastructure: stations are solar-powered and rely on cellular SIM cards for communications. An Android app performs initial onboard processing and transmits both accelerometer and GPS data to a server employing the FinDer-BEFORES algorithm to detect earthquakes, producing an acceleration-based line-source model for smaller-magnitude earthquakes or a joint seismic-geodetic finite-fault distributed slip model for sufficiently large-magnitude earthquakes. Either source model provides accurate ground shaking forecasts, while distributed slip models for larger offshore earthquakes can be used to infer seafloor deformation for local tsunami warning. The network will comprise 50 stations by Sept. 2016 and 100 stations by Dec. 2016. Since Nov. 2015, batch processing has detected, located, and estimated the magnitude of Mw>5 earthquakes. Operational since June 2016, we have successfully detected two earthquakes > M5 (M5.5, M5.1) that occurred within 100 km of our network while producing zero false alarms.

  17. Twitter earthquake detection: Earthquake monitoring in a social world

    Science.gov (United States)

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
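A short-term-average/long-term-average (STA/LTA) trigger of the kind described above can be sketched in a few lines, here applied to a per-minute count of tweets containing the word "earthquake". The window lengths, threshold, and toy series are illustrative choices, not the USGS-tuned values.

```python
# Minimal STA/LTA trigger on a tweet-frequency time series.

def sta_lta_triggers(counts, sta_len=2, lta_len=10, threshold=4.0):
    """Return indices where the STA/LTA ratio of the series exceeds threshold."""
    triggers = []
    for i in range(lta_len, len(counts)):
        sta = sum(counts[i - sta_len:i]) / sta_len  # short-term average
        lta = sum(counts[i - lta_len:i]) / lta_len  # long-term average
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers

# Quiet background with a burst of "earthquake" tweets starting at minute 15.
series = [1, 0, 2, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 40, 55, 30, 10, 3]
print(sta_lta_triggers(series))  # → [16, 17]
```

The same detector is standard in seismology for triggering on waveform envelopes; applied to tweet counts, the burst dwarfs the background rate and the ratio spikes well above threshold.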

  18. Flood Simulation Using WMS Model in Small Watershed after Strong Earthquake -A Case Study of Longxihe Watershed, Sichuan province, China

    Science.gov (United States)

    Guo, B.

    2017-12-01

Mountain watersheds in western China are prone to flash floods. The Wenchuan earthquake of May 12, 2008 destroyed the land surface and triggered frequent landslides and debris flows, which further exacerbated the flash flood hazard. Two giant torrent and debris flows occurred due to heavy rainfall after the earthquake, one on August 13, 2010 and the other on August 18, 2010. Flash flood reduction and risk assessment are key issues in post-disaster reconstruction. Hydrological prediction models are important and cost-efficient mitigation tools that are widely applied. In this paper, hydrological observation and simulation using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China, and the hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate the runoff process of small mountainous watersheds well. This methodology can be used in other earthquake-affected areas for risk assessment and to predict the magnitude of flash floods. Key Words: Rainfall-runoff modeling. Remote Sensing. Earthquake. WMS.

  19. Simulating Earthquakes for Science and Society: Earthquake Visualizations Ideal for use in Science Communication and Education

    Science.gov (United States)

    de Groot, R.

    2008-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). For example, The Great Southern California ShakeOut was based on a potential magnitude 7.8 earthquake on the southern San Andreas fault. The visualization created for the ShakeOut was a key scientific and communication tool for the earthquake drill. This presentation will also feature SCEC Virtual Display of Objects visualization software developed by SCEC Undergraduate Studies in Earthquake Information Technology interns. According to Gordin and Pea (1995), theoretically visualization should make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  20. Numerical simulation of faulting in the Sunda Trench shows that seamounts may generate megathrust earthquakes

    Science.gov (United States)

    Jiao, L.; Chan, C. H.; Tapponnier, P.

    2017-12-01

The role of seamounts in generating earthquakes has been debated, with some studies suggesting that seamounts could be truncated to generate megathrust events, while other studies indicate that the maximum size of megathrust earthquakes could be reduced because subducting seamounts lead to segmentation. The debate is highly relevant for the seamounts discovered along the Mentawai patch of the Sunda Trench, where previous studies have suggested that a megathrust earthquake will likely occur within decades. In order to model the dynamic behavior of the Mentawai patch, we simulated forearc faulting caused by seamount subduction using the Discrete Element Method. Our models show that rupture behavior in the subduction system is dominated by the stiffness of the overriding plate. When stiffness is low, a seamount can be a barrier to rupture propagation, resulting in several smaller (M≤8.0) events. If, however, stiffness is high, a seamount can cause a megathrust earthquake (M8 class). In addition, we show that a splay fault in the subduction environment could only develop when a seamount is present, and a larger offset along the splay fault is expected when the stiffness of the overriding plate is higher. Our dynamic models are not only consistent with previous findings from seismic profiles and earthquake activity, but also better constrain the rupture behavior of the Mentawai patch, thus contributing to subsequent seismic hazard assessment.

  1. Earthquake-induced landslide-susceptibility mapping using an artificial neural network

    Directory of Open Access Journals (Sweden)

    S. Lee

    2006-01-01

Full Text Available The purpose of this study was to apply and verify landslide-susceptibility analysis techniques using an artificial neural network and a Geographic Information System (GIS), applied to Baguio City, Philippines. The landslides induced by the 16 July 1990 earthquake were studied. Landslide locations were identified from interpretation of aerial photographs and field survey, and a spatial database was constructed from topographic maps, geology, land cover and terrain mapping units. Factors that influence landslide occurrence, such as slope, aspect, curvature and distance from drainage, were calculated from the topographic database. Lithology and distance from faults were derived from the geology database. Land cover was identified from the topographic database, and terrain map units were interpreted from aerial photographs. These factors were used with an artificial neural network to analyze landslide susceptibility. Each factor weight was determined by a back-propagation exercise. Landslide-susceptibility indices were calculated using the back-propagation weights, and susceptibility maps were constructed from the GIS data. The susceptibility map was compared with known landslide locations and verified, demonstrating a prediction accuracy of 93.20%.
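Once back-propagation has assigned a relative weight to each influencing factor, the final mapping step reduces to a weighted sum of each map cell's normalized factor values. The sketch below illustrates only that index computation; the factor names, weights, and cell values are invented, not those derived in the study.

```python
# Weighted-sum landslide-susceptibility index for one GIS map cell.

def susceptibility_index(cell, weights):
    """Weighted sum of normalized factor values for a single cell."""
    return sum(weights[f] * cell[f] for f in weights)

# Hypothetical back-propagation-derived factor weights (sum to 1.0).
weights = {"slope": 0.30, "aspect": 0.10, "curvature": 0.15,
           "dist_drainage": 0.15, "lithology": 0.20, "dist_fault": 0.10}

# Hypothetical normalized factor values for one cell (0 = low, 1 = high risk).
cell = {"slope": 0.9, "aspect": 0.4, "curvature": 0.6,
        "dist_drainage": 0.8, "lithology": 0.7, "dist_fault": 0.5}

print(round(susceptibility_index(cell, weights), 2))  # → 0.71
```

Computing this index for every cell and classifying the result into ranges yields the susceptibility map that is then compared against the known landslide inventory.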

  2. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    Science.gov (United States)

    Yu, E.; Bhaskaran, A.; Chen, S.; Chowdhury, F. R.; Meisenhelter, S.; Hutton, K.; Given, D.; Hauksson, E.; Clayton, R. W.

    2010-12-01

    Currently the SCEDC archives continuous and triggered data from nearly 5000 data channels from 425 SCSN recorded stations, processing and archiving an average of 12,000 earthquakes each year. The SCEDC provides public access to these earthquake parametric and waveform data through its website www.data.scec.org and through client applications such as STP and DHI. This poster will describe the most significant developments at the SCEDC in the past year. Updated hardware: ● The SCEDC has more than doubled its waveform file storage capacity by migrating to 2 TB disks. New data holdings: ● Waveform data: Beginning Jan 1, 2010 the SCEDC began continuously archiving all high-sample-rate strong-motion channels. All seismic channels recorded by SCSN are now continuously archived and available at SCEDC. ● Portable data from El Mayor Cucapah 7.2 sequence: Seismic waveforms from portable stations installed by researchers (contributed by Elizabeth Cochran, Jamie Steidl, and Octavio Lazaro-Mancilla) have been added to the archive and are accessible through STP either as continuous data or associated with events in the SCEDC earthquake catalog. This additional data will help SCSN analysts and researchers improve event locations from the sequence. ● Real time GPS solutions from El Mayor Cucapah 7.2 event: Three component 1Hz seismograms of California Real Time Network (CRTN) GPS stations, from the April 4, 2010, magnitude 7.2 El Mayor-Cucapah earthquake are available in SAC format at the SCEDC. These time series were created by Brendan Crowell, Yehuda Bock, the project PI, and Mindy Squibb at SOPAC using data from the CRTN. The El Mayor-Cucapah earthquake demonstrated definitively the power of real-time high-rate GPS data: they measure dynamic displacements directly, they do not clip and they are also able to detect the permanent (coseismic) surface deformation. ● Triggered data from the Quake Catcher Network (QCN) and Community Seismic Network (CSN): The SCEDC in

  3. Laboratory generated M -6 earthquakes

    Science.gov (United States)

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.

  4. Numerical simulation of multiple-physical fields coupling for thermal anomalies before earthquakes: A case study of the 2008 Wenchuan Ms8.0 earthquake in southwest China

    Science.gov (United States)

    Deng, Z.

    2017-12-01

Thermal anomalies appearing before major earthquakes have become a highly focused issue. There are various hypotheses about the mechanism of these anomalies, but for lack of sufficient evidence the mechanism still requires further research. The gestation and occurrence of a major earthquake are related to the interaction of multiple physical fields, and underground fluid surging to the surface is very likely the cause of the thermal anomaly. This study tries to answer questions such as how geothermal energy is transferred to the surface and how the multiple physical fields interact. The 2008 Wenchuan Ms8.0 earthquake is one of the largest events of the last decade in mainland China. Remote sensing studies indicate that distinguishable thermal anomalies occurred several days before the earthquake; the heat anomaly value is more than 3 times the average in normal times and is distributed along the Longmen Shan fault zone. Based on geological and geophysical data, a 2D dynamic model of coupled stress, seepage and thermal fields (HTM model) is constructed. Using the COMSOL Multiphysics software, this work tries to reveal the generation process and distribution patterns of thermal anomalies prior to thrust-type major earthquakes. The simulation yields the following results. (1) Before the micro-rupture, with increasing compression, the heat current flows toward the fault in the footwall on the whole, while in the hanging wall of the fault, particularly near the ground surface, the heat flows upward. In the fault zone, heat flows upward along the fracture surface, and the heat flux in the fracture zone is slightly larger than in the wall rock, but the values are all very small. (2) After the occurrence of the micro-fracture, the heat flow rapidly collects at the faults. In the fault zones, the heat flow accelerates upward along the fracture surfaces, the heat flux increases suddenly, and the vertical heat flux reaches its maximum. The heat flux in the 3 fracture

  5. Airport Network Flow Simulator

    Science.gov (United States)

    1978-10-01

    The Airport Network Flow Simulator is a FORTRAN IV simulation of the flow of air traffic in the nation's 600 commercial airports. It calculates for any group of selected airports: (a) the landing and take-off (Type A) delays; and (b) the gate departu...

  6. Study on tsunami due to offshore earthquakes for Korea coast. Literature survey and numerical simulation on earthquake and tsunami in the Japan Sea and the East China Sea

    International Nuclear Information System (INIS)

    Matsuyama, Masafumi; Aoyagi, Yasuhira; Inoue, Daiei; Choi, Weon-Hack; Kang, Keum-Seok

    2008-01-01

In Korea, there has been concern about tsunami risks for nuclear power plants since the 1983 Nihonkai-Chubu earthquake tsunami, whose maximum run-up height reached 4 m just north of the Ulchin nuclear power plant site. The east coast of Korea was also attacked by a tsunami a few meters high generated by the 1993 Hokkaido Nansei-Oki earthquake. Both source areas lay in the region from western off Hokkaido to the eastern margin of the Japan Sea, which retains further tsunami potential. It is therefore necessary to study tsunami risks for the coast of Korea by means of geological investigation and numerical simulation. Historical records of earthquakes and tsunamis in the Japan Sea were re-compiled to evaluate the tsunami potential, and a database of marine active faults in the Japan Sea was compiled to determine the regional tsunami potential. Many well-developed reverse faults are found in the areas from western off Hokkaido to the eastern margin of the Japan Sea. The authors have found no historical earthquake in the East China Sea that caused a tsunami observed on the coast of Korea. Five fault models were therefore determined on the basis of the analysis of historical records and recent research results on fault parameters and tsunamis. Tsunami heights were estimated by numerical simulation using nonlinear dispersive wave theory. The results of the simulations indicate that the tsunami heights in these cases are less than 0.25 m along the coast of Korea, and that the tsunami risk from these assumed faults does not lead to severe impact. It is concluded that tsunamis occurring in the areas from western off Hokkaido to the eastern margin of the Japan Sea consequently have the most significant impact on Korea. (author)

  7. Parallelization of the Coupled Earthquake Model

    Science.gov (United States)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  8. Three-Dimensional Finite Difference Simulation of Ground Motions from the August 24, 2014 South Napa Earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, Arthur J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Univ. of California, Berkeley, CA (United States); Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Pitarka, Arben [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-06-15

    We performed three-dimensional (3D) anelastic ground motion simulations of the South Napa earthquake to investigate the performance of different finite rupture models and the effects of 3D structure on the observed wavefield. We considered rupture models reported by Dreger et al. (2015), Ji et al. (2015), Wei et al. (2015) and Melgar et al. (2015). We used the SW4 anelastic finite difference code developed at Lawrence Livermore National Laboratory (Petersson and Sjogreen, 2013) and distributed by the Computational Infrastructure for Geodynamics. This code can compute the seismic response for fully 3D sub-surface models, including surface topography and linear anelasticity. We use the 3D geologic/seismic model of the San Francisco Bay Area developed by the United States Geological Survey (Aagaard et al., 2008, 2010). Evaluation of earlier versions of this model indicated that the structure can reproduce the main features of observed waveforms from moderate earthquakes (Rodgers et al., 2008; Kim et al., 2010). Simulations were performed for a domain covering local distances (< 25 km), with a resolution providing simulated ground motions valid up to 1 Hz.
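A validity limit like "valid to 1 Hz" follows from the standard points-per-wavelength resolution rule for finite difference codes such as SW4. A minimal sketch; the grid spacing, minimum shear-wave speed, and points-per-wavelength count below are hypothetical illustrative values, not figures from this report:

```python
def max_resolved_frequency(min_shear_velocity_m_s, grid_spacing_m,
                           points_per_wavelength=8):
    """Highest reliably resolved frequency: the shortest shear wavelength
    v_min / f_max must span ~points_per_wavelength grid cells, so
    f_max = v_min / (points_per_wavelength * h)."""
    return min_shear_velocity_m_s / (points_per_wavelength * grid_spacing_m)

# Hypothetical model: 500 m/s minimum shear-wave speed, 62.5 m grid spacing
print(max_resolved_frequency(500.0, 62.5))  # → 1.0 (Hz)
```

Halving the grid spacing (at roughly 16x the compute cost in 3D plus time step) doubles the resolved frequency, which is why such simulations quote an explicit frequency band.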

  9. Fast 3D seismic wave simulations of 24 August 2016 Mw 6.0 central Italy earthquake for visual communication

    Directory of Open Access Journals (Sweden)

    Emanuele Casarotti

    2016-12-01

    Full Text Available We present here the first application of a fast-reacting framework for 3D simulations of seismic wave propagation generated by earthquakes of magnitude Mw ≥ 5 in the Italian region. The driving motivation is to offer a visualization of the natural phenomenon to the general public, but also to provide preliminary modeling to experts and civil protection operators. We report here a description of this framework during the emergency of the 24 August 2016 Mw 6.0 central Italy earthquake, a discussion of the accuracy of the simulation for this seismic event, and a preliminary critical analysis of the visualization structure and of the public's reaction.

  10. The SCEC/USGS dynamic earthquake rupture code verification exercise

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed—a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little). In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous

  11. On the plant operators performance during earthquake

    International Nuclear Information System (INIS)

    Kitada, Y.; Yoshimura, S.; Abe, M.; Niwa, H.; Yoneda, T.; Matsunaga, M.; Suzuki, T.

    1994-01-01

    There is little data with which to judge the performance of plant operators during and after strong earthquakes. In order to obtain such data and enhance confidence in plant operation, a Japanese utility and a power plant manufacturer carried out a vibration test using a shaking table. The purpose of the test was to investigate operator performance, i.e., the quickness and correctness of switch handling and panel meter read-out. The movement of chairs during an earthquake was also of interest, because if chairs moved significantly or overturned during a strong earthquake, some arresting mechanism would be required for the chair. Although there were differences between the simulated earthquake motions used and actual earthquakes, mainly due to the specifications of the shaking table, the earthquake motions had almost no influence on the operators' capability (performance) in operating the simulated console and the personal computers.

  12. Detecting earthquakes over a seismic network using single-station similarity measures

    Science.gov (United States)

    Bergen, Karianne J.; Beroza, Gregory C.

    2018-06-01

    New blind waveform-similarity-based detection methods, such as Fingerprint and Similarity Thresholding (FAST), have shown promise for detecting weak signals in long-duration, continuous waveform data. While blind detectors are capable of identifying similar or repeating waveforms without templates, they can also be susceptible to false detections due to local correlated noise. In this work, we present a set of three new methods that allow us to extend single-station similarity-based detection over a seismic network: event-pair extraction, pairwise pseudo-association, and event resolution, which together form a post-processing pipeline that combines single-station similarity measures (e.g. the FAST sparse similarity matrix) from each station in a network into a list of candidate events. The core technique, pairwise pseudo-association, leverages the pairwise structure of event detections in its network detection model, which allows it to identify events observed at multiple stations in the network without modeling the expected moveout. Though our approach is general, we apply it to extend FAST over a sparse seismic network. We demonstrate that our network-based extension of FAST is both sensitive and maintains a low false detection rate. As a test case, we apply our approach to 2 weeks of continuous waveform data from five stations during the foreshock sequence prior to the 2014 Mw 8.2 Iquique earthquake. Our method identifies nearly five times as many events as the local seismicity catalogue (including 95 per cent of the catalogue events), and less than 1 per cent of these candidate events are false detections.
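The core idea of pairwise pseudo-association, keeping only event pairs whose times agree across enough stations, with no moveout model, can be sketched as follows. The binning tolerance, station names, and voting rule are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

def pseudo_associate(station_pairs, tol=2.0, min_stations=2):
    """Collect similar-event pairs (t1, t2) detected independently at each
    station; a pair becomes a network candidate when enough stations agree
    on both event times to within `tol` seconds. No moveout model is
    needed because the pair structure is shared across stations."""
    votes = defaultdict(set)
    for station, pairs in station_pairs.items():
        for t1, t2 in pairs:
            key = (round(t1 / tol), round(t2 / tol))  # coarse time binning
            votes[key].add(station)
    return [key for key, stations in votes.items() if len(stations) >= min_stations]

# Illustrative per-station detection lists: (time_1, time_2) in seconds.
# STA1's second pair is local correlated noise seen at no other station.
detections = {
    "STA1": [(10.0, 250.0), (90.0, 400.0)],
    "STA2": [(10.5, 250.4)],
    "STA3": [(10.8, 250.9)],
}
print(pseudo_associate(detections))  # → [(5, 125)]  (the shared event pair)
```

The single-station false detection at STA1 is discarded because no other station votes for its time bin, which is the mechanism by which the network extension suppresses correlated-noise triggers.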

  13. Earthquake cycle simulations with rate-and-state friction and power-law viscoelasticity

    Science.gov (United States)

    Allison, Kali L.; Dunham, Eric M.

    2018-05-01

    We simulate earthquake cycles with rate-and-state fault friction and off-fault power-law viscoelasticity for the classic 2D antiplane shear problem of a vertical, strike-slip plate boundary fault. We investigate the interaction between fault slip and bulk viscous flow with experimentally-based flow laws for quartz-diorite and olivine for the crust and mantle, respectively. Simulations using three linear geotherms (dT/dz = 20, 25, and 30 K/km) produce different deformation styles at depth, ranging from significant interseismic fault creep to purely bulk viscous flow. However, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. Despite these similarities, variations in the predicted surface deformation might permit discrimination of the deformation mechanism using geodetic observations. Additionally, in the 25 and 30 K/km simulations, the crust drags the mantle; the 20 K/km simulation also predicts this, except within 10 km of the fault where the reverse occurs. However, basal tractions play a minor role in the overall force balance of the lithosphere, at least for the flow laws used in our study. Therefore, the depth-integrated stress on the fault is balanced primarily by shear stress on vertical, fault-parallel planes. Because strain rates are higher directly below the fault than far from it, stresses are also higher. Thus, the upper crust far from the fault bears a substantial part of the tectonic load, resulting in unrealistically high stresses. In the real Earth, this might lead to distributed plastic deformation or formation of subparallel faults. Alternatively, fault pore pressures in excess of hydrostatic and/or weakening mechanisms such as grain size reduction and thermo-mechanical coupling could lower the strength of the ductile fault root in the lower crust and, concomitantly, off-fault upper crustal stresses.
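The rate-and-state friction law underlying such cycle simulations can be stated compactly; the parameter values below are generic laboratory-scale numbers, not those used in the study:

```python
import math

def rss_friction(V, theta, a=0.015, b=0.020, mu0=0.6, V0=1e-6, Dc=0.01):
    """Rate-and-state friction coefficient:
       mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),
    with slip rate V (m/s) and state variable theta (s)."""
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

# At steady state theta_ss = Dc/V, so mu_ss = mu0 + (a - b)*ln(V/V0);
# a - b < 0 makes the fault velocity-weakening, the condition required
# for earthquake nucleation in such models.
V = 1e-3
print(round(rss_friction(V, theta=0.01 / V), 4))  # → 0.5655, below mu0
```

The sign of a - b as a function of depth (via temperature) is what separates the seismogenic zone from the creeping fault root discussed in the abstract.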

  14. How geometry and structure control the seismic radiation : spectral element simulation of the dynamic rupture of the Mw 9.0 Tohoku earthquake

    Science.gov (United States)

    Festa, G.; Vilotte, J.; Scala, A.

    2012-12-01

    The M 9.0, 2011 Tohoku earthquake, along the North American-Pacific plate boundary east of Honshu Island, yielded a complex broadband rupture extending southwards over 600 km along strike and triggered a large tsunami that ravaged the east coast of northern Japan. Strong motion and high-rate continuous GPS data, recorded all along the Japanese archipelago by the national seismic networks K-Net and Kik-net and the geodetic network Geonet, together with teleseismic data, indicated a complex frequency-dependent rupture. Low-frequency signals (f < 0.5 Hz) were associated with a region of very large slip (several tens of meters), extending along-dip over about 100 km between the hypocenter and the trench, and 150 to 200 km along strike. This slip asperity was likely the cause of the localized tsunami source and of the large-amplitude tsunami waves. High-frequency signals (f > 0.5 Hz) were instead generated close to the coast in the deeper part of the subduction zone, by at least four smaller-size asperities, with possible repeated slip, and were mostly the cause of the ground shaking felt in the eastern part of Japan. The deep origin of the high-frequency radiation was also confirmed by teleseismic high-frequency back-projection analysis. Intermediate-frequency analysis showed a transition between the shallow and deeper parts of the fault, with the rupture almost confined in a small stripe containing the hypocenter before propagating southward along strike, indicating a predominant in-plane rupture mechanism in the initial stage of the rupture itself. We numerically investigate the role of the geometry of the subduction interface and of the structural properties of the subduction zone in the broadband dynamic rupture and radiation of the Tohoku earthquake. Based upon the almost in-plane behavior of the rupture in its initial stage, 2D non-smooth spectral element dynamic simulations of the earthquake rupture propagation are performed, including the non-planar, kinked geometry of the subduction interface, together with bi-material interfaces


  15. Coherence of Mach fronts during heterogeneous supershear earthquake rupture propagation: Simulations and comparison with observations

    Science.gov (United States)

    Bizzarri, A.; Dunham, Eric M.; Spudich, P.

    2010-01-01

    We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequency with respect to a subshear one. (2) When a Mach pulse is present, its high frequency content overwhelms that arising from stress heterogeneity. Present numerical experiments indicate that a Mach pulse causes approximately an ω−1.7 high frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation

  16. Feasibility study of earthquake early warning (EEW) in Hawaii

    Science.gov (United States)

    Thelen, Weston A.; Hotovec-Ellis, Alicia J.; Bodin, Paul

    2016-09-30

    The effects of earthquake shaking on the population and infrastructure across the State of Hawaii could be catastrophic, and the high seismic hazard in the region emphasizes the likelihood of such an event. Earthquake early warning (EEW) has the potential to give several seconds of warning before strong shaking starts, and thus reduce loss of life and damage to property. The two approaches to EEW are (1) a network approach (such as ShakeAlert or ElarmS) where the regional seismic network is used to detect the earthquake and distribute the alarm and (2) a local approach where a critical facility has a single seismometer (or small array) and a warning system on the premises. The network approach, also referred to here as ShakeAlert or ElarmS, uses the closest stations within a regional seismic network to detect and characterize an earthquake. Most parameters used for a network approach require observations on multiple stations (typically 3 or 4), which slows down the alarm time slightly, but the alarms are generally more reliable than with single-station EEW approaches. The network approach also benefits from having stations closer to the source of any potentially damaging earthquake, so that alarms can be sent ahead to anyone who subscribes to receive the notification. Thus, a fully implemented ShakeAlert system can provide seconds of warning for both critical facilities and general populations ahead of damaging earthquake shaking. The cost to implement and maintain a fully operational ShakeAlert system is high compared to a local approach or single-station solution, but the benefits of a ShakeAlert system would be felt statewide—the warning times for strong shaking are potentially longer for most sources at most locations. The local approach, referred to herein as “single station,” uses measurements from a single seismometer to assess whether strong earthquake shaking can be expected. Because of the reliance on a single station, false alarms are more common than
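The network-versus-local trade-off comes down to simple travel-time arithmetic; the wave speeds, trigger distance, and latency below are rough generic values, not figures from the feasibility study:

```python
def warning_time(epicentral_km, s_speed_km_s=3.5, p_speed_km_s=6.0,
                 trigger_distance_km=20.0, latency_s=2.0):
    """Seconds of warning before S-wave arrival at a site, for a network
    that declares an event once P waves reach stations ~trigger_distance_km
    from the source, plus processing and telemetry latency."""
    s_arrival = epicentral_km / s_speed_km_s
    alert_issued = trigger_distance_km / p_speed_km_s + latency_s
    return max(0.0, s_arrival - alert_issued)

print(round(warning_time(100.0), 1))  # → 23.2 (seconds of warning at 100 km)
print(warning_time(10.0))             # → 0.0  (inside the "blind zone")
```

The second call illustrates why sites very close to the source get little or no warning from any approach, while distant sites benefit most from the network's head start.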

  17. Foreshocks and aftershocks locations of the 2014 Pisagua, N. Chile earthquake: history of a megathrust earthquake nucleation

    Science.gov (United States)

    Fuenzalida Velasco, Amaya; Rietbrock, Andreas; Tavera, Hernando; Ryder, Isabelle; Ruiz, Sergio; Thomas, Reece; De Angelis, Silvio; Bondoux, Francis

    2015-04-01

    The April 2014 Mw 8.1 Pisagua earthquake occurred in the northern Chile seismic gap: a region of the South American subduction zone lying between the city of Arica and the Mejillones Peninsula. It is believed that this part of the subduction zone has not experienced a large earthquake since 1877. Thanks to the identification of this seismic gap, northern Chile was well instrumented before the Pisagua earthquake, including the Integrated Plate boundary Observatory Chile (IPOC) network and the Chilean local network installed by the Centro Sismologico Nacional (CSN). These instruments recorded the full foreshock and aftershock sequences, providing a unique opportunity to study the nucleation process of large megathrust earthquakes. To improve azimuthal coverage of the Pisagua seismic sequence, after the earthquake we installed, in collaboration with the Instituto Geofisico del Peru (IGP), a temporary seismic network in southern Peru. The network comprised 12 short-period stations located in the coastal area between Moquegua and Tacna, operative from 1 May 2014. We also installed three stations on the slopes of the Ticsiani volcano to monitor any possible change in volcanic activity following the Pisagua earthquake. In this work we analysed the continuous seismic data recorded by the CSN and IPOC networks from 1 March to 30 June to obtain a catalogue of the sequence, including foreshocks and aftershocks. Using an automatic algorithm based on STA/LTA, we obtained picks for P and S waves. Events were defined by association in time and space, and initial locations were computed using Hypo71 and a 1D local velocity model. More than 11,000 events were identified with this method for the whole period; from these we selected the best-resolved events, those with more than 7 observed arrivals including at least 2 S picks, and relocated them using the NonLinLoc software. For the main events of the sequence we carefully estimated event locations and we obtained
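An STA/LTA trigger of the kind used for the automatic picking can be sketched in a few lines; window lengths and threshold are typical illustrative choices, not the values used in this study:

```python
def sta_lta_picks(trace, dt, sta_win=1.0, lta_win=10.0, threshold=4.0):
    """Return sample indices where the short-term average (STA) of squared
    amplitude exceeds `threshold` times the long-term average (LTA) --
    the classic energy-ratio trigger behind automatic P/S picking."""
    n_sta, n_lta = int(sta_win / dt), int(lta_win / dt)
    energy = [x * x for x in trace]
    picks = []
    for i in range(n_lta, len(trace)):
        sta = sum(energy[i - n_sta:i]) / n_sta
        lta = sum(energy[i - n_lta:i]) / n_lta
        if lta > 0.0 and sta / lta > threshold:
            picks.append(i)
    return picks

# Synthetic trace: low-level noise, then a strong arrival at sample 200
trace = [0.1] * 200 + [5.0] * 20 + [0.1] * 100
picks = sta_lta_picks(trace, dt=0.1)
print(picks[0])  # → 201, one sample after onset
```

In production pickers the windows are updated recursively rather than re-summed, and consecutive triggered samples are merged into a single pick, but the detection criterion is the same ratio test.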

  18. Twitter earthquake detection: earthquake monitoring in a social world

    Directory of Open Access Journals (Sweden)

    Daniel C. Bowden

    2011-06-01

    Full Text Available The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word “earthquake” clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.

  19. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    Science.gov (United States)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s of the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is reliable and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point-source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters can be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (since January 2012) at the Institute of Earth Sciences (IES), Academia Sinica
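The grid-search principle behind RMT can be illustrated with a toy epicentre search (straight rays, constant velocity, made-up station geometry); the real system searches centroid moment tensors on a 3-D grid against a precomputed Green's function database:

```python
import math

def grid_search_location(observed_arrivals, stations, grid, v=6.0):
    """Pick the grid node whose predicted P arrivals (straight-ray,
    constant velocity) best fit the observations in a least-squares
    sense -- a toy analogue of the RMT grid-search scheme."""
    best, best_misfit = None, float("inf")
    for node in grid:
        preds = [math.dist(node, s) / v for s in stations]
        # remove the unknown origin time by shifting to the mean residual
        shift = sum(o - p for o, p in zip(observed_arrivals, preds)) / len(preds)
        misfit = sum((o - (p + shift)) ** 2
                     for o, p in zip(observed_arrivals, preds))
        if misfit < best_misfit:
            best, best_misfit = node, misfit
    return best

# Made-up geometry: four stations, a source at (20, 30) km, origin time 5 s
stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0)]
true_src, t0 = (20.0, 30.0), 5.0
obs = [math.dist(true_src, s) / 6.0 + t0 for s in stations]
grid = [(float(x), float(y)) for x in range(0, 51, 10) for y in range(0, 51, 10)]
print(grid_search_location(obs, stations, grid))  # → (20.0, 30.0)
```

Because every grid node's predictions can be precomputed (as RMT does with its Green's functions), the search at run time reduces to misfit evaluations, which is what makes the 117-second turnaround feasible.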

  20. Determination of Design Basis Earthquake ground motion

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Muneaki [Japan Atomic Power Co., Tokyo (Japan)

    1997-03-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, the earthquake response spectrum and simulated seismic waves. In the appendix, furthermore, the seismic safety review for NPPs designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)

  1. Determination of Design Basis Earthquake ground motion

    International Nuclear Information System (INIS)

    Kato, Muneaki

    1997-01-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, the earthquake response spectrum and simulated seismic waves. In the appendix, furthermore, the seismic safety review for NPPs designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)

  2. The MeSO-net (Metropolitan Seismic Observation network) confronts the Pacific Coast of Tohoku Earthquake, Japan (Mw 9.0)

    Science.gov (United States)

    Kasahara, K.; Nakagawa, S.; Sakai, S.; Nanjo, K.; Panayotopoulos, Y.; Morita, Y.; Tsuruoka, H.; Kurashimo, E.; Obara, K.; Hirata, N.; Aketagawa, T.; Kimura, H.

    2011-12-01

    In April 2007, we launched a special project for earthquake disaster mitigation in the Tokyo metropolitan area (fiscal 2007-2011). As part of this project, construction of the MeSO-net (Metropolitan Seismic Observation network) has been completed, with about 300 stations deployed, mainly at elementary and junior high schools, at a spacing of about 5 km, resulting in a highly dense network that covers the metropolitan area. To achieve stable seismic observation with lower surface ground noise relative to measurement at the surface, the sensors at all stations were installed in boreholes at a depth of about 20 m. The sensors have a wide dynamic range (135 dB) and a wide frequency band (DC to 80 Hz). Data are digitized at 200 Hz sampling and telemetered to the Earthquake Research Institute, University of Tokyo. The MeSO-net, which can detect and locate most earthquakes with magnitudes above 2.5, provides a unique baseline for scientific and engineering research on the Tokyo metropolitan area, as follows. One of its main contributions is to greatly improve the image of the Philippine Sea plate (PSP) (Nakagawa et al., 2010) and to provide an accurate estimate of the plate boundaries between the PSP and the Pacific plate, allowing a clearer understanding of the relation between PSP deformation and the generation of M7+ intra-slab earthquakes. The latest version of the plate model in the metropolitan area, proposed by our project, also attracts various researchers, who compare it with highly accurate fault-mechanism solutions, repeating earthquakes, etc. Moreover, long-period ground motions generated by the 2011 earthquake off the Pacific coast of Tohoku (Mw 9.0) were observed by the MeSO-net and analyzed to obtain array back-projection imaging of this event (Honda et al., 2011). As a result, the overall pattern of the imaged asperities coincides well with the slip distribution determined from other waveform inversion

  3. Discrimination Analysis of Earthquakes and Man-Made Events Using ARMA Coefficients Determination by Artificial Neural Networks

    International Nuclear Information System (INIS)

    AllamehZadeh, Mostafa

    2011-01-01

    A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions, with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of the seismic records. Second, we use the QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then fed to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) to automatically recognize subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0–6.5) recorded by the Iranian National Seismic Network (INSN). 100% correct decisions were obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as several 3C stations can be combined.
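As a simplified stand-in for the ARMA feature-extraction step (the paper derives the coefficients with a neural system; here a plain Yule-Walker AR(2) fit, an assumption for illustration only):

```python
import math

def ar2_features(x):
    """Fit an AR(2) model via the Yule-Walker equations and return its two
    coefficients -- a minimal stand-in for the ARMA feature vectors fed to
    a classifier."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]

    def acov(k):  # biased sample autocovariance at lag k
        return sum(x[i] * x[i + k] for i in range(n - k)) / n

    r0, r1, r2 = acov(0), acov(1), acov(2)
    det = r0 * r0 - r1 * r1
    return (r1 * (r0 - r2) / det, (r0 * r2 - r1 * r1) / det)

# A pure sinusoid obeys x_t = 2*cos(w)*x_{t-1} - x_{t-2}, so the fitted
# coefficients should approach 2*cos(0.5) ≈ 1.755 and -1.
sig = [math.sin(0.5 * i) for i in range(2000)]
a1, a2 = ar2_features(sig)
print(round(a1, 2), round(a2, 2))
```

Coefficient vectors like (a1, a2), extended to higher orders and to the moving-average part, compactly summarize a waveform's spectral shape, which is what makes them usable as discrimination features.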

  4. Discrimination Analysis of Earthquakes and Man-Made Events Using ARMA Coefficients Determination by Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    AllamehZadeh, Mostafa, E-mail: dibaparima@yahoo.com [International Institute of Earthquake Engineering and Seismology (Iran, Islamic Republic of)

    2011-12-15

    A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions, with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of the seismic records. Second, we use the QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then fed to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) to automatically recognize subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0-6.5) recorded by the Iranian National Seismic Network (INSN). 100% correct decisions were obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as several 3C stations can be combined.

  5. Ground motion modeling of Hayward fault scenario earthquakes II:Simulation of long-period and broadband ground motions

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B T; Graves, R W; Rodgers, A; Brocher, T M; Simpson, R W; Dreger, D; Petersson, N A; Larsen, S C; Ma, S; Jachens, R C

    2009-11-04

    We simulate long-period (T > 1.0-2.0 s) and broadband (T > 0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7-7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area, with about 50% of the urban area experiencing MMI VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland and 2007 Mw 4.5 Alum Rock earthquakes show that the USGS Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution, and a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than predicted by the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute at least some of this difference to the relatively narrow width of the Hayward fault ruptures. The simulations suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by including a dependence on the rupture speed and increasing the areal extent of rupture directivity with period. The simulations also indicate that the NGA relations may under-predict amplification in shallow sedimentary basins.

  6. Stochastic Simulation of Biomolecular Reaction Networks Using the Biomolecular Network Simulator Software

    National Research Council Canada - National Science Library

    Frazier, John; Chusak, Yaroslav; Foy, Brent

    2008-01-01

    .... The software uses either exact or approximate stochastic simulation algorithms for generating Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks...

  7. The ShakeOut earthquake source and ground motion simulations

    Science.gov (United States)

    Graves, R.W.; Houston, Douglas B.; Hudnut, K.W.

    2011-01-01

    The ShakeOut Scenario is premised upon the detailed description of a hypothetical Mw 7.8 earthquake on the southern San Andreas Fault and the associated simulated ground motions. The main features of the scenario, such as its endpoints, magnitude, and gross slip distribution, were defined through expert opinion and incorporated information from many previous studies. Slip at smaller length scales, rupture speed, and rise time were constrained using empirical relationships and experience gained from previous strong-motion modeling. Using this rupture description and a 3-D model of the crust, broadband ground motions were computed over a large region of Southern California. The largest simulated peak ground acceleration (PGA) and peak ground velocity (PGV) generally range from 0.5 to 1.0 g and 100 to 250 cm/s, respectively, with the waveforms exhibiting strong directivity and basin effects. Use of a slip-predictable model results in a high static stress drop event and produces ground motions somewhat higher than median level predictions from NGA ground motion prediction equations (GMPEs).

  8. A suite of exercises for verifying dynamic earthquake rupture codes

    Science.gov (United States)

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  9. Assessing Urban Streets Network Vulnerability against Earthquake Using GIS - Case Study: 6th Zone of Tehran

    Science.gov (United States)

    Rastegar, A.

    2017-09-01

Great earthquakes cause huge damage to human life. Street network vulnerability makes rescue operations encounter serious difficulties, especially in the first 72 hours after the incident. Today, the physical expansion and high density of great cities, with narrow access roads, large distances from medical care centers, and locations in areas of high seismic risk, lead to a perilous and unpredictable situation in case of an earthquake. Zone #6 of Tehran, with a population of 229,980 (3.6% of the city's population) and an area of 20 km2 (3.2% of the city's area), is one of the main municipal zones of Tehran (Iran Center of Statistics, 2006). Major land uses, such as ministries, embassies, universities, general hospitals and medical centers, and big financial firms, manifest the high importance of this region on the local and national scale. In this paper, by employing indexes such as access to medical centers, street inclusion, building and population density, land use, PGA, and building quality, the vulnerability degree of the street network in Zone #6 against earthquake is calculated by overlaying maps and data in combination with the IHWP method and GIS. This article concludes that buildings alongside streets with high population and building density, low building quality, long distances to rescue centers, and a high level of inclusion show a high rate of vulnerability compared with other buildings. Also, moving from the north to the south of the zone, the vulnerability increases. Likewise, highways and streets with substantial width and low building and population density hold low vulnerability values.
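
The map-overlay calculation described above reduces, per street segment, to a weighted sum of normalized vulnerability indices. A minimal sketch follows; the index names, values, and weights are hypothetical placeholders, not the IHWP weights derived in the paper:

```python
# Weighted-overlay vulnerability score for one street segment.
# All index names, values, and weights below are hypothetical.

def vulnerability_score(indices, weights):
    """Combine normalized (0-1) vulnerability indices into a single score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * indices[k] for k in weights)

# Hypothetical normalized indices (1.0 = most vulnerable on that criterion).
indices = {
    "distance_to_medical": 0.8,   # far from rescue centers
    "building_density":    0.7,
    "population_density":  0.9,
    "building_quality":    0.6,   # inverted, so higher = worse quality
    "pga":                 0.5,
}
weights = {
    "distance_to_medical": 0.30,
    "building_density":    0.20,
    "population_density":  0.20,
    "building_quality":    0.20,
    "pga":                 0.10,
}

score = vulnerability_score(indices, weights)
print(round(score, 3))  # 0.73 for these hypothetical inputs
```

In the paper's workflow the weights would come from the IHWP hierarchy and the indices from GIS layers; here they only demonstrate the overlay arithmetic.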

  10. Co-Seismic Effect of the 2011 Japan Earthquake on the Crustal Movement Observation Network of China

    Directory of Open Access Journals (Sweden)

    Shaomin Yang

    2013-01-01

Great earthquakes introduce measurable co-seismic displacements over regions hundreds to thousands of kilometers wide, which, if not accounted for, may significantly bias the long-term surface velocity field constrained by GPS observations performed during a period encompassing the event. Here, we first present an estimation of the far-field co-seismic offsets associated with the 2011 Mw 9.0 Japan earthquake using GPS measurements from the Crustal Movement Observation Network of China (CMONOC) in North China. The uncertainties of the co-seismic offsets, at both cGPS stations and campaign sites, are better than 5-6 mm on average. We compare three methods for constraining the co-seismic offsets at the campaign sites in northeastern China: (1) interpolating cGPS co-seismic offsets, (2) estimating them from sparsely sampled time series, and (3) predicting them with a well-constrained slip model. We show that interpolation of the cGPS co-seismic offsets onto the campaign sites yields the best co-seismic offset solution for these sites. The source model gives a consistent prediction based on finite dislocation in a layered spherical Earth, which agrees with the best prediction with discrepancies of 2-10 mm for 32 campaign sites. Thus, the co-seismic offset model prediction is still a reasonable choice if a cGPS network with good coverage is not available, as for a very active region like the Tibetan Plateau, where numerous campaign GPS sites were displaced by recent large earthquakes.
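
Method (1), interpolating cGPS co-seismic offsets onto campaign sites, can be illustrated with simple inverse-distance weighting. The station coordinates and offsets are invented, and IDW itself is an assumption for illustration, not necessarily the interpolation scheme used in the paper:

```python
# Inverse-distance-weighted interpolation of co-seismic offsets from
# continuous GPS (cGPS) stations onto a campaign site. All coordinates
# and offset values are invented for illustration.
import math

def idw(stations, target, power=2):
    """stations: list of ((x_km, y_km), offset_mm); returns offset at target."""
    num = den = 0.0
    for (x, y), offset in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return offset          # target coincides with a station
        w = 1.0 / d ** power
        num += w * offset
        den += w
    return num / den

cgps = [((0.0, 0.0), 10.0), ((2.0, 0.0), 6.0), ((0.0, 2.0), 8.0)]
print(round(idw(cgps, (1.0, 1.0)), 3))  # 8.0: equidistant from all stations
```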

  11. Estimation of failure probability of a real structure utilizing earthquake observation data

    International Nuclear Information System (INIS)

    Matsubara, Masayoshi

    1995-01-01

The objective of this report is to propose a procedure that estimates the structural response of a real structure from earthquake observation data using a neural network system. We apply the neural network system to estimate the ground motion of the site from the extensive earthquake data published by the Japan Meteorological Agency. The proposed procedure shows some promise for adequately estimating the correlation between earthquake and response. (author)

  12. Stochastic simulation of karst conduit networks

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José

    2012-01-01

Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst, and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system, and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of the conduits, the fractal dimension of the network, the direction rose, the Z-histogram, and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information), and (3) can easily be modified when
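
The topology stage builds on diffusion-limited aggregation (DLA). A minimal, unmodified DLA sketch on a small grid conveys the growth mechanism; the grid size, particle count, and 4-neighbour sticking rule are illustrative choices, not the authors' modified algorithm:

```python
# Minimal diffusion-limited aggregation (DLA): random walkers stick to a
# growing cluster, producing a branched, conduit-like structure.
import random

def dla(n_particles=30, size=21, seed=1):
    random.seed(seed)
    c = size // 2
    cluster = {(c, c)}                                  # seed node
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        x, y = random.randint(0, size - 1), random.randint(0, size - 1)
        while True:
            if any((x + dx, y + dy) in cluster for dx, dy in moves):
                cluster.add((x, y))                     # stick and stop
                break
            dx, dy = random.choice(moves)               # random-walk step
            x = min(max(x + dx, 0), size - 1)           # stay on the grid
            y = min(max(y + dy, 0), size - 1)
    return cluster

print(len(dla()))  # cluster size: seed plus up to 30 stuck walkers
```

A modified DLA, as in the paper, would bias growth and branching to honor conditioning data; statistics such as the fractal dimension are then measured on the resulting network.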

  13. Seismicity in the block mountains between Halle and Leipzig, Central Germany: centroid moment tensors, ground motion simulation, and felt intensities of two M ≈ 3 earthquakes in 2015 and 2017

    Science.gov (United States)

    Dahm, Torsten; Heimann, Sebastian; Funke, Sigward; Wendt, Siegfried; Rappsilber, Ivo; Bindi, Dino; Plenefisch, Thomas; Cotton, Fabrice

    2018-05-01

On April 29, 2017 at 0:56 UTC (2:56 local time), an Mw = 2.8 earthquake struck the metropolitan area between Leipzig and Halle, Germany, near the small town of Markranstädt. The earthquake was felt within 50 km of the epicenter and reached a local intensity of I0 = IV. Already in 2015, and only 15 km northwest of this epicenter, an Mw = 3.2 earthquake had struck the area with a similarly large felt radius and I0 = IV. More than 1.1 million people live in the region, and the unusual occurrence of the two earthquakes drew public attention, because the tectonic activity is unclear and induced earthquakes have occurred in neighboring regions. Historical earthquakes south of Leipzig had estimated magnitudes up to Mw ≈ 5 and coincide with NW-SE striking crustal basement faults. We use different seismological methods to analyze the two recent earthquakes and discuss them in the context of the known tectonic structures and historical seismicity. Novel stochastic full waveform simulation and inversion approaches are adapted for application to weak, local earthquakes, to analyze mechanisms and ground motions and their relation to observed intensities. We find NW-SE striking normal faulting mechanisms for both earthquakes and centroid depths of 26 and 29 km. The earthquakes are located where faults with large vertical offsets of several hundred meters and Hercynian strike have developed since the Mesozoic. We use a stochastic full waveform simulation to explain the local peak ground velocities and calibrate the method to simulate intensities. Since the area is densely populated and has sensitive infrastructure, we simulate scenarios assuming that a 12-km-long fault segment between the two recent earthquakes ruptures, and study the impact of rupture parameters on ground motions and expected damage.

  14. BioNSi: A Discrete Biological Network Simulator Tool.

    Science.gov (United States)

    Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny

    2016-08-05

Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct it, set its parameters, and observe network behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffice. The tool is aimed at experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi, where a complete user guide and a step-by-step manual can also be found.
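
The discrete-time dynamics that such a simulator iterates can be sketched generically. The synchronous rule below (each expressed activator adds one level to its target, each expressed repressor subtracts one, values clamped to 0-9) is an illustrative stand-in, not BioNSi's actual update semantics:

```python
# Synchronous discrete-time update of a toy regulatory network.
# The update rule is illustrative, not BioNSi's actual semantics.

def step(levels, edges, max_level=9):
    """edges: (source, target, sign) with +1 activation, -1 repression."""
    nxt = dict(levels)
    for src, dst, sign in edges:
        if levels[src] > 0:                 # regulator is expressed
            nxt[dst] += sign
    return {n: min(max(v, 0), max_level) for n, v in nxt.items()}

# Toy 3-node cycle: A activates B, B activates C, C represses A.
edges = [("A", "B", +1), ("B", "C", +1), ("C", "A", -1)]
state = {"A": 5, "B": 0, "C": 0}
for _ in range(4):
    state = step(state, edges)
print(state)  # {'A': 3, 'B': 4, 'C': 3}
```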

  15. Strong ground motion prediction using virtual earthquakes.

    Science.gov (United States)

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach thus provides a new means of predicting long-period strong ground motion.

  16. The ordered network structure of M ≥ 6 strong earthquakes and its prediction in the Jiangsu-South Yellow Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Men, Ke-Pei [Nanjing Univ. of Information Science and Technology (China). College of Mathematics and Statistics; Cui, Lei [California Univ., Santa Barbara, CA (United States). Applied Probability and Statistics Dept.

    2013-05-15

The Jiangsu-South Yellow Sea region is one of the key seismic monitoring and defence areas in the eastern part of China. Since 1846, M ≥ 6 strong earthquakes have shown an obvious commensurability and orderliness in this region. The main orderly intervals are 74-75 a, 57-58 a, 11-12 a, and 5-6 a, of which 74-75 a and 57-58 a play an outstanding predictive role. According to the information prediction theory of Wen-Bo Weng, we constructed the ordered network structure of M ≥ 6 strong earthquakes for the South Yellow Sea and for the whole region. On this basis, we analyzed and discussed the variation of seismicity in detail and made a trend prediction of future M ≥ 6 strong earthquakes. The results show that a new quiet episode began in 1998 and may continue until about 2042; the first M ≥ 6 strong earthquake of the next active episode will probably occur around 2053, with the location likely in the sea area of the South Yellow Sea; the second and third events, or a strong earthquake swarm, will probably occur around 2058 and 2070, respectively. (orig.)

  17. Simulation and Evaluation of Ethernet Passive Optical Network

    Directory of Open Access Journals (Sweden)

    Salah A. Jaro Alabady

    2013-05-01

This paper studies the simulation and evaluation of an Ethernet Passive Optical Network (EPON) system (IEEE 802.3ah) using the OPTISM 3.6 simulation program. The simulation program is used to build a typical Ethernet passive optical network and to evaluate the network performance when the (1580, 1625) nm wavelengths are used instead of the (1310, 1490) nm wavelengths used by the Optical Line Terminal (OLT) and Optical Network Units (ONUs) in the EPON system architecture, at different bit rates and different fiber-optic lengths. The results showed enhanced network performance: an increase in the number of nodes (subscribers) connected to the network, an increase in the transmission distance, a reduction in the received power, and a reduction in the Bit Error Rate (BER).

  18. GIS Based System for Post-Earthquake Crisis Management Using Cellular Network

    Science.gov (United States)

    Raeesi, M.; Sadeghi-Niaraki, A.

    2013-09-01

Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after a disaster such as an earthquake decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed parts of an area, several teams are sent to find the locations of the destroyed zones. The search and rescue phase usually continues for many days, and reducing the time needed to reach survivors is very important. A Geographical Information System (GIS) can be used to decrease response time and improve management in critical situations, and position estimation within a short period of time is essential. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and timing advance (TA) method, the signal strength method, the angle of arrival method, the time of arrival method, and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and identifying the critical points, the points are sent to a central site that manages the quick-response rescue procedure. This solution establishes a quick way to manage the post-earthquake crisis.
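
Among the listed methods, time of arrival is the simplest to sketch: find the point whose predicted signal travel times to known towers best match the measurements. The tower layout, units, and brute-force grid search below are illustrative simplifications of what a real positioning server would do:

```python
# Toy time-of-arrival (ToA) positioning by least-squares grid search.
# Tower positions, units, and the search itself are illustrative.
import math

C = 0.3  # propagation speed in km per microsecond (speed of light)

def locate(towers, arrivals, grid=50, span=10.0):
    """Return the grid point minimizing squared ToA residuals."""
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            x, y = span * i / grid, span * j / grid
            err = sum((math.hypot(x - tx, y - ty) / C - t) ** 2
                      for (tx, ty), t in zip(towers, arrivals))
            if err < best_err:
                best, best_err = (x, y), err
    return best

towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 6.0)
arrivals = [math.hypot(true_pos[0] - tx, true_pos[1] - ty) / C
            for tx, ty in towers]
print(locate(towers, arrivals))  # recovers (4.0, 6.0)
```

Time difference of arrival works similarly but subtracts a reference tower's arrival time, removing the need for a known transmit time.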

  19. GIS BASED SYSTEM FOR POST-EARTHQUAKE CRISIS MANAGEMENT USING CELLULAR NETWORK

    Directory of Open Access Journals (Sweden)

    M. Raeesi

    2013-09-01

Earthquakes are among the most destructive natural disasters. Earthquakes happen mainly near the edges of tectonic plates, but they may happen just about anywhere, and they cannot be predicted. Quick response after a disaster such as an earthquake decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed parts of an area, several teams are sent to find the locations of the destroyed zones. The search and rescue phase usually continues for many days, and reducing the time needed to reach survivors is very important. A Geographical Information System (GIS) can be used to decrease response time and improve management in critical situations, and position estimation within a short period of time is essential. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and timing advance (TA) method, the signal strength method, the angle of arrival method, the time of arrival method, and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and identifying the critical points, the points are sent to a central site that manages the quick-response rescue procedure. This solution establishes a quick way to manage the post-earthquake crisis.

  20. OMG Earthquake! Can Twitter improve earthquake response?

    Science.gov (United States)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
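
The detection signal described, a tweet rate jumping from under 1 per hour to roughly 150 per minute, can be sketched as a simple rate-threshold monitor. The window length, threshold factor, and counts below are illustrative, not the USGS implementation:

```python
# Flag minutes whose "earthquake" tweet count far exceeds the background
# rate. Threshold and window length are illustrative.
from collections import deque

def detect(counts_per_minute, factor=10, background=1.0):
    """Return indices of minutes exceeding factor * background; the
    background is a running mean over recent quiet minutes (floor 1.0)."""
    events, quiet = [], deque(maxlen=60)
    for i, count in enumerate(counts_per_minute):
        if count > factor * background:
            events.append(i)
        else:
            quiet.append(count)
            background = max(sum(quiet) / len(quiet), 1.0)
    return events

# Quiet background, then a Morgan Hill-style spike.
stream = [0, 1, 0, 2, 1, 150, 90, 40, 1, 0]
print(detect(stream))  # [5, 6, 7]
```

A real detector would also geo-locate the flagged tweets to map the felt area, as the abstract describes.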

  1. Simulation of the earthquake-induced collapse of a school building in Turkey in the 2011 Van Earthquake

    NARCIS (Netherlands)

    Bal, Ihsan Engin; Smyrou, Eleni

    2016-01-01

    Collapses of school or dormitory buildings experienced in recent earthquakes raise the issue of safety as a major challenge for decision makers. A school building is ‘just another structure’ technically speaking, however, the consequences of a collapse in an earthquake could lead to social reactions

  2. Experimental study of structural response to earthquakes

    International Nuclear Information System (INIS)

    Clough, R.W.; Bertero, V.V.; Bouwkamp, J.G.; Popov, E.P.

    1975-01-01

The objectives, methods, and some of the principal results obtained from experimental studies of the behavior of structures subjected to earthquakes are described. Although such investigations are being conducted in many laboratories throughout the world, the information presented deals specifically with projects being carried out at the Earthquake Engineering Research Center (EERC) of the University of California, Berkeley. A primary purpose of these investigations is to obtain detailed information on the inelastic response mechanisms in typical structural systems so that the experimentally observed performance can be compared with computer generated analytical predictions. Only by such comparisons can the mathematical models used in dynamic nonlinear analyses be verified and improved. Two experimental procedures for investigating earthquake structural response are discussed: the earthquake simulator facility, which subjects the base of the test structure to acceleration histories similar to those recorded in actual earthquakes, and systems of hydraulic rams, which impose specified displacement histories on the test components, equivalent to motions developed in structures subjected to actual earthquakes. The general concept and performance of the 20-ft-square EERC earthquake simulator is described, and the testing of a two-story concrete frame building is outlined. Correlation of the experimental results with analytical predictions demonstrates that satisfactory agreement can be obtained only if the mathematical model incorporates a stiffness deterioration mechanism which simulates the cracking and other damage suffered by the structure.

  3. Programmable multi-node quantum network design and simulation

    Science.gov (United States)

    Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.

    2016-05-01

    Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.

  4. LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR

    Science.gov (United States)

    Gibson, J.

    1994-01-01

The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and the Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6. It consists of two

  5. Splitting Strategy for Simulating Genetic Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Xiong You

    2014-01-01

The splitting approach is developed for the numerical simulation of genetic regulatory networks with a stable steady-state structure. The numerical results of the simulation of a one-gene network, a two-gene network, and a p53-mdm2 network show that the new splitting methods constructed in this paper are remarkably more effective and more suitable for long-term computation with large steps than the traditional general-purpose Runge-Kutta methods. The new methods have no restriction on the choice of stepsize due to their infinitely large stability regions.
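
The splitting idea can be illustrated on the simplest one-gene model, dy/dt = b - a*y, by composing the exact flows of the production and degradation parts. This is plain Lie splitting, a generic textbook scheme rather than the specific methods constructed in the paper:

```python
# Lie (sequential) operator splitting for dy/dt = b - a*y:
# advance the production part exactly, then the degradation part exactly.
import math

def lie_split(y, a, b, h, steps):
    for _ in range(steps):
        y = y + b * h             # exact flow of dy/dt = b   (production)
        y = y * math.exp(-a * h)  # exact flow of dy/dt = -a*y (degradation)
    return y

a, b, y0, T = 1.0, 2.0, 0.0, 5.0
exact = b / a + (y0 - b / a) * math.exp(-a * T)   # closed-form solution

# The scheme is first-order: shrinking the step shrinks the error.
err_coarse = abs(lie_split(y0, a, b, 0.1, 50) - exact)
err_fine = abs(lie_split(y0, a, b, 0.01, 500) - exact)
print(err_fine < err_coarse)  # True
```

Because each sub-flow is solved exactly, the step size is limited only by accuracy rather than stability, which mirrors the unconditional stability the abstract reports for the new methods.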

  6. EARTHQUAKE TRIGGERING AND SPATIAL-TEMPORAL RELATIONS IN THE VICINITY OF YUCCA MOUNTAIN, NEVADA

    Energy Technology Data Exchange (ETDEWEB)

    na

    2001-02-08

It is well accepted that the 1992 M 5.6 Little Skull Mountain earthquake, the largest historical event to have occurred within 25 km of Yucca Mountain, Nevada, was triggered by the M 7.2 Landers earthquake that occurred the day before. On the premise that earthquakes can be triggered by applied stresses, we have examined the earthquake catalog from the Southern Great Basin Digital Seismic Network (SGBDSN) for other evidence of triggering by external and internal stresses. This catalog now comprises over 12,000 events, encompassing five years of consistent monitoring, and has a low threshold of completeness, varying from M 0 in the center of the network to M 1 at the fringes. We examined the SGBDSN catalog response to external stresses such as large signals propagating from teleseismic and regional earthquakes, microseismic storms, and earth tides. Results are generally negative. We also examined the interplay of earthquakes within the SGBDSN. The number of "foreshocks", as judged by most criteria, is significantly higher than the background seismicity rate. In order to establish this, we first removed aftershocks from the catalog with widely used methodology. The existence of SGBDSN foreshocks is supported by comparing actual statistics to those of a simulated catalog with uniformly distributed locations and Poisson-distributed times of occurrence. The probabilities of a given SGBDSN earthquake being followed by one having a higher magnitude within a short time frame and within a close distance are at least as high as those found with regional catalogs. These catalogs have completeness thresholds two to three units higher in magnitude than the SGBDSN catalog used here. The largest earthquake in the SGBDSN catalog, the M 4.7 event in Frenchman Flat on 01/27/1999, was preceded by a definite foreshock sequence. The largest event within 75 km of Yucca Mountain in historical time, the M 5.7 Scotty's Junction event of 08/01/1999, was also
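
The null catalog used for comparison, uniformly scattered epicenters with Poisson-distributed occurrence times, can be sketched as follows; the event count, rate, magnitude range, and foreshock window are all illustrative:

```python
# Synthetic earthquake catalog with Poisson occurrence times and uniform
# epicenters, plus a count of "foreshock" pairs (an event followed soon
# and nearby by a larger one). All parameters are illustrative.
import math
import random

def synthetic_catalog(n, rate_per_day, seed=0):
    random.seed(seed)
    t, cat = 0.0, []
    for _ in range(n):
        t += random.expovariate(rate_per_day)                  # Poisson process
        x, y = random.uniform(0, 100), random.uniform(0, 100)  # km
        mag = random.uniform(0.0, 4.0)                         # placeholder mags
        cat.append((t, x, y, mag))
    return cat

def count_foreshock_pairs(cat, dt_days=3.0, dr_km=10.0):
    pairs = 0
    for i, (t1, x1, y1, m1) in enumerate(cat):
        for t2, x2, y2, m2 in cat[i + 1:]:
            if t2 - t1 > dt_days:
                break                                          # catalog is sorted
            if m2 > m1 and math.hypot(x2 - x1, y2 - y1) <= dr_km:
                pairs += 1
    return pairs

cat = synthetic_catalog(2000, rate_per_day=5.0)
print(count_foreshock_pairs(cat))
```

Comparing such counts from many random catalogs against the real catalog's foreshock statistics is the essence of the significance argument made above.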

  7. EARTHQUAKE TRIGGERING AND SPATIAL-TEMPORAL RELATIONS IN THE VICINITY OF YUCCA MOUNTAIN, NEVADA

    International Nuclear Information System (INIS)

    2001-01-01

It is well accepted that the 1992 M 5.6 Little Skull Mountain earthquake, the largest historical event to have occurred within 25 km of Yucca Mountain, Nevada, was triggered by the M 7.2 Landers earthquake that occurred the day before. On the premise that earthquakes can be triggered by applied stresses, we have examined the earthquake catalog from the Southern Great Basin Digital Seismic Network (SGBDSN) for other evidence of triggering by external and internal stresses. This catalog now comprises over 12,000 events, encompassing five years of consistent monitoring, and has a low threshold of completeness, varying from M 0 in the center of the network to M 1 at the fringes. We examined the SGBDSN catalog response to external stresses such as large signals propagating from teleseismic and regional earthquakes, microseismic storms, and earth tides. Results are generally negative. We also examined the interplay of earthquakes within the SGBDSN. The number of "foreshocks", as judged by most criteria, is significantly higher than the background seismicity rate. In order to establish this, we first removed aftershocks from the catalog with widely used methodology. The existence of SGBDSN foreshocks is supported by comparing actual statistics to those of a simulated catalog with uniformly distributed locations and Poisson-distributed times of occurrence. The probabilities of a given SGBDSN earthquake being followed by one having a higher magnitude within a short time frame and within a close distance are at least as high as those found with regional catalogs. These catalogs have completeness thresholds two to three units higher in magnitude than the SGBDSN catalog used here. The largest earthquake in the SGBDSN catalog, the M 4.7 event in Frenchman Flat on 01/27/1999, was preceded by a definite foreshock sequence. The largest event within 75 km of Yucca Mountain in historical time, the M 5.7 Scotty's Junction event of 08/01/1999, was also preceded by foreshocks. The

  8. Earthquake locations determined by the Southern Alaska seismograph network for October 1971 through May 1989

    Science.gov (United States)

    Fogleman, Kent A.; Lahr, John C.; Stephens, Christopher D.; Page, Robert A.

    1993-01-01

    This report describes the instrumentation and evolution of the U.S. Geological Survey's regional seismograph network in southern Alaska, provides phase and hypocenter data for seismic events from October 1971 through May 1989, reviews the location methods used, and discusses the completeness of the catalog and the accuracy of the computed hypocenters. Included are arrival time data for explosions detonated under the Trans-Alaska Crustal Transect (TACT) in 1984 and 1985. The U.S. Geological Survey (USGS) operated a regional network of seismographs in southern Alaska from 1971 to the mid-1990s. The principal purpose of this network was to record seismic data to be used to precisely locate earthquakes in the seismic zones of southern Alaska, delineate seismically active faults, assess seismic risks, document potential premonitory earthquake phenomena, investigate current tectonic deformation, and study the structure and physical properties of the crust and upper mantle. A task fundamental to all of these goals was the routine cataloging of parameters for earthquakes located within and adjacent to the seismograph network. The initial network of 10 stations, 7 around Cook Inlet and 3 near Valdez, was installed in 1971. In subsequent summers additions or modifications to the network were made. By the fall of 1973, 26 stations extended from western Cook Inlet to eastern Prince William Sound, and 4 stations were located to the east between Cordova and Yakutat. A year later 20 additional stations were installed. Thirteen of these were placed along the eastern Gulf of Alaska with support from the National Oceanic and Atmospheric Administration (NOAA) under the Outer Continental Shelf Environmental Assessment Program to investigate the seismicity of the outer continental shelf, a region of interest for oil exploration. Since then the region covered by the network has remained relatively fixed while efforts were made to make the stations more reliable through improved electronic

  9. Calculation of displacements on fractures intersecting canisters induced by earthquakes: Aberg, Beberg and Ceberg examples

    Energy Technology Data Exchange (ETDEWEB)

    LaPointe, P.R.; Cladouhos, T. [Golder Associates Inc. (Sweden)]; Follin, S. [Golder Grundteknik KB (Sweden)]

    1999-01-01

    This study shows how the method developed by La Pointe and others can be applied to assess the safety of canisters with respect to secondary slippage of fractures intersecting those canisters in the event of an earthquake. The method is applied to the three generic sites Aberg, Beberg and Ceberg. Estimation of secondary slippage or displacement is a four-stage process. The first stage is the analysis of lineament trace data in order to quantify the scaling properties of the fractures. This is necessary to ensure that all scales of fracturing are properly represented in the numerical simulations. The second stage consists of creating stochastic discrete fracture network (DFN) models for jointing and small faulting at each of the generic sites. The third stage is to combine the stochastic DFN model with mapped lineament data at larger scales into data sets for the displacement calculations. The final stage is to carry out the displacement calculations for all of the earthquakes that might occur during the next 100,000 years. Large earthquakes are located along any lineaments in the vicinity of the site that are of sufficient size to accommodate an earthquake of the specified magnitude. These lineaments are assumed to represent vertical faults. Smaller earthquakes are located at random. The magnitude of the earthquake that any fault could generate is based upon the mapped surface trace length of the lineaments, and is calculated from regression relations. Recurrence rates for a given magnitude of earthquake are based upon published studies for Sweden. A major assumption in this study is that future earthquakes will be similar in magnitude, location and orientation to earthquakes in the geological and historical records of Sweden. Another important assumption is that displacement calculations based upon linear elasticity and linear elastic fracture mechanics provide a conservative (over-)estimate of possible displacements. A third assumption is that the world
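    The magnitude-from-trace-length step described above can be sketched in a few lines. The report's own regression relations are not reproduced in this abstract, so the sketch below assumes the widely used Wells and Coppersmith (1994) all-slip-type relation for surface rupture length; it is illustrative only.

```python
import math

def magnitude_from_trace_length(length_km: float) -> float:
    """Estimate moment magnitude from a mapped surface trace length.

    Uses the Wells & Coppersmith (1994) all-slip-type regression
    Mw = 5.08 + 1.16 * log10(L). The report's own regression is not
    given in the abstract, so this relation is an assumption.
    """
    if length_km <= 0:
        raise ValueError("trace length must be positive")
    return 5.08 + 1.16 * math.log10(length_km)

# A 20 km lineament would accommodate roughly a Mw 6.6 event.
print(round(magnitude_from_trace_length(20.0), 2))
```

    Longer lineaments map to larger maximum magnitudes, which is why only lineaments of sufficient size are assigned the large events in the study.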

  10. Simulations of biopolymer networks under shear

    NARCIS (Netherlands)

    Huisman, Elisabeth Margaretha

    2011-01-01

    In this thesis we present a new method to simulate realistic three-dimensional networks of biopolymers under shear. These biopolymer networks are important for the structural functions of cells and tissues. We use the method to analyze these networks under shear, and consider the elastic modulus,

  11. Exploring geological and socio-demographic factors associated with under-five mortality in the Wenchuan earthquake using neural network model.

    Science.gov (United States)

    Hu, Yi; Wang, Jinfeng; Li, Xiaohong; Ren, Dan; Driskell, Luke; Zhu, Jun

    2012-01-01

    On 12 May 2008, a devastating earthquake occurred in Sichuan Province, China, taking tens of thousands of lives and destroying the homes of millions of people. Among the large number of dead or missing were children, particularly children aged less than five years old, a fact which drew significant media attention. To obtain information to aid further studies and future preventative measures, a neural network model was proposed to explore geological and socio-demographic factors associated with earthquake-related child mortality. Sensitivity analysis showed that topographic slope (mean contribution 35.76%), geomorphology (mean 24.18%), earthquake intensity (mean 13.68%), and average income (mean 11%) contributed most to child mortality. These findings could provide clues to researchers for further studies and to policy makers in deciding how and where preventive measures and corresponding policies should be implemented in the reconstruction of communities.

  12. Earthquake responses of a beam supported by a mechanical snubber

    International Nuclear Information System (INIS)

    Ohmata, Kenichiro; Ishizu, Seiji.

    1989-01-01

    The mechanical snubber is an earthquake-proof device for piping systems under particular circumstances such as high temperature and radioactivity. It has nonlinearities in both load and frequency response. In this report, the resisting-force characteristics of the snubber and the earthquake responses of piping (a simply supported beam) supported by the snubber are simulated using Continuous System Simulation Language (CSSL). Digital simulations are carried out for various physical properties of the snubber. The restraint effect and the maximum resisting force of the snubber during earthquakes are discussed and compared with the case of an oil damper. The earthquake waves used here are El Centro N-S and Akita Harbour N-S (Nihonkai-Chubu earthquake). (author)

  13. ASSESSING URBAN STREETS NETWORK VULNERABILITY AGAINST EARTHQUAKE USING GIS – CASE STUDY: 6TH ZONE OF TEHRAN

    Directory of Open Access Journals (Sweden)

    A. Rastegar

    2017-09-01

    Full Text Available Great earthquakes cause huge damage to human life. Street network vulnerability causes rescue operations to encounter serious difficulties, especially in the first 72 hours after the incident. Today, the physical expansion and high density of great cities, with narrow access roads, large distances from medical care centers, and locations in areas of high seismic risk, can lead to a perilous and unpredictable situation in the event of an earthquake. Zone 6 of Tehran, with a population of 229,980 (3.6% of the city's population) and an area of 20 km2 (3.2% of the city's area), is one of the main municipal zones of Tehran (Iran Center of Statistics, 2006). Major land uses, such as ministries, embassies, universities, general hospitals and medical centers, and big financial firms, demonstrate the high importance of this region on the local and national scale. In this paper, by employing indexes such as access to medical centers, street inclusion, building and population density, land use, PGA, and building quality, the vulnerability degree of the street network of Zone 6 against earthquake is calculated by overlaying maps and data in combination with the IHWP method and GIS. This article concludes that buildings alongside streets with high population and building density, low building quality, long distance from rescue centers, and a high level of inclusion show a high rate of vulnerability compared with other buildings. Also, moving from the north to the south of the zone, the vulnerability increases. Likewise, highways and streets with substantial width and low building and population density hold low vulnerability values.
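    The overlay step that combines the indexed factor maps into a single vulnerability degree reduces, for each street segment, to a weighted sum of normalized factor scores. The sketch below is a minimal illustration of that step only; the factor scores and the weights (which in the paper come from the IHWP method and GIS overlays) are invented for the example.

```python
# Hypothetical normalized scores (0 = best, 1 = worst) for one street
# segment; the factor names follow the abstract, the values do not.
scores = {
    "access_to_medical_centers": 0.8,   # far from rescue centers
    "street_inclusion": 0.6,
    "building_density": 0.7,
    "population_density": 0.9,
    "building_quality": 0.5,            # low quality -> high score
    "pga": 0.4,
}

# Illustrative weights (in the paper these come from IHWP pairwise
# comparisons); they must sum to 1.
weights = {
    "access_to_medical_centers": 0.25,
    "street_inclusion": 0.15,
    "building_density": 0.15,
    "population_density": 0.20,
    "building_quality": 0.15,
    "pga": 0.10,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9

# Weighted overlay: one vulnerability value per segment, here 0.69.
vulnerability = sum(scores[f] * weights[f] for f in scores)
print(round(vulnerability, 3))
```

    Repeating this over every segment of the raster/vector overlay yields the vulnerability map from which the north-to-south gradient reported above is read.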

  14. Earthquakes and Volcanic Processes at San Miguel Volcano, El Salvador, Determined from a Small, Temporary Seismic Network

    Science.gov (United States)

    Hernandez, S.; Schiek, C. G.; Zeiler, C. P.; Velasco, A. A.; Hurtado, J. M.

    2008-12-01

    The San Miguel volcano lies within the Central American volcanic chain in eastern El Salvador. The volcano has experienced at least 29 eruptions with a Volcanic Explosivity Index (VEI) of 2. Since 1970, however, eruptions have decreased in intensity to an average of VEI 1, with the most recent eruption occurring in 2002. Eruptions at San Miguel volcano consist mostly of central-vent and phreatic eruptions. A critical challenge related to the explosive nature of this volcano is to understand the relationships between precursory surface deformation, earthquake activity, and volcanic activity. In this project, we seek to determine subsurface structures within and near the volcano, relate the local deformation to these structures, and better understand the hazard that the volcano presents in the region. To accomplish these goals, we deployed a six-station broadband seismic network around San Miguel volcano in collaboration with researchers from Servicio Nacional de Estudios Territoriales (SNET). This network operated continuously from 23 March 2007 to 15 January 2008 and had a high data recovery rate. The data were processed to determine earthquake locations, magnitudes, and, for some of the larger events, focal mechanisms. We obtained high-precision locations using a double-difference approach and identified at least 25 events near the volcano. Ongoing analysis will seek to identify earthquake types (e.g., long period, tectonic, and hybrid events) that occurred in the vicinity of San Miguel volcano. These results will be combined with radar interferometric measurements of surface deformation in order to determine the relationship between surface and subsurface processes at the volcano.

  15. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  16. Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design

    Science.gov (United States)

    Ang, Chee Siang; Zaphiris, Panayiotis

    We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role-Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc.) could potentially influence the characteristics of the social networks.
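    The modelling loop described above (derive interaction rules, implement them, grow a who-interacts-with-whom network) can be sketched with a toy rule set. The agent count, step count, and repeat-partner probability below are invented; the actual model's rules were derived empirically from the observed guild community.

```python
import random

def simulate_guild(n_agents=30, n_steps=2000, p_repeat=0.7, seed=42):
    """Toy ABM: at each step a random agent interacts either with a
    previous partner (probability p_repeat) or with a random stranger.
    Returns the who-interacts-with-whom adjacency (sets of partners).
    All parameter values are illustrative, not the paper's."""
    rng = random.Random(seed)
    partners = {i: set() for i in range(n_agents)}
    for _ in range(n_steps):
        a = rng.randrange(n_agents)
        if partners[a] and rng.random() < p_repeat:
            b = rng.choice(sorted(partners[a]))     # repeat interaction
        else:
            b = rng.randrange(n_agents)             # meet a stranger
            while b == a:
                b = rng.randrange(n_agents)
        partners[a].add(b)
        partners[b].add(a)
    return partners

net = simulate_guild()
degrees = [len(p) for p in net.values()]
print(min(degrees), max(degrees))
```

    Varying `p_repeat` or `n_agents` plays the role of the community parameters (level of activity, number of neighbors) whose influence the simulation is used to examine.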

  17. Earthquake location in island arcs

    Science.gov (United States)

    Engdahl, E.R.; Dewey, J.W.; Fujita, K.

    1982-01-01

    A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high

  18. Rainfall and earthquake-induced landslide susceptibility assessment using GIS and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Y. Li

    2012-08-01

    Full Text Available A GIS-based method for the assessment of landslide susceptibility in a selected area of Qingchuan County in China is proposed by using the back-propagation Artificial Neural Network model (ANN). The landslide inventory was derived from field investigation and aerial photo interpretation: 473 landslides that occurred before the Wenchuan earthquake (considered rainfall-induced landslides, RIL, in this study) and 885 earthquake-induced landslides (EIL) were recorded in the landslide inventory map. To understand the different impacts of rainfall and earthquake on landslide occurrence, we first compared the variations between landslide spatial distribution and conditioning factors. Then, we compared the weight variation of each conditioning factor derived by adjusting the ANN structure and the factor combinations, respectively. Last, the weight of each factor derived from the best prediction model was applied to the entire study area to produce landslide susceptibility maps.

    Results show that slope gradient has the highest weight for landslide susceptibility mapping for both RIL and EIL. The RIL model built with four different factors (slope gradient, elevation, slope height and distance to the stream) shows the best success rate of 93%; the EIL model built with five different factors (slope gradient, elevation, slope height, distance to the stream and distance to the fault) has the best success rate of 98%. Furthermore, the EIL data were used to verify the RIL model, giving a success rate of 92%; the RIL data were used to verify the EIL model, giving a success rate of 53%.
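    The core of the approach above is a supervised network that learns one weight per conditioning factor from labelled landslide/non-landslide cells. As a minimal, hedged stand-in for the paper's back-propagation ANN, the sketch below trains a single logistic unit by gradient descent on synthetic two-factor data; the factors, data, and hyperparameters are all invented for illustration.

```python
import math
import random

def train_logistic(samples, labels, lr=0.5, epochs=500, seed=0):
    """Minimal gradient-descent 'network' (one logistic unit) standing
    in for the paper's back-propagation ANN; it learns one weight per
    conditioning factor plus a bias."""
    rng = random.Random(seed)
    n = len(samples[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Synthetic training cells: a slide (label 1) whenever the normalized
# slope-gradient score is high. Factors: [slope_gradient, elevation].
X = [[0.9, 0.2], [0.8, 0.5], [0.7, 0.3],
     [0.2, 0.4], [0.1, 0.6], [0.3, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
correct = sum(
    int((1.0 / (1.0 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x))))) > 0.5)
        == (yi == 1))
    for x, yi in zip(X, y)
)
print(correct, "of", len(y), "training cells classified correctly")
```

    On this toy data the slope-gradient weight dominates, mirroring the paper's finding that slope gradient carries the highest weight; the "success rate" reported in the abstract is the analogous classification rate computed on the real inventory.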

  19. Towards Integrated Marmara Strong Motion Network

    Science.gov (United States)

    Durukal, E.; Erdik, M.; Safak, E.; Ansal, A.; Ozel, O.; Alcik, H.; Mert, A.; Kafadar, N.; Korkmaz, A.; Kurtulus, A.

    2009-04-01

    Array (72-channel dense accelerometric array to be installed in 2010) - Gemlik Array (a dense basin array of 8 stations, to be installed in 2010). The objectives of these systems and networks are: (1) to produce rapid earthquake intensity, damage and loss assessment information after an earthquake (in the case of IERREWS), (2) to monitor the condition of structural systems, (3) to develop real-time data processing, analysis, and damage detection and location tools after an extreme event (in the case of structural networks), (4) to assess spatial properties of strong ground motion and ground strain, and to characterise basin response (in the case of special arrays), and (5) to investigate site response and wave propagation (in the case of the vertical array). Ground motion data obtained from these strong motion networks have been and are being used for investigations of attenuation, spatial variation (coherence), simulation benchmarking, source modeling, site response, seismic microzonation, system identification, structural model verification and structural health control. In addition to the systems and networks outlined above there are two temporary networks: KIMNET, a dense urban noise and microtremor network consisting of 50 broadband stations expected to be operational in mid-2009, and SOSEWIN, a 20-station, self-organizing structural integrated array at Ataköy in Istanbul.

  20. The network construction of CSELF for earthquake monitoring and its preliminary observation

    Science.gov (United States)

    Tang, J.; Zhao, G.; Chen, X.; Bing, H.; Wang, L.; Zhan, Y.; Xiao, Q.; Dong, Z.

    2017-12-01

    Electromagnetic (EM) anomalies are among the most sensitive physical phenomena in short-term earthquake precursory monitoring, and EM monitoring is considered one of the most promising means of earthquake forecasting. However, existing ground-based EM observation is confronted with increasing cultural noise and lacks observations in the frequency range above 1 Hz. Controlled-source extremely low frequency (CSELF) EM is a promising new approach: it offers a high signal-to-noise ratio, a large coverage area, and substantial probing depth, which facilitate the identification and capture of anomalous signals, and it can also be used to study variations of the EM field and changes in the electrical structure of the crustal medium. The first CSELF EM network for earthquake precursory monitoring, comprising 30 observatories, has been constructed in China. The observatories are distributed in the area surrounding Beijing and in the southern part of the North-South Seismic Zone, and each station is equipped with a GMS-07 system made by Metronix. The observation mixes controlled-source and natural-source recording: whenever the controlled source is not transmitting, the natural-source EM signal is recorded. In general, signals at 3-5 frequencies in the 0.1-300 Hz band are transmitted every morning and evening at a fixed time (2 hours in length). At other times, the natural field is observed to extend the frequency band (0.001-1000 Hz) using three sampling rates: 4096 Hz for the high-frequency band, 256 Hz for the medium-frequency band, and 16 Hz for the low-frequency band. The low-frequency band is recorded continuously all day, while the high- and medium-frequency bands are recorded in slices, with data acquired cyclically every 10 minutes in segments of about 4-8 seconds and 64-128 seconds, respectively. All data are automatically processed by a server installed at the observatory; EDI files, including EM field spectra and MT responses, and time-series files are sent to the data center via the internet.
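    One of the quantities delivered in the EDI files above, the MT response (impedance), is commonly reduced to an apparent resistivity of the crustal medium. A minimal sketch using the standard magnetotelluric practical-units formula; the network's actual processing chain is not described in the abstract, so this is illustrative only.

```python
def apparent_resistivity(z_mvkm_per_nt: float, freq_hz: float) -> float:
    """MT apparent resistivity (ohm-m) from an impedance estimate in
    practical units (mV/km per nT): rho_a = 0.2 * |Z|^2 / f.
    This is the standard textbook reduction, not the network's own
    processing, which is not specified in the abstract."""
    return 0.2 * abs(z_mvkm_per_nt) ** 2 / freq_hz

# |Z| = 10 (mV/km)/nT at 1 Hz corresponds to 20 ohm-m.
print(apparent_resistivity(10.0, 1.0))
```

    Tracking such responses over time at each observatory is how changes in the electrical structure of the crust, mentioned above, would be monitored.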

  1. Quantifying capability of a local seismic network in terms of locations and focal mechanism solutions of weak earthquakes

    Czech Academy of Sciences Publication Activity Database

    Fojtíková, Lucia; Kristeková, M.; Málek, Jiří; Sokos, E.; Csicsay, K.; Zahradník, J.

    2016-01-01

    Roč. 20, č. 1 (2016), 93-106 ISSN 1383-4649 R&D Projects: GA ČR GAP210/12/2336 Institutional support: RVO:67985891 Keywords: Focal-mechanism uncertainty * Little Carpathians * Relative location uncertainty * Seismic network * Uncertainty mapping * Waveform inversion * Weak earthquakes Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.089, year: 2016

  2. Application of τc*Pd for identifying damaging earthquakes for earthquake early warning

    Science.gov (United States)

    Huang, P. L.; Lin, T. L.; Wu, Y. M.

    2014-12-01

    An Earthquake Early Warning System (EEWS) is an effective approach to mitigate earthquake damage. In this study, we used seismic records from the Kiban Kyoshin network (KiK-net), because it has dense station coverage and borehole strong-motion seismometers co-located with free-surface strong-motion seismometers. We used inland earthquakes with moment magnitude (Mw) from 5.0 to 7.3 between 1998 and 2012, choosing 135 events and 10,950 strong-motion accelerograms recorded by 696 accelerographs. Both the free-surface and the borehole data are used to calculate τc and Pd. The results show that τc*Pd correlates well with PGV and is a robust parameter for assessing the damage potential of an earthquake. We propose that the value of τc*Pd determined within a few seconds after the P-wave arrival could serve as a threshold for the on-site type of EEW.
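    The two parameters above can be computed from the first few seconds of the P wave. Below is a minimal sketch using Kanamori's definition of τc and taking Pd as the peak displacement in the same window; the window length and the synthetic harmonic test trace are illustrative, not the KiK-net processing.

```python
import math

def tau_c_and_pd(displacement, velocity, dt):
    """tau_c = 2*pi / sqrt(int v^2 dt / int u^2 dt) over the initial
    P-wave window (Kanamori's definition); Pd is the peak displacement
    in the same window. Inputs are plain lists sampled at interval dt."""
    def trapz_of_squares(x):
        sq = [s * s for s in x]
        return dt * (0.5 * (sq[0] + sq[-1]) + sum(sq[1:-1]))
    r = trapz_of_squares(velocity) / trapz_of_squares(displacement)
    tau_c = 2.0 * math.pi / math.sqrt(r)
    pd = max(abs(u) for u in displacement)
    return tau_c, pd

# Synthetic 2 Hz harmonic "P wave" (3 s window, 100 Hz sampling):
# tau_c should recover the 0.5 s period and Pd the unit amplitude.
dt = 0.01
t = [i * dt for i in range(301)]
u = [math.sin(2.0 * math.pi * 2.0 * ti) for ti in t]
v = [4.0 * math.pi * math.cos(2.0 * math.pi * 2.0 * ti) for ti in t]
tau_c, pd = tau_c_and_pd(u, v, dt)
print(round(tau_c, 3), round(pd, 3))
```

    The product tau_c * pd is then compared against an empirically calibrated threshold of the kind the study derives from the KiK-net data set.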

  3. Predicting Dynamic Response of Structures under Earthquake Loads Using Logical Analysis of Data

    Directory of Open Access Journals (Sweden)

    Ayman Abd-Elhamed

    2018-04-01

    Full Text Available In this paper, logical analysis of data (LAD) is used to predict the seismic response of building structures employing the captured dynamic responses. In order to prepare the data, computational simulations using a single-degree-of-freedom (SDOF) building model under different ground motion records are carried out. The selected excitation records are real and of different peak ground accelerations (PGA). The sensitivity of the seismic response, in terms of displacements of floors, to variation in earthquake characteristics, such as soil class, characteristic period, time step of records, peak ground displacement, and peak ground velocity, has also been considered. The dynamic equation of motion describing the building model and the applied earthquake load are presented and solved incrementally using the Runge-Kutta method. LAD then finds the characteristic patterns which lead to forecasting the seismic response of building structures. The accuracy of LAD is compared to that of an artificial neural network (ANN), since the latter is the best-known machine learning technique. Based on the conducted study, the proposed LAD model has been proven to be an efficient technique to learn, simulate, and blindly predict the dynamic response behaviour of building structures subjected to earthquake loads.
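    The incremental solution of the SDOF equation of motion mentioned above can be sketched with a classical fourth-order Runge-Kutta step. The oscillator below (1 Hz, 5% damping, driven at resonance by a sine record) is a generic stand-in for the paper's building model; all parameter values are invented for illustration.

```python
import math

def sdof_response(ag, dt, freq_hz=1.0, zeta=0.05):
    """Integrate u'' + 2*zeta*w*u' + w^2*u = -ag(t) with classical RK4
    and return the peak absolute displacement. A generic SDOF sketch;
    frequency and damping are illustrative, not the paper's model."""
    w = 2.0 * math.pi * freq_hz

    def f(state, a):
        u, v = state
        return (v, -a - 2.0 * zeta * w * v - w * w * u)

    u, v, peak = 0.0, 0.0, 0.0
    for i in range(len(ag) - 1):
        a0, a1 = ag[i], ag[i + 1]
        am = 0.5 * (a0 + a1)               # forcing at the midpoint
        k1 = f((u, v), a0)
        k2 = f((u + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1]), am)
        k3 = f((u + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1]), am)
        k4 = f((u + dt * k3[0], v + dt * k3[1]), a1)
        u += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        peak = max(peak, abs(u))
    return peak

# Resonant sine record: a 1 Hz oscillator driven at 1 Hz for 10 s
# builds up well beyond the static offset ag_max / w^2.
dt = 0.005
ag = [0.1 * math.sin(2.0 * math.pi * i * dt) for i in range(2001)]
peak = sdof_response(ag, dt)
print(peak > 0.1 / (2.0 * math.pi) ** 2)
```

    Repeating this over many real records with different PGA values is how the training displacements for LAD (and the ANN baseline) are generated.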

  4. Earthquake Clusters and Spatio-temporal Migration of earthquakes in Northeastern Tibetan Plateau: a Finite Element Modeling

    Science.gov (United States)

    Sun, Y.; Luo, G.

    2017-12-01

    Seismicity in a region is usually characterized by earthquake clusters and earthquake migration along its major fault zones. However, we do not fully understand why and how earthquake clusters and spatio-temporal migration of earthquakes occur. The northeastern Tibetan Plateau is a good example for us to investigate these problems. In this study, we construct and use a three-dimensional viscoelastoplastic finite-element model to simulate earthquake cycles and spatio-temporal migration of earthquakes along major fault zones in northeastern Tibetan Plateau. We calculate stress evolution and fault interactions, and explore effects of topographic loading and viscosity of middle-lower crust and upper mantle on model results. Model results show that earthquakes and fault interactions increase Coulomb stress on the neighboring faults or segments, accelerating the future earthquakes in this region. Thus, earthquakes occur sequentially in a short time, leading to regional earthquake clusters. Through long-term evolution, stresses on some seismogenic faults, which are far apart, may almost simultaneously reach the critical state of fault failure, probably also leading to regional earthquake clusters and earthquake migration. Based on our model synthetic seismic catalog and paleoseismic data, we analyze probability of earthquake migration between major faults in northeastern Tibetan Plateau. We find that following the 1920 M 8.5 Haiyuan earthquake and the 1927 M 8.0 Gulang earthquake, the next big event (M≥7) in northeastern Tibetan Plateau would be most likely to occur on the Haiyuan fault.
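    The fault-interaction mechanism invoked above (one earthquake raising the Coulomb stress on neighbouring faults and so accelerating their failure) rests on the standard Coulomb failure criterion. A minimal sketch; the effective friction coefficient and the stress values are illustrative and not taken from the finite-element model.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Coulomb failure stress change dCFS = d_tau + mu' * d_sigma_n,
    with the normal-stress change positive toward unclamping.
    mu' = 0.4 is a commonly assumed effective friction value."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# 0.05 MPa of added shear plus 0.02 MPa of unclamping moves the
# receiver fault 0.058 MPa closer to failure.
print(round(coulomb_stress_change(0.05, 0.02), 3))
```

    A positive dCFS on a receiver fault advances its next failure, which is the sense in which the model's stress transfer produces clusters and migration.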

  5. Earthquake early warning system using real-time signal processing

    Energy Technology Data Exchange (ETDEWEB)

    Leach, R.R. Jr.; Dowla, F.U.

    1996-02-01

    An earthquake warning system has been developed to provide a time-series profile from which vital parameters, such as the time until strong shaking begins, the intensity of the shaking, and the duration of the shaking, can be derived. Interaction of different types of ground motion and changes in the elastic properties of geological media throughout the propagation path result in a highly nonlinear function. We use neural networks to model these nonlinearities and develop learning techniques for the analysis of temporal precursors occurring in the emerging earthquake seismic signal. The warning system is designed to analyze the first arrival on the three components of an earthquake signal and instantaneously provide a profile of impending ground motion, in as little as 0.3 sec after first ground motion is felt at the sensors. For each new data sample, at a rate of 25 samples per second, the complete profile of the earthquake is updated. The profile consists of a magnitude-related estimate as well as an estimate of the envelope of the complete earthquake signal. The envelope provides estimates of damage parameters, such as time until peak ground acceleration (PGA) and duration. The neural-network-based system is trained using seismogram data from more than 400 earthquakes recorded in southern California. The system has been implemented in hardware using silicon accelerometers and a standard microprocessor. The proposed warning units can be used for site-specific applications, for distributed networks, or to enhance existing distributed networks. By producing accurate and informative warnings, the system has the potential to significantly reduce the hazards of catastrophic ground motion. System design and performance issues, including error measurement in a simple warning scenario, are discussed in detail.

  6. Tsunami hazard assessments with consideration of uncertain earthquake characteristics

    Science.gov (United States)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, it must adopt an uncertainty propagation method that determines tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology, which improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, the methodology generates consistent earthquake slip samples by means of a Karhunen-Loeve (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, generating earthquake samples with expected magnitude Mw 8. We first demonstrate that our stochastic approach generates earthquake samples consistent with the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations. We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for
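    The K-L sampling step described above can be sketched on a one-dimensional fault: eigendecompose a slip covariance and superpose the leading modes with standard-normal coefficients. The exponential covariance, grid, correlation length, and mean slip below are invented for illustration and are not the study's values (nor does this sketch include the translation process or SROM).

```python
import numpy as np

def sample_slips(n_cells=50, n_modes=10, corr_len=20.0, dx=2.0,
                 mean_slip=1.0, n_samples=500, seed=1):
    """Karhunen-Loeve sketch of stochastic slip on a 1-D fault:
    eigendecompose an exponential covariance and superpose the leading
    modes with standard-normal coefficients. Illustrative parameters."""
    rng = np.random.default_rng(seed)
    x = np.arange(n_cells) * dx
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]      # largest modes first
    z = rng.standard_normal((n_samples, n_modes))
    modes = np.sqrt(vals[:n_modes]) * vecs[:, :n_modes]
    return mean_slip + z @ modes.T

slips = sample_slips()
print(slips.shape)   # (500, 50)
print(round(float(slips.mean()), 2))
```

    Truncating at `n_modes` leading eigenmodes is what keeps each sample consistent with the target covariance while reducing the dimension of the uncertainty to be propagated.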

  7. Navigating Earthquake Physics with High-Resolution Array Back-Projection

    Science.gov (United States)

    Meng, Lingsen

    Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While the back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. The MUSIC is a high-resolution method taking advantage of the higher order signal statistics. The method has not been widely used in seismology yet because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signal by incorporating the Multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back projection. The improved MUSIC back projections allow the imaging of recent large earthquakes in finer details which give rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving orthogonal fault system and an usual branching direction. This result along with our complementary dynamic simulations probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. 
The
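    The core of the MUSIC step described above, forming a cross-spectral (covariance) matrix from array data, splitting it into signal and noise subspaces, and scanning a pseudospectrum for peaks, can be sketched for a simple narrowband uniform linear array. All parameters and the plane-wave geometry below are illustrative stand-ins for the seismic back-projection setting, not the thesis's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two plane waves impinging on an 8-sensor, half-wavelength-spaced
# uniform linear array; MUSIC recovers their directions from the noise
# subspace of the sample cross-spectral (covariance) matrix.
M, snapshots = 8, 200
true_deg = [-20.0, 35.0]

def steering(theta):
    # Per-sensor phase delays for half-wavelength spacing.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(np.deg2rad(t)) for t in true_deg])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = A @ S + noise                           # simulated array snapshots

R = X @ X.conj().T / snapshots              # sample covariance matrix
w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :-2]                              # noise subspace (2 sources assumed)

# MUSIC pseudospectrum: peaks where steering vectors are (nearly)
# orthogonal to the noise subspace.
grid = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])
loc = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
est = sorted(np.rad2deg(grid[loc[np.argsort(pseudo[loc])[-2:]]]))
print(est)                                  # close to [-20, 35]
```

The sharpness of the two pseudospectrum peaks, compared with a conventional beamformer's broad lobes, is what "high resolution" means in this context.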

  8. Research on Collection of Earthquake Disaster Information from the Crowd

    Science.gov (United States)

    Nian, Z.

    2017-12-01

    In China, the assessment of earthquake disaster information is mainly based on inversion of the seismic source mechanism and pre-calculated population data models, and the actual information about an earthquake disaster is usually collected through government departments; both the accuracy and the speed need to be improved. In a massive earthquake like the one in Mexico, the telecommunications infrastructure on the ground was damaged, the quake zone was difficult to observe by satellites and aircraft in the bad weather, and only a little information was sent out through another country's maritime satellite. Thus, the timely and effective deployment of disaster relief was seriously affected. Now that Chinese communication satellites are in orbit, people no longer rely only on ground telecom base stations to communicate with the outside world, open web pages, log in to social networking sites, release information, and transmit images and videos. This paper establishes an earthquake information collection system in which the public can participate. Through popular social platforms and other information sources, the public can take part in the collection of earthquake information and supply information from the quake zone, including photos and video, especially material captured by unmanned aerial vehicles (UAVs) after an earthquake; the public can use computers, portable terminals, or mobile text messages to participate. In the system, the information is divided into basic earthquake zone information, earthquake disaster reduction information, earthquake site information, post-disaster reconstruction information, etc., which are processed and stored in a database. The quality of the data is analyzed against multiple information sources and controlled by local public opinion, to supplement the data collected by government departments in a timely manner and to calibrate simulation results, which will better guide

  9. Automated Determination of Magnitude and Source Length of Large Earthquakes

    Science.gov (United States)

    Wang, D.; Kawakatsu, H.; Zhuang, J.; Mori, J. J.; Maeda, T.; Tsuruoka, H.; Zhao, X.

    2017-12-01

    Rapid determination of earthquake magnitude is important for estimating shaking damage and tsunami hazards. However, due to the complexity of the source process, accurately estimating the magnitude of great earthquakes within minutes after origin time is still a challenge. Mw is an accurate estimate for large earthquakes, but calculating Mw requires the whole wave train, including P, S, and surface phases, which takes tens of minutes to reach stations at teleseismic distances. To speed up the calculation, methods using the W phase and body waves have been developed for rapidly estimating earthquake size. Besides these methods that involve Green's functions and inversions, there are other approaches that use empirical relations to estimate earthquake magnitudes, usually for large earthquakes. Their simple implementation and straightforward calculation have made these approaches widely applied at many institutions, such as the Pacific Tsunami Warning Center, the Japan Meteorological Agency, and the USGS. Here we develop an approach, originating from Hara [2007], that estimates magnitude from P-wave displacement and source duration. We instead introduce a back-projection technique [Wang et al., 2016] to estimate the source duration using array data from a high-sensitivity seismograph network (Hi-net). The introduction of back-projection improves the method in two ways. First, the source duration can be accurately determined by the seismic array. Second, the results can be calculated more rapidly, and data from more distant stations are not required. We propose to develop an automated system for determining fast and reliable source information for large shallow seismic events based on real-time data from a dense regional array and global data, for earthquakes that occur at distances of roughly 30°-85° from the array center. This system can offer fast and robust estimates of the magnitudes and rupture extents of large earthquakes in 6 to 13 min (plus
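    A Hara [2007]-style estimate combines peak P-wave displacement, epicentral distance, and source duration in a single regression. A minimal sketch follows; the coefficients are the values commonly quoted from that paper, but treat them, and the sample numbers, as illustrative rather than authoritative:

```python
import math

def hara_magnitude(p_disp_max_m, dist_km, duration_s,
                   a=0.79, b=0.83, c=0.69, d=6.47):
    """Magnitude from peak P-wave displacement (m), epicentral distance
    (km), and source duration (s), in the spirit of Hara [2007].

    The regression coefficients (a, b, c, d) are the commonly quoted
    published values; treat them as illustrative here.
    """
    return (a * math.log10(p_disp_max_m)
            + b * math.log10(dist_km)
            + c * math.log10(duration_s)
            + d)

# Example: 2 mm peak P displacement at 6000 km, 100 s source duration.
m = hara_magnitude(2e-3, 6000.0, 100.0)
print(round(m, 2))
```

The back-projection improvement described in the abstract amounts to supplying a better `duration_s` to such a relation, faster, from regional array data.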

  10. Ground-Motion Simulations of the 2008 Ms8.0 Wenchuan, China, Earthquake Using Empirical Green's Function Method

    Science.gov (United States)

    Zhang, W.; Zhang, Y.; Yao, X.

    2010-12-01

    On May 12, 2008, a huge earthquake of magnitude Ms8.0 occurred in Wenchuan, Sichuan Province, China. This event was the most devastating earthquake in mainland China since the 1976 M7.8 Tangshan earthquake. It resulted in tremendous loss of life and property; about 90,000 people were killed. Because it occurred in a mountainous area, this great earthquake and the thousands of aftershocks that followed also caused many other geological disasters, such as landslides, mud-rock flows, and "quake lakes" formed by landslide-dammed rivers. The earthquake occurred along the Longmenshan fault, as the result of motion on a northeast-striking reverse fault, or thrust fault, on the northwestern margin of the Sichuan Basin. The earthquake's epicenter and focal mechanism are consistent with its having occurred as the result of movement on the Longmenshan fault or a tectonically related fault. The earthquake reflects tectonic stresses resulting from the convergence of crustal material slowly moving from the high Tibetan Plateau, to the west, against the strong crust underlying the Sichuan Basin and southeastern China. In this study, we simulate the near-field strong ground motions of this great event using the empirical Green's function (EGF) method. Referring to published inversion source models, we first assume that there are three asperities on the rupture area and choose three different small events as the EGFs. We then identify the parameters of the source model using a genetic algorithm (GA). We calculate synthetic waveforms based on the obtained source model and compare them with the observed records. Our results show that most of the synthetic waveforms agree very well with the observed ones, demonstrating the validity and stability of the method. Finally, we forward-model the near-field strong ground motions near the source region and try to explain the damage distribution caused by the great earthquake.
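    The EGF idea, synthesizing a large event by delaying and summing records of a small event over a grid of subfaults, can be sketched as follows. This is a heavily simplified Irikura-type summation with toy numbers: the real method also scales the subevent moments and applies a slip-rate correction function, and the travel-time term here is collapsed to a constant:

```python
import numpy as np

# Toy small-event record: a narrow Gaussian pulse.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
egf = np.exp(-((t - 0.3) / 0.05) ** 2)

# Synthesize the "large" event by delaying and summing the EGF over an
# N x N grid of subfaults rupturing outward from the hypocentre.
N = 3                     # scaling factor per fault dimension
sub = 1.0                 # subfault size (km)
v_r = 2.8                 # rupture speed (km/s)
hypo = np.array([0.0, 0.0])

syn = np.zeros(4096)
for i in range(N):
    for j in range(N):
        pos = np.array([i * sub, j * sub])
        # Delay = rupture time from hypocentre to subfault
        # (travel time to the station is taken as a common constant).
        delay = np.linalg.norm(pos - hypo) / v_r
        k = int(round(delay / dt))
        syn[k:k + t.size] += egf

# Delayed copies from equidistant subfaults interfere constructively,
# so the synthetic peak exceeds the single-EGF peak.
print(syn.max() > egf.max())
```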

  11. The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake

    Science.gov (United States)

    Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena

    2017-04-01

    very low rupture velocity. The low rupture velocity may indicate slow faulting, which leads to a slow release of the accumulated seismic energy. Slowly released energy causes, in principle, only little to moderate damage. Additionally, the waveform of the earthquake shows low-frequency content in the P-waves (the maximum of the P-wave is at 1.19 Hz), and the P-wave displacement spectrum is characterised by a poorly expressed spectral plateau and corner frequency. These and other signs lead us to the conclusion that the 2012 Mw5.6 earthquake can be considered a type of slow earthquake, like a low-frequency earthquake. The study is based on data from the Bulgarian seismological network (NOTSSI), the local network (LSN) deployed around the Kozloduy NPP, and the System of Accelerographs for Seismic Monitoring of Equipment and Structures (SASMES) installed in the Kozloduy NPP. NOTSSI, jointly with the LSN and SASMES, provides reliable information for multiple studies of seismicity at a regional scale.

  12. Demonstration of pb-PSHA with Ras-Elhekma earthquake, Egypt

    Directory of Open Access Journals (Sweden)

    Elsayed Fergany

    2017-06-01

    Full Text Available The main goals of this work are to: (1) argue for the importance of a physically-based probabilistic seismic hazard analysis (pb-PSHA) methodology and show examples from recent events to support the argument, and (2) demonstrate the methodology with ground motion simulations of the May 28, 1998, Mw = 5.5 Ras-Elhekma earthquake, north Egypt. The bounds on the possible rupture parameters that could have been identified prior to the 1998 Ras-Elhekma earthquake were estimated. A range of simulated ground motions for the Ras-Elhekma earthquake was "predicted" for frequencies of 0.5-25 Hz at three sites where the earthquake was recorded, with average epicentral distances of 220 km. The best rupture model of the 1998 Ras-Elhekma earthquake was identified by calculating the goodness of fit between observed and synthesized records at sites FYM, HAG, and KOT. We used the best rupture scenario of the 1998 earthquake to synthesize the ground motions at sites of interest where the main shock was not recorded. Based on the good fit between simulated and observed seismograms, we conclude that this methodology can provide realistic ground motions for an earthquake and is highly recommended for engineering purposes in advance of large earthquakes at unrecorded sites. We propose that this methodology is needed to represent the true hazard well while reducing uncertainties.

  13. Accelerator and feedback control simulation using neural networks

    International Nuclear Information System (INIS)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

    Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the ''Accelerator'' network is first trained to simulate the dynamics of the beam for a given beam line. This ''Accelerator'' network is then used to train a second ''Controller'' network, which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam, keeping it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs
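    A much-simplified sketch of this two-stage idea: first fit a forward "Accelerator" model from observed machine data, then derive a "Controller" from it that cancels incoming-beam perturbations. Linear least squares stands in for the neural networks, and the beam-line response is a made-up toy, so every name and number below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" beam line: exit position depends linearly on the
# corrector setting u and the incoming-beam offset x0.
def beamline(u, x0):
    return 0.8 * u + 1.3 * x0

# Stage 1: fit the "Accelerator" model from observed machine data
# (a linear stand-in for training the forward neural network).
U = rng.uniform(-1, 1, 500)
X0 = rng.uniform(-1, 1, 500)
Y = beamline(U, X0) + 0.01 * rng.standard_normal(500)
A = np.column_stack([U, X0])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)   # learned response model

# Stage 2: the "Controller" inverts the learned model to pick the
# corrector setting that drives the predicted exit position to zero.
def controller(x0):
    return -coef[1] * x0 / coef[0]

x0 = 0.5                                       # perturbed incoming beam
u = controller(x0)
residual = beamline(u, x0)                     # actual exit position
print(abs(residual) < 0.05)
```

The point of the two-stage structure is that the controller is trained against the learned model rather than the machine itself, so it can be retrained as the "Accelerator" model adapts.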

  14. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.
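    The adaptation dynamics can be illustrated on a tiny resistor-style graph: solve Kirchhoff's laws for the flows, then let each edge's conductivity grow with the flow it carries and decay otherwise, so little-used edges atrophy. The specific update rule and parameters below are illustrative, in the spirit of the Cai-Hu model rather than its exact functional:

```python
import numpy as np

# Square graph 0-1-3, 0-2-3 with a cross edge (1, 2); unit flow is
# injected at node 0 and extracted at node 3.
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (1, 2)]
n = 4
C = np.ones(len(edges))                        # edge conductivities
b = np.array([1.0, 0.0, 0.0, -1.0])            # source/sink vector

for _ in range(200):
    # Kirchhoff: weighted graph Laplacian, solved for node potentials.
    L = np.zeros((n, n))
    for c, (i, j) in zip(C, edges):
        L[i, i] += c; L[j, j] += c
        L[i, j] -= c; L[j, i] -= c
    p = np.linalg.lstsq(L, b, rcond=None)[0]   # min-norm fixes the gauge
    Q = np.array([c * (p[i] - p[j]) for c, (i, j) in zip(C, edges)])
    # Illustrative adaptation: growth with squared flow vs. metabolic decay.
    C += 0.1 * (Q ** 2 - 0.5 * C)
    C = np.clip(C, 1e-6, None)

# By symmetry the cross edge (1, 2) carries no flow and decays away,
# while the four path edges settle near C = 0.5.
print(C.round(3))
```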

  15. Dynamic rupture simulations of the 2016 Mw7.8 Kaikōura earthquake: a cascading multi-fault event

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Ampuero, J. P.; Xu, W.; Feng, G.

    2017-12-01

    The Mw7.8 Kaikōura earthquake struck the northern part of New Zealand's South Island roughly one year ago. It ruptured multiple segments of the contractional North Canterbury fault zone and of the Marlborough fault system. Field observations combined with satellite data suggest a rupture path involving partly unmapped faults separated by stepover distances larger than 5 km, the maximum distance usually considered by the latest seismic hazard assessment methods. This may imply distant rupture transfer mechanisms generally not considered in seismic hazard assessment. We present high-resolution 3D dynamic rupture simulations of the Kaikōura earthquake under physically self-consistent initial stress and strength conditions. Our simulations are based on recent finite-fault slip inversions that constrain the fault system geometry and final slip distribution from remote sensing, surface rupture, and geodetic data (Xu et al., 2017). We assume a uniform background stress field, without lateral fault stress or strength heterogeneity. We use the open-source software SeisSol (www.seissol.org), which is based on an arbitrary high-order accurate DERivative Discontinuous Galerkin method (ADER-DG). Our method can account for complex fault geometries, high-resolution topography and bathymetry, 3D subsurface structure, off-fault plasticity, and modern friction laws. It enables the simulation of seismic wave propagation with high-order accuracy in space and time in complex media. We show that a cascading rupture driven by dynamic triggering can break all fault segments that were involved in this earthquake without mechanically requiring an underlying thrust fault. Our preferred fault geometry connects most fault segments and does not feature stepovers larger than 2 km. The best scenario matches the main macroscopic characteristics of the earthquake, including its apparently slow rupture propagation caused by zigzag cascading, the moment magnitude, and the overall inferred slip

  16. Modeling of earthquake ground motion in the frequency domain

    Science.gov (United States)

    Thrainsson, Hjortur

    In recent years, the utilization of time histories of earthquake ground motion has grown considerably in the design and analysis of civil structures. It is very unlikely, however, that recordings of earthquake ground motion will be available for all sites and conditions of interest. Hence, there is a need for efficient methods for the simulation and spatial interpolation of earthquake ground motion. In addition to providing estimates of the ground motion at a site using data from adjacent recording stations, spatially interpolated ground motions can also be used in design and analysis of long-span structures, such as bridges and pipelines, where differential movement is important. The objective of this research is to develop a methodology for rapid generation of horizontal earthquake ground motion at any site for a given region, based on readily available source, path and site characteristics, or (sparse) recordings. The research includes two main topics: (i) the simulation of earthquake ground motion at a given site, and (ii) the spatial interpolation of earthquake ground motion. In topic (i), models are developed to simulate acceleration time histories using the inverse discrete Fourier transform. The Fourier phase differences, defined as the difference in phase angle between adjacent frequency components, are simulated conditional on the Fourier amplitude. Uniformly processed recordings from recent California earthquakes are used to validate the simulation models, as well as to develop prediction formulas for the model parameters. The models developed in this research provide rapid simulation of earthquake ground motion over a wide range of magnitudes and distances, but they are not intended to replace more robust geophysical models. In topic (ii), a model is developed in which Fourier amplitudes and Fourier phase angles are interpolated separately. A simple dispersion relationship is included in the phase angle interpolation. 
The accuracy of the interpolation
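    The simulation idea of topic (i), prescribe Fourier amplitudes, simulate phase differences between adjacent frequency components, accumulate them into phase angles, and inverse-transform, can be sketched as follows. Here the phase differences are simply drawn uniformly rather than conditioned on the Fourier amplitude as in the thesis, and the amplitude spectrum is a toy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Frequency-domain simulation sketch: amplitudes are prescribed, phase
# differences are simulated, and the accelerogram is recovered with the
# inverse discrete Fourier transform.
n, dt = 1024, 0.02
freq = np.fft.rfftfreq(n, dt)

amp = freq * np.exp(-freq / 1.0)                 # toy spectrum, ~1 Hz peak
dphi = rng.uniform(-2 * np.pi, 0.0, freq.size)   # simulated phase differences
phi = np.cumsum(dphi)                            # accumulate into phase angles
acc = np.fft.irfft(amp * np.exp(1j * phi), n)    # simulated accelerogram

print(acc.shape)
```

In the actual methodology, the distribution of the phase differences (conditioned on amplitude) is what shapes the non-stationary envelope of the simulated motion; the uniform draw above produces an envelope spread over the whole duration.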

  17. Event-based simulation of networks with pulse delayed coupling

    Science.gov (United States)

    Klinshov, Vladimir; Nekorkin, Vladimir

    2017-10-01

    Pulse-mediated interactions are common in networks of different nature. Here we develop a general framework for simulation of networks with pulse delayed coupling. We introduce the discrete map governing the dynamics of such networks and describe the computation algorithm for its numerical simulation.
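    An event-based scheme of this kind advances the state analytically from one event to the next instead of time-stepping. A minimal sketch for two delayed pulse-coupled phase oscillators follows; the phase map and all parameters are illustrative, not the paper's model:

```python
import heapq

# Two oscillators whose phases grow linearly at unit rate; a phase
# reaching 1 resets to 0 and emits a pulse that advances the *other*
# oscillator's phase by eps after a delay tau. Between events the
# evolution is exact, so only events need to be processed.
tau, eps, t_end = 0.3, 0.05, 20.0
phase = [0.0, 0.4]
t_now = 0.0
events = []                                    # heap of (time, kind, target)
spikes = [[], []]

def next_fire(i):
    return t_now + (1.0 - phase[i])

for i in (0, 1):
    heapq.heappush(events, (next_fire(i), "fire", i))

while events:
    t_ev, kind, i = heapq.heappop(events)
    if t_ev > t_end:
        break
    for j in (0, 1):                           # advance phases to event time
        phase[j] += t_ev - t_now
    t_now = t_ev
    if kind == "fire" and phase[i] >= 1.0 - 1e-9:
        phase[i] = 0.0                         # reset and emit a delayed pulse
        spikes[i].append(t_now)
        heapq.heappush(events, (t_now + tau, "pulse", 1 - i))
        heapq.heappush(events, (next_fire(i), "fire", i))
    elif kind == "pulse":
        phase[i] = min(1.0, phase[i] + eps)    # pulse advances the phase
        # The advanced oscillator fires earlier: schedule the new firing.
        # (Its old "fire" event becomes stale and is rejected by the
        # phase guard above when it pops.)
        heapq.heappush(events, (next_fire(i), "fire", i))

print(len(spikes[0]), len(spikes[1]))
```

With a natural period of 1 and excitatory pulses, each oscillator fires slightly more than 20 times in 20 time units.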

  18. Contribution of the Surface and Down-Hole Seismic Networks to the Location of Earthquakes at the Soultz-sous-Forêts Geothermal Site (France)

    Science.gov (United States)

    Kinnaert, X.; Gaucher, E.; Kohl, T.; Achauer, U.

    2018-03-01

    Seismicity induced in geo-reservoirs can be a valuable observation to image fractured reservoirs, to characterize hydrological properties, or to mitigate seismic hazard. However, this requires accurate location of the seismicity, which is nowadays an important seismological task in reservoir engineering. The earthquake location (determination of the hypocentres) depends on the model used to represent the medium in which the seismic waves propagate and on the seismic monitoring network. In this work, location uncertainties and location inaccuracies are modeled to investigate the impact of several parameters on the determination of the hypocentres: the picking uncertainty, the numerical precision of picked arrival times, a velocity perturbation and the seismic network configuration. The method is applied to the geothermal site of Soultz-sous-Forêts, which is located in the Upper Rhine Graben (France) and which was subject to detailed scientific investigations. We focus on a massive water injection performed in the year 2000 to enhance the productivity of the well GPK2 in the granitic basement, at approximately 5 km depth, and which induced more than 7000 earthquakes recorded by down-hole and surface seismic networks. We compare the location errors obtained from the joint or the separate use of the down-hole and surface networks. Besides the quantification of location uncertainties caused by picking uncertainties, the impact of the numerical precision of the picked arrival times as provided in a reference catalogue is investigated. The velocity model is also modified to mimic possible effects of a massive water injection and to evaluate its impact on earthquake hypocentres. It is shown that the use of the down-hole network in addition to the surface network provides smaller location uncertainties but can also lead to larger inaccuracies. Hence, location uncertainties would not be well representative of the location errors and interpretation of the seismicity

  19. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.

  20. Harnessing the Collective Power of Eyewitnesses for Improved Earthquake Information

    Science.gov (United States)

    Bossu, R.; Lefebvre, S.; Mazet-Roux, G.; Steed, R.

    2013-12-01

    The Euro-Med Seismological Centre (EMSC) operates the second most visited global earthquake information website (www.emsc-csem.org), which attracts 2 million visits a month from about 200 different countries. We collect information about earthquakes' effects from eyewitnesses, such as online questionnaires and geolocated pictures, to rapidly constrain impact scenarios. At the beginning, the collection was purely intended to address a scientific issue: the rapid evaluation of an earthquake's impact. However, it rapidly became apparent that understanding eyewitnesses' expectations and motivations in the immediate aftermath of an earthquake was essential to optimise this data collection. Crowdsourcing information on an earthquake's effects does not apply to a pre-existing community: by definition, eyewitnesses only exist once the earthquake has struck. We developed a strategy on social networks (Facebook, Google+, Twitter...) to interface with spontaneously emerging online communities of eyewitnesses. The basic idea is to create a positive feedback loop: attract eyewitnesses and engage with them by providing expected earthquake information and services, collect their observations, and collate them into improved earthquake information services to attract more witnesses. We will present recent examples illustrating how important the use of social networks is for engaging with eyewitnesses, especially in regions of low seismic activity where people are unaware of existing Internet resources dealing with earthquakes. A second type of information collated in our information services is derived from real-time analysis of the traffic on our website in the first minutes following an earthquake occurrence, an approach named flashsourcing. We show, using the example of the Mineral, Virginia, earthquake, that the arrival times of eyewitnesses on our website follow the propagation of the generated seismic waves, and thus that eyewitnesses can be considered ground motion sensors. Flashsourcing discriminates felt

  1. Numerical tsunami simulations in the western Pacific Ocean and East China Sea from hypothetical M 9 earthquakes along the Nankai trough

    Science.gov (United States)

    Harada, Tomoya; Satake, Kenji; Furumura, Takashi

    2017-04-01

    We carried out tsunami numerical simulations in the western Pacific Ocean and East China Sea in order to examine the behavior of massive tsunamis outside Japan for the hypothetical M 9 tsunami source models along the Nankai Trough proposed by the Cabinet Office of the Japanese government (2012). The distributions of MTHs (maximum tsunami heights for 24 h after the earthquakes) on the east coast of China, the east coast of the Philippine Islands, and the north coast of New Guinea Island show peaks of approximately 1.0-1.7 m, 4.0-7.0 m, and 4.0-5.0 m, respectively. These are significantly higher than those from the 1707 Hoei earthquake (M 8.7), the largest earthquake along the Nankai trough in recent Japanese history. Moreover, the MTH distributions vary with the location of the huge slip(s) in the tsunami source models, although the three coasts are far from the Nankai trough. Huge slip(s) in the Nankai segment mainly contribute to the MTHs, while huge slip(s) or splay faulting in the Tokai segment hardly affect the MTHs. The tsunami source models were developed in response to the unexpected occurrence of the 2011 Tohoku earthquake, with 11 models along the Nankai trough, and the simulated MTHs along the Pacific coasts of western Japan from these models exceed 10 m, with a maximum height of 34.4 m. Tsunami propagation was computed by the finite-difference method for the non-linear long-wave equations with the Coriolis force and bottom friction (Satake, 1995) in the area 115-155°E and 8°S-40°N. Because the water depth of the East China Sea is shallower than 200 m, tsunami propagation there is likely to be affected by ocean-bottom friction. The 30 arc-second gridded bathymetry data provided by the General Bathymetric Chart of the Oceans (GEBCO-2014) were used. To capture the long propagation of the tsunamis, we simulated them for 24 hours after the earthquakes. This study was supported by the "New disaster mitigation research project on Mega thrust earthquakes around Nankai
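    The finite-difference long-wave scheme can be illustrated in one dimension: water height and flux live on staggered grids and are updated alternately under a CFL-stable time step. This sketch is linearized, flat-bottomed, and omits the Coriolis and friction terms used in the actual study; all numbers are illustrative:

```python
import numpy as np

# 1-D linear long-wave (shallow-water) finite differences on a
# staggered grid: eta at cell centres, flux M at cell faces.
g = 9.8
nx, dx = 400, 2000.0                    # 800 km domain, 2 km grid
h = np.full(nx, 4000.0)                 # uniform 4 km depth
dt = 0.5 * dx / np.sqrt(g * h.max())    # CFL-stable time step (Courant 0.5)

x = np.arange(nx) * dx
eta = np.exp(-((x - 200 * dx) / 3e4) ** 2)   # initial 1 m hump, ~30 km wide
M = np.zeros(nx + 1)                    # flux, zero at the walls

for _ in range(300):
    # Momentum update from the surface gradient, then continuity
    # update from the flux divergence (staggered leapfrog in time).
    M[1:-1] -= g * h[:-1] * dt / dx * (eta[1:] - eta[:-1])
    eta -= dt / dx * (M[1:] - M[:-1])

# The hump splits into two outgoing waves of roughly half the initial
# height, travelling at the long-wave speed sqrt(g*h) ~ 198 m/s.
print(round(float(eta.max()), 2))
```

The shallow-water (long-wave) speed sqrt(g*h) is also why the shallow East China Sea slows the tsunami and makes bottom friction matter there.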

  2. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  3. Spatial Distribution of earthquakes off the coast of Fukushima Two Years after the M9 Earthquake: the Southern Area of the 2011 Tohoku Earthquake Rupture Zone

    Science.gov (United States)

    Yamada, T.; Nakahigashi, K.; Shinohara, M.; Mochizuki, K.; Shiobara, H.

    2014-12-01

    Huge earthquakes cause vast stress-field changes around their rupture zones, and many aftershocks and other related geophysical phenomena, such as geodetic movements, have been observed. It is important to determine the spatiotemporal distribution of seismicity during the relaxation process in order to understand the giant earthquake cycle. In this study, we focus on the southern rupture area of the 2011 Tohoku earthquake (M9.0). The seismicity rate remains high compared with that before the 2011 earthquake. Many studies using ocean bottom seismometers (OBSs) have been conducted since soon after the 2011 Tohoku earthquake in order to observe the aftershock activity precisely. Here we present one such study off the coast of Fukushima, on the southern part of the rupture area of the 2011 Tohoku earthquake. We deployed 4 broadband-type OBSs (BBOBSs) and 12 short-period-type OBSs (SOBSs) in August 2012. Another 4 BBOBSs equipped with absolute pressure gauges and 20 SOBSs were added in November 2012. We recovered 36 OBSs, including 8 BBOBSs, in November 2013. We selected 1,000 events in the vicinity of the OBS network based on a hypocenter catalog published by the Japan Meteorological Agency, and extracted the data after applying time corrections for each internal clock. The P- and S-wave arrival times, P-wave polarities, and maximum amplitudes were picked manually on a computer display. We assumed a one-dimensional velocity structure based on the results of an active-source experiment across our network, and applied time corrections at every station to remove ambiguity in the assumed structure. We then adopted a maximum-likelihood estimation technique and calculated the hypocenters. The results show intensive activity near the Japan Trench, while there was a quiet seismic zone between the trench zone and the landward zone of high activity.
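    A toy version of hypocentre determination from picked arrival times: grid-search candidate locations, with the unknown origin time absorbed by demeaning the residuals (maximizing a Gaussian likelihood is equivalent to minimizing this least-squares misfit). The uniform velocity and 2-D geometry are simplifications of the study's 1-D structure and station corrections; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

vp = 6.0                                         # uniform P velocity, km/s
stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0],
                     [45.0, 35.0], [25.0, 20.0]])
true_src = np.array([22.0, 18.0])

# "Picked" arrivals: true travel times plus 0.05 s picking noise.
tt = np.linalg.norm(stations - true_src, axis=1) / vp
obs = tt + 0.05 * rng.standard_normal(tt.size)

best, best_cost = None, np.inf
for x in np.arange(0.0, 50.0, 0.5):
    for y in np.arange(0.0, 50.0, 0.5):
        pred = np.linalg.norm(stations - [x, y], axis=1) / vp
        r = obs - pred
        r -= r.mean()                            # absorb unknown origin time
        cost = float((r ** 2).sum())
        if cost < best_cost:
            best, best_cost = (x, y), cost

print(best)                                      # near (22, 18)
```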

  4. Seismological investigation of earthquakes in the New Madrid Seismic Zone

    International Nuclear Information System (INIS)

    Herrmann, R.B.; Nguyen, B.

    1993-08-01

    Earthquake activity in the New Madrid Seismic Zone has been monitored by regional seismic networks since 1975. During this time period, over 3,700 earthquakes have been located within the region bounded by latitudes 35°-39°N and longitudes 87°-92°W. Most of these earthquakes occur within a 1.5° x 2° zone centered on the Missouri Bootheel. Source parameters of the larger earthquakes in the zone and in eastern North America are determined using surface-wave spectral amplitudes and broadband waveforms for the purpose of determining the focal mechanism, source depth, and seismic moment. Waveform modeling of broadband data is shown to be a powerful tool in defining these source parameters when used in a manner complementary to regional seismic network data, and, in addition, in verifying the correctness of previously published focal mechanism solutions

  5. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  6. Ionospheric earthquake effects detection based on Total Electron Content (TEC) GPS Correlation

    Science.gov (United States)

    Sunardi, Bambang; Muslim, Buldan; Eka Sakya, Andi; Rohadi, Supriyanto; Sulastri; Murjaya, Jaya

    2018-03-01

    Advances in science and technology have shown that ground-based GPS receivers are able to detect ionospheric Total Electron Content (TEC) disturbances caused by various natural phenomena, such as earthquakes. One study of the Tohoku (Japan) earthquake of March 11, 2011, magnitude M 9.0, showed TEC fluctuations observed across the GPS observation network spread around the disaster area. This paper discusses the detection of ionospheric earthquake effects using TEC GPS data. The case studies taken were the Kebumen earthquake of January 25, 2014, magnitude M 6.2; the Sumba earthquake of February 12, 2016, M 6.2; and the Halmahera earthquake of February 17, 2016, M 6.1. A 31-day TEC-GIM (Global Ionosphere Map) correlation method was used to monitor TEC anomalies in the ionosphere. To rule out geomagnetic disturbances due to solar activity, we also compared with the Dst index in the same time window. The results showed an anomalous ratio of the correlation coefficient deviation to its standard deviation upon the occurrences of the Kebumen and Sumba earthquakes, but no similar anomaly was detected for the Halmahera earthquake. Continuous monitoring of TEC GPS data is needed to detect earthquake effects in the ionosphere. This study gives hope of strengthening earthquake-effect early warning systems using TEC GPS data. Further development of methods for continuous TEC observation from the GPS network that already exists in Indonesia is needed to support such early warning systems.
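    The TEC-GIM correlation test can be sketched as follows: correlate each day's TEC curve with a reference (GIM) curve over a 31-day window, then flag days whose correlation deviates from the window mean by more than a few standard deviations. The data here are synthetic and the threshold is illustrative, not the paper's calibrated value:

```python
import numpy as np

rng = np.random.default_rng(4)

days, samples = 31, 96                            # 15-min TEC samples per day
diurnal = np.sin(np.linspace(0, np.pi, samples))  # common diurnal shape

gim = np.tile(diurnal, (days, 1))                 # reference (GIM) curves
tec = gim + 0.05 * rng.standard_normal((days, samples))
tec[25] += 0.5 * np.roll(diurnal, 20)             # inject an anomaly on day 25

# Daily correlation with the reference, then its standardized deviation.
corr = np.array([np.corrcoef(tec[d], gim[d])[0, 1] for d in range(days)])
dev = (corr - corr.mean()) / corr.std()
flagged = np.where(dev < -2.0)[0]                 # anomalously low correlation
print(flagged)
```

In practice such a flag is only meaningful after quiet geomagnetic conditions are confirmed (hence the Dst index comparison in the study).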

  7. Adaptive Gain Scheduled Semiactive Vibration Control Using a Neural Network

    Directory of Open Access Journals (Sweden)

    Kazuhiko Hiramoto

    2018-01-01

    Full Text Available We propose an adaptive gain-scheduled semiactive control method using an artificial neural network for structural systems subject to earthquake disturbances. In order to design a semiactive control system with high control performance against earthquakes with different time and/or frequency properties, multiple semiactive control laws, each with high performance for one of multiple earthquake disturbances, are scheduled in an adaptive manner. Each semiactive control law to be scheduled is designed based on the output emulation approach that has been proposed by the authors. As the adaptive gain-scheduling mechanism, we introduce an artificial neural network (ANN). The input signals of the ANN are the measured earthquake disturbance itself, for example, the acceleration, velocity, and displacement. The output of the ANN is the parameter for the scheduling of the multiple semiactive control laws, each of which has been optimized for a single disturbance. Parameters such as the weights and biases in the ANN are optimized by a genetic algorithm (GA). The proposed design method is applied to the semiactive control design of a base-isolated building with a semiactive damper. In a simulation study, the proposed adaptive gain-scheduling method achieves control performance exceeding that of a single semiactive control law optimized for the average of the control performance over various earthquake disturbances.

  8. Mw 8.5 BENGKULU EARTHQUAKES FROM CONTINUOUS GPS DATA

    Directory of Open Access Journals (Sweden)

    W. A. W. Aris

    2016-09-01

    Full Text Available The Mw 8.5 Bengkulu earthquake of 30 September 2007 and the Mw 8.6 earthquake of 28 March 2005 are considered amongst the largest earthquakes ever recorded in Southeast Asia. Their impact on tectonic deformation was recorded by a network of Global Positioning System (GPS) Continuously Operating Reference Stations (CORS) within southern Sumatra and the west coast of Peninsular Malaysia. The GPS data from the CORS network were used to investigate the characteristics of postseismic deformation due to the earthquakes. Analytical logarithmic and exponential functions were applied to investigate the decay period of the postseismic deformation. This investigation provides a preliminary insight into the postseismic cycle along the Sumatra subduction zone in particular and into the dynamics of Peninsular Malaysia in general.
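A minimal version of fitting an analytical logarithmic decay to a postseismic displacement series might look like the sketch below: grid-search the relaxation time and solve the amplitude in closed form at each candidate. The synthetic time series, parameter values, and function names are assumptions for illustration, not the paper's data or fitted coefficients.

```python
import numpy as np

def fit_log_decay(t, u, taus):
    """Fit u(t) = a * ln(1 + t/tau): grid-search tau, amplitude a solved
    by linear least squares at each candidate (illustrative sketch)."""
    best = None
    for tau in taus:
        g = np.log(1.0 + t / tau)
        a = np.dot(g, u) / np.dot(g, g)      # closed-form LS amplitude
        rss = np.sum((u - a * g) ** 2)
        if best is None or rss < best[2]:
            best = (a, tau, rss)
    return best

# synthetic postseismic displacement (mm) over 500 days: a=40, tau=30
t = np.arange(1.0, 501.0)
rng = np.random.default_rng(1)
u = 40.0 * np.log(1.0 + t / 30.0) + 1.0 * rng.standard_normal(t.size)
a_hat, tau_hat, rss = fit_log_decay(t, u, taus=np.arange(5.0, 100.0, 1.0))
```

The same scaffolding fits the exponential alternative, u(t) = a·(1 − exp(−t/tau)), by swapping the basis function.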

  9. Smartphone MEMS accelerometers and earthquake early warning

    Science.gov (United States)

    Kong, Q.; Allen, R. M.; Schreier, L.; Kwon, Y. W.

    2015-12-01

    The low-cost MEMS accelerometers in smartphones are attracting more and more attention from the science community due to their vast numbers and potential applications in various areas. We are using the accelerometers inside smartphones to detect earthquakes. We performed shake table tests to show that these accelerometers are also suitable for recording large shaking caused by earthquakes. We developed an Android app, MyShake, which can distinguish earthquake movements from daily human activities in the recordings from the accelerometers of personal smartphones, and upload trigger information/waveforms to our server for further analysis. The data from these smartphones form a unique dataset for seismological applications, such as earthquake early warning. In this talk I will lay out the method we used to recognize earthquake-like movement from a single smartphone, and give an overview of the whole system that harnesses the information from a network of smartphones for rapid earthquake detection. This type of system can be easily deployed and scaled up around the globe and provides additional insights into earthquake hazards.
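MyShake itself uses a trained classifier to separate earthquake motion from human activity; as a simpler illustration of triggering on a single accelerometer stream, here is a classic STA/LTA (short-term over long-term average) detector. All window lengths, thresholds, and data below are assumed for the sketch, not MyShake's actual algorithm.

```python
import numpy as np

def sta_lta_trigger(acc, fs, sta_win=1.0, lta_win=10.0, threshold=4.0):
    """Return the first sample index where the short-term average of
    |acceleration| exceeds `threshold` times the long-term average,
    or None if no trigger occurs. (Classic STA/LTA sketch.)"""
    x = np.abs(acc)
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    for i in range(n_lta, len(x) - n_sta):
        lta = (csum[i] - csum[i - n_lta]) / n_lta      # trailing window
        sta = (csum[i + n_sta] - csum[i]) / n_sta      # leading window
        if lta > 0 and sta / lta > threshold:
            return i
    return None

# 60 s of sensor noise at 50 Hz with a strong onset at t = 30 s
fs = 50
rng = np.random.default_rng(2)
acc = 0.01 * rng.standard_normal(60 * fs)
acc[30 * fs:] += 0.2 * rng.standard_normal(30 * fs)
onset = sta_lta_trigger(acc, fs)
```

An STA/LTA trigger alone would fire on footsteps or a dropped phone, which is exactly why MyShake adds a trained classifier and network-level corroboration on top of single-device triggers.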

  10. Modeling of a historical earthquake in Erzincan, Turkey (Ms 7.8, in 1939) using regional seismological information obtained from a recent event

    Science.gov (United States)

    Karimzadeh, Shaghayegh; Askan, Aysegul

    2018-04-01

    Located within a basin structure, at the conjunction of the North East Anatolian, North Anatolian and Ovacik Faults, the Erzincan city center (Turkey) is one of the most hazardous regions in the world. The combination of the seismotectonic and geological settings of the region has resulted in a series of significant seismic activities, including the 1939 (Ms 7.8) as well as the 1992 (Mw = 6.6) earthquakes. The devastating 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available. Thus, a limited number of studies exist on that earthquake. However, the 1992 event, despite the sparse local network at that time, has been studied extensively. This study aims to simulate the 1939 Erzincan earthquake using available regional seismic and geological parameters. Despite several uncertainties involved, such an effort to quantitatively model the 1939 earthquake is promising, given the historical reports of extensive damage and fatalities in the area. The results of this study are expressed in terms of anticipated acceleration time histories at certain locations, the spatial distribution of selected ground motion parameters, and felt intensity maps of the region. Simulated motions are first compared against empirical ground motion prediction equations derived from both local and global datasets. Next, anticipated intensity maps of the 1939 earthquake are obtained using local correlations between peak ground motion parameters and felt intensity values. Comparisons of the estimated intensity distributions with the corresponding observed intensities indicate a reasonable modeling of the 1939 earthquake.

  11. Detection of Repeating Earthquakes within the Cascadia Subduction Zone Using 2013-2014 Cascadia Initiative Amphibious Network Data

    Science.gov (United States)

    Kenefic, L.; Morton, E.; Bilek, S.

    2017-12-01

    It is well known that subduction zones create the largest earthquakes in the world, like the magnitude 9.5 Chile earthquake in 1960, or the more recent magnitude 9.1 Japan earthquake in 2011, both of which are in the top five largest earthquakes ever recorded. However, off the coast of the Pacific Northwest region of the U.S., the Cascadia subduction zone (CSZ) remains relatively quiet, and modern seismic instruments have not recorded earthquakes of this size in the CSZ. The last great earthquake, a magnitude 8.7-9.2, occurred in 1700 and is constrained by written reports of the resultant tsunami in Japan and by dating of a drowned forest in the U.S. Previous studies have suggested the margin is most likely segmented along-strike. However, variations in frictional conditions in the CSZ fault zone are not well known. Geodetic modeling indicates that the locked seismogenic zone is likely completely offshore, which may be too far from land seismometers to adequately detect related seismicity. Ocean bottom seismometers, as part of the Cascadia Initiative Amphibious Network, were installed directly above the inferred seismogenic zone, which we use to better detect small interplate seismicity. Using the subspace detection method, this study seeks to find new seismogenic zone earthquakes. The subspace detection method uses multiple previously known event templates concurrently to scan through continuous seismic data. Template events that make up the subspace are chosen from events in existing catalogs that likely occurred along the plate interface. Corresponding waveforms are windowed on the nearby Cascadia Initiative ocean bottom seismometers and coastal land seismometers for scanning. Detections found by the scan match the template waveforms above a predefined threshold. Detections are then visually examined to determine if an event is present. The presence of repeating event clusters can indicate persistent seismic patches, likely corresponding to
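Subspace detection generalizes matched filtering to a basis spanning several templates; its single-template building block, normalized cross-correlation of a known waveform against continuous data, can be sketched as below. The template, noise levels, and threshold are hypothetical synthetic choices, not the Cascadia Initiative data or the study's detector.

```python
import numpy as np

def match_template(data, template, threshold=0.8):
    """Slide one template over continuous data and return (offset, cc)
    pairs where the normalized cross-correlation exceeds `threshold`.
    (Single-template sketch; subspace detection spans several templates.)"""
    m = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    hits = []
    for i in range(len(data) - m + 1):
        w = data[i:i + m] - data[i:i + m].mean()
        denom = np.linalg.norm(w) * t_norm
        if denom > 0:
            cc = np.dot(w, t) / denom
            if cc > threshold:
                hits.append((i, cc))
    return hits

# bury two scaled copies of a synthetic template in background noise
rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)
data = 0.1 * rng.standard_normal(5000)
data[1000:1200] += 1.0 * template
data[3000:3200] += 0.5 * template
hits = match_template(data, template)
```

Both buried copies, including the half-amplitude one, correlate well above the detection threshold, which is why template-based methods lower the effective magnitude of completeness relative to conventional picking.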

  12. Satellite Geodetic Constraints On Earthquake Processes: Implications of the 1999 Turkish Earthquakes for Fault Mechanics and Seismic Hazards on the San Andreas Fault

    Science.gov (United States)

    Reilinger, Robert

    2005-01-01

    Our principal activities during the initial phase of this project include: 1) Continued monitoring of postseismic deformation for the 1999 Izmit and Duzce, Turkey earthquakes from repeated GPS survey measurements and expansion of the Marmara Continuous GPS Network (MAGNET), 2) Establishing three North Anatolian fault crossing profiles (10 sites/profile) at locations that experienced major surface-fault earthquakes at different times in the past to examine strain accumulation as a function of time in the earthquake cycle (2004), 3) Repeat observations of selected sites in the fault-crossing profiles (2005), 4) Repeat surveys of the Marmara GPS network to continue to monitor postseismic deformation, 5) Refining block models for the Marmara Sea seismic gap area to better understand earthquake hazards in the Greater Istanbul area, 6) Continuing development of models for afterslip and distributed viscoelastic deformation for the earthquake cycle. We are keeping close contact with MIT colleagues (Brad Hager and Eric Hetland) who are developing models for S. California and for the earthquake cycle in general (Hetland, 2006). In addition, our Turkish partners at the Marmara Research Center have undertaken repeat micro-gravity measurements at the MAGNET sites and have provided us estimates of gravity change during the period 2003 - 2005.

  13. Earthquake Source Parameters Inferred from T-Wave Observations

    Science.gov (United States)

    Perrot, J.; Dziak, R.; Lau, T. A.; Matsumoto, H.; Goslin, J.

    2004-12-01

    The seismicity of the North Atlantic Ocean has been recorded by two networks of autonomous hydrophones moored within the SOFAR channel on the flanks of the Mid-Atlantic Ridge (MAR). In February 1999, a consortium of U.S. investigators (NSF and NOAA) deployed a 6-element hydrophone array for long-term monitoring of MAR seismicity between 15°-35°N south of the Azores. In May 2002, an international collaboration of French, Portuguese, and U.S. researchers deployed a 6-element hydrophone array north of the Azores Plateau from 40°-50°N. The northern network (referred to as SIRENA) was recovered in September 2003. The low attenuation properties of the SOFAR channel for earthquake T-wave propagation result in a detection threshold reduction from a magnitude completeness level (Mc) of ~4.7 for MAR events recorded by the land-based seismic networks to Mc=3.0 using hydrophone arrays. Detailed focal depth and mechanism information, however, remain elusive due to the complexities of seismo-acoustic propagation paths. Nonetheless, recent analyses (Dziak, 2001; Park and Odom, 2001) indicate fault parameter information is contained within the T-wave signal packet. We investigate this relationship further by comparing an earthquake's T-wave duration and acoustic energy to seismic magnitude (NEIC) and radiation pattern (for events M>5) from the Harvard moment-tensor catalog. First results show earthquake energy is well represented by the acoustic energy of the T-waves; however, T-wave codas are significantly influenced by acoustic propagation effects and do not allow a direct determination of the seismic magnitude of the earthquakes. Second, there appears to be a correlation between T-wave acoustic energy, azimuth from earthquake source to the hydrophone, and the radiation pattern of the earthquake's SH waves. These preliminary results indicate there is a relationship between the T-wave observations and earthquake source parameters, allowing for additional insights into T

  14. Analysis of Time Delay Simulation in Networked Control System

    OpenAIRE

    Nyan Phyo Aung; Zaw Min Naing; Hla Myo Tun

    2016-01-01

    The paper presents a PD controller for Networked Control Systems (NCS) with delay. The major challenge in a networked control system (NCS) is the delay of data transmission across the communication network. A comparative performance analysis is carried out for different delays of the network medium. In this paper, simulation is carried out on an AC servo motor control system using a CAN bus as the communication network medium. The TrueTime toolbox of MATLAB is used for simulation to analy...
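The effect of a transmission delay on a PD loop can be reproduced in a few lines of simulation: a double-integrator stand-in for the servo receives its control command through a FIFO buffer that models the network. This is a hand-rolled sketch with assumed gains and delays, not the paper's TrueTime/CAN setup.

```python
import numpy as np

def simulate_pd_with_delay(kp, kd, delay_steps, dt=0.01, steps=2000):
    """Discrete PD position control of a double-integrator 'servo' whose
    command reaches the plant through a FIFO buffer modelling a fixed
    network delay (sketch; gains and delay are illustrative)."""
    pos, vel, setpoint = 0.0, 0.0, 1.0
    u_buffer = [0.0] * (delay_steps + 1)   # the FIFO is the 'network'
    history = []
    for _ in range(steps):
        u_buffer.append(kp * (setpoint - pos) - kd * vel)
        u = u_buffer.pop(0)                # command arrives late
        vel += u * dt
        pos += vel * dt
        history.append(pos)
    return np.array(history)

no_delay = simulate_pd_with_delay(kp=25.0, kd=6.0, delay_steps=0)
delayed = simulate_pd_with_delay(kp=25.0, kd=6.0, delay_steps=10)
```

The delayed loop overshoots noticeably more than the undelayed one, and increasing the delay further eventually destabilizes the loop entirely, which is the phenomenon the paper's analysis quantifies.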

  15. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  16. Interfacing Network Simulations and Empirical Data

    Science.gov (United States)

    2009-05-01

    contraceptive innovations in Cameroon. He found that real-world adoption rates did not follow simulation models when the network relationships were... Analysis of the Coevolution of Adolescents' Friendship Networks, Taste in Music, and Alcohol Consumption. Methodology, 2: 48-56. Tichy, N.M., Tushman

  17. Real-Time Earthquake Monitoring with Spatio-Temporal Fields

    Science.gov (United States)

    Whittier, J. C.; Nittel, S.; Subasinghe, I.

    2017-10-01

    With live streaming sensors and sensor networks, increasingly large numbers of individual sensors are deployed in physical space. Sensor data streams are a fundamentally novel mechanism to deliver observations to information systems. They enable us to represent spatio-temporal continuous phenomena such as radiation accidents, toxic plumes, or earthquakes almost as instantaneously as they happen in the real world. Sensor data streams discretely sample an earthquake, while the earthquake is continuous over space and time. Programmers attempting to integrate many streams to analyze earthquake activity and scope need to write tedious application code to integrate potentially very large sets of asynchronously sampled, concurrent streams. In previous work, we proposed the field stream data model (Liang et al., 2016) for data stream engines. Abstracting the stream of an individual sensor as a temporal field, the field represents the Earth's movement at the sensor position as continuous. This simplifies analysis across many sensors significantly. In this paper, we undertake a feasibility study of using the field stream model and the open source data stream engine (DSE) Apache Spark (Apache Spark, 2017) to implement real-time earthquake event detection with a subset of the 250 GPS sensor data streams of the Southern California Integrated GPS Network (SCIGN). The field-based real-time stream queries compute maximum displacement values over the latest query window of each stream and relate spatially neighboring streams to identify earthquake events and their extent. Further, we correlate the detected events with a USGS earthquake event feed. The query results are visualized in real time.
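The query pattern described, a per-stream windowed maximum displacement corroborated by spatially neighboring streams, can be mocked up outside of a stream engine. The thresholds, neighbor radius, and synthetic streams below are assumptions for illustration, not the SCIGN data or the authors' Spark queries.

```python
import numpy as np

def window_max_displacement(streams, window):
    """Max |displacement| over the latest `window` samples of each stream."""
    return {sid: float(np.max(np.abs(x[-window:]))) for sid, x in streams.items()}

def detect_event(maxima, positions, threshold, min_neighbors=2, radius=50.0):
    """Declare an event when a triggered sensor has at least
    `min_neighbors` triggered neighbors within `radius` km
    (simplified spatial corroboration)."""
    hot = [s for s, v in maxima.items() if v > threshold]
    for s in hot:
        near = [t for t in hot
                if t != s and np.linalg.norm(positions[s] - positions[t]) <= radius]
        if len(near) >= min_neighbors:
            return True
    return False

rng = np.random.default_rng(4)
positions = {i: np.array([10.0 * i, 0.0]) for i in range(5)}   # km
streams = {i: 0.5 * rng.standard_normal(600) for i in range(5)}
for i in (0, 1, 2):                       # shaking on three nearby sensors
    streams[i][-100:] += 30.0 * rng.standard_normal(100)
maxima = window_max_displacement(streams, window=200)
event = detect_event(maxima, positions, threshold=10.0)
```

Requiring several neighbors to trigger together suppresses single-sensor glitches, which is the point of relating spatially neighboring streams in the field-based queries.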

  18. 3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty

    Science.gov (United States)

    Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.

    2017-01-01

    Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, and thus there is no direct measure of the intensity of ground shaking or specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress-drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast.
Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in

  19. An improvement of the Earthworm Based Earthquake Alarm Reporting system in Taiwan

    Science.gov (United States)

    Chen, D. Y.; Hsiao, N. C.; Yih-Min, W.

    2017-12-01

    The Central Weather Bureau of Taiwan (CWB) has operated the Earthworm Based Earthquake Alarm Reporting (eBEAR) system for the purpose of earthquake early warning (EEW). The system has been used to report EEW messages to the general public since 2016 through text messages to mobile phones and through television programs. For inland earthquakes the system is able to provide accurate and fast warnings: the average epicenter error is about 5 km and the processing time is about 15 seconds. The epicenter error is defined as the distance between the epicenter estimated by the EEW system and the manually determined epicenter. The processing time is defined as the time difference between the earthquake origin time and the time the system issues the warning. The CWB seismic network consists of about 200 seismic stations. In some areas of Taiwan the distance between seismic stations is about 10 km, which means that when an earthquake occurs the seismic P wave can propagate through 6 stations, the minimum number of stations required by the EEW system, within 20 km. If the latency of data transmission is about 1 sec, the P-wave velocity is about 6 km per sec, and we take a 3-sec time window to estimate earthquake magnitude, then the processing time should be around 8 sec. In practice, however, the average processing time is larger than this figure. Because outliers among the P-wave onset picks may exist at the beginning of an earthquake, the Geiger method we used in the EEW system for earthquake location is not stable, and it usually takes more time to wait for enough good picks. In this study we used a grid search method to improve the earthquake location estimates. The MAXEL algorithm (Sheen et al., 2015, 2016) was tested in the EEW system by simulating historical earthquakes that occurred in Taiwan. The results show the processing time can be reduced and the location accuracy is acceptable for EEW purposes.
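A bare-bones grid-search locator (a simplification, not the MAXEL algorithm itself) minimizes the RMS of P arrival-time residuals over candidate epicenters, with the origin time solved in closed form at each candidate. The 2-D constant-velocity setup and synthetic network below are assumptions for illustration.

```python
import numpy as np

VP = 6.0  # km/s, assumed constant P-wave velocity

def locate_grid_search(stations, t_arr, grid):
    """For each candidate epicenter, the best origin time is the mean of
    (arrival time - travel time); keep the candidate with the smallest
    RMS residual. (Simplified 2-D grid search.)"""
    best = None
    for g in grid:
        tt = np.linalg.norm(stations - g, axis=1) / VP
        t0 = np.mean(t_arr - tt)
        rms = np.sqrt(np.mean((t_arr - (t0 + tt)) ** 2))
        if best is None or rms < best[1]:
            best = (g, rms, t0)
    return best

# synthetic network and event: origin time 5 s, epicenter (42, 57) km
rng = np.random.default_rng(5)
stations = rng.uniform(0.0, 100.0, size=(8, 2))
true_epi = np.array([42.0, 57.0])
t_arr = np.linalg.norm(stations - true_epi, axis=1) / VP + 5.0
t_arr += 0.05 * rng.standard_normal(8)       # pick noise

xs = np.arange(0.0, 100.0, 1.0)
grid = np.array([[x, y] for x in xs for y in xs])
epi, rms, t0 = locate_grid_search(stations, t_arr, grid)
```

Unlike iterative Geiger inversion, the grid search cannot diverge when early picks are noisy; it simply returns the best cell, which is why such methods are attractive for the first seconds of an EEW solution.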

  20. Implication of conjugate faulting in the earthquake brewing and originating process

    Energy Technology Data Exchange (ETDEWEB)

    Jones, L.M. (Massachusetts Inst. of Tech., Cambridge); Deng, Q.; Jiang, P.

    1980-03-01

    The earthquake sequences, precursors, and geologic-structural background of the Haicheng, Tangshan, and Songpan-Pingwu earthquakes are discussed in this article. All of these earthquakes occurred in a seismic zone controlled by the main boundary faults of an intraplate fault block. However, the fault plane of a main earthquake does not coincide with these faults, but is rather a related secondary fault; together they formed a conjugate shear rupture zone under the action of the regional tectonic stress field. As to the earthquake sequence, the foreshocks and aftershocks may occur on the conjugate fault planes within an epicentral region rather than being limited to the fault plane of the main earthquake, as in the distribution of foreshocks and aftershocks of the Haicheng earthquake. The characteristics of the long-, medium-, and imminent-term earthquake precursory anomalies of the three mentioned earthquakes, especially the well-studied anomaly phenomena in electrical resistivity, radon emission, groundwater, and animal behavior, have been investigated. The studies of these earthquake precursors show that they were distributed over an area rather more extensive than the epicentral region. Some fault zones in the conjugate fault network usually appeared as distributed belts or concentrated zones of earthquake precursory anomalies, and can be traced in the medium-long term precursory field, but seem more distinct in the short-imminent term precursory anomalous field. These characteristics can be explained by the rupture and sliding originating along the conjugate shear network and the concentration of stress in the regional stress field.

  1. A Flexible System for Simulating Aeronautical Telecommunication Network

    Science.gov (United States)

    Maly, Kurt; Overstreet, C. M.; Andey, R.

    1998-01-01

    At Old Dominion University we have built an Aeronautical Telecommunication Network (ATN) simulator, with funding provided by NASA. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of it. We then describe the simulator architecture, highlighting some of the design specifications, scheduling algorithms, and the user interface. Finally, we provide the results of performance studies on this simulator.

  2. Prompt Assessment of Global Earthquakes for Response (PAGER): A System for Rapidly Determining the Impact of Earthquakes Worldwide

    Science.gov (United States)

    Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy

    2009-01-01

    Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.
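The core exposure calculation behind such estimates (summing population over shaking-intensity bins) reduces to masked sums over co-registered grids. The toy intensity-decay formula and uniform population grid below are stand-ins, not PAGER's ShakeMap or LandScan inputs.

```python
import numpy as np

MMI_BINS = [(4, 5), (5, 6), (6, 7), (7, 8), (8, 10)]

def population_exposure(mmi_grid, pop_grid):
    """Sum population exposed to each intensity bin (PAGER-style summary
    over synthetic stand-in grids)."""
    return {f"MMI {lo}-{hi}": float(pop_grid[(mmi_grid >= lo) & (mmi_grid < hi)].sum())
            for lo, hi in MMI_BINS}

# toy grids: intensity decaying with distance from the epicenter cell
n = 101
y, x = np.mgrid[0:n, 0:n]
dist = np.hypot(x - 50, y - 50) + 1.0          # cell units, avoid log(0)
mmi_grid = np.clip(9.5 - 2.5 * np.log10(dist), 1.0, 9.5)
pop_grid = np.full((n, n), 100.0)              # 100 people per cell
exposure = population_exposure(mmi_grid, pop_grid)
```

In PAGER the per-bin exposures then feed empirical loss models; here they simply illustrate that most exposed population sits in the moderate-shaking annuli rather than in the small high-intensity core.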

  3. Dynamic strains for earthquake source characterization

    Science.gov (United States)

    Barbour, Andrew J.; Crowell, Brendan W

    2017-01-01

    Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.
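The general regression of peak strain against magnitude and distance has the familiar log-linear form; the sketch below fits it on synthetic data (the coefficients and catalog are assumed, not those of the PBO analysis, and the site/path bias terms are omitted).

```python
import numpy as np

def fit_strain_model(mag, dist_km, peak_strain):
    """Least-squares fit of log10(strain) = a + b*M + c*log10(R)
    (general form only; coefficients here come from synthetic data)."""
    A = np.column_stack([np.ones_like(mag), mag, np.log10(dist_km)])
    coef, *_ = np.linalg.lstsq(A, np.log10(peak_strain), rcond=None)
    return coef  # a, b, c

# synthetic catalog consistent with the assumed functional form
rng = np.random.default_rng(6)
mag = rng.uniform(4.5, 7.2, 200)
dist = rng.uniform(20.0, 500.0, 200)
log_strain = -10.0 + 1.0 * mag - 1.5 * np.log10(dist) \
             + 0.1 * rng.standard_normal(200)
a, b, c = fit_strain_model(mag, dist, 10.0 ** log_strain)
```

The site and path biases discussed in the abstract would enter this design matrix as additional indicator columns (e.g., one per crustal-type class), shifting the intercept per group.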

  4. Napa Earthquake impact on water systems

    Science.gov (United States)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24 at 3 a.m. local time, with a magnitude of 6.0. It was the largest earthquake in the San Francisco Bay Area since the 1989 Loma Prieta earthquake. Economic loss topped $1 billion. Winemakers were cleaning up and estimating the damage to tourism; around 15,000 cases of cabernet poured into the garden at the Hess Collection. Earthquakes can raise water pollution risks and could cause a water crisis. California has suffered water shortages in recent years, and understanding this event could help prevent groundwater and surface water pollution from earthquakes. This research gives a clear view of the drinking water system in California and pollution of river systems, as well as an estimate of earthquake impacts on water supply. The Sacramento-San Joaquin River Delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water exports, and salt water intrusion has reduced fresh water outflows. Strong shaking from a nearby earthquake can cause liquefaction of saturated, loose, sandy soils and could potentially damage major delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: a similar event could damage the freshwater supply system.

  5. STEADY-STATE modeling and simulation of pipeline networks for compressible fluids

    Directory of Open Access Journals (Sweden)

    A.L.H. Costa

    1998-12-01

    Full Text Available This paper presents a model and an algorithm for the simulation of pipeline networks with compressible fluids. The model can predict pressures, flow rates, temperatures and gas compositions at any point of the network. Any network configuration can be simulated; the existence of cycles is not an obstacle. Numerical results from simulated data on a proposed network are shown for illustration. The potential of the simulator is explored by the analysis of a pressure relief network, using a stochastic procedure for the evaluation of system performance.
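For the incompressible analogue of such network balancing, the classic single-loop Hardy Cross iteration illustrates how flows are corrected until head losses around a loop cancel; the paper's compressible model additionally tracks temperature and gas composition, which this sketch does not. The pipe resistances and flows below are arbitrary.

```python
import numpy as np

def hardy_cross_loop(r, q0, iters=50):
    """Single-loop Hardy Cross balancing with head loss r * Q|Q| per pipe
    (classic incompressible sketch, not the paper's compressible model)."""
    q = np.array(q0, dtype=float)      # signed flows around the loop
    for _ in range(iters):
        h = r * q * np.abs(q)                          # per-pipe head loss
        dq = -h.sum() / (2.0 * (r * np.abs(q)).sum())  # loop correction
        q += dq                                        # same dq on every pipe
    return q

# two parallel pipes carrying a total of 10 units between the same nodes;
# clockwise sign convention: pipe 1 positive, pipe 2 negative
r = np.array([2.0, 1.0])
q = hardy_cross_loop(r, q0=[6.0, -4.0])
```

Because every pipe in the loop receives the same correction dq, mass conservation at the nodes is preserved at each step while the loop head-loss imbalance is driven to zero.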

  6. Numerical simulation for gas-liquid two-phase flow in pipe networks

    International Nuclear Information System (INIS)

    Li Xiaoyan; Kuang Bo; Zhou Guoliang; Xu Jijun

    1998-01-01

    The characteristics of complex pipe networks cannot be represented directly by single-phase flow models or by gas-liquid two-phase pressure drop and void fraction models alone. Fluid network theory and computer numerical simulation techniques were therefore applied to simulate and compute two-phase flow in pipe networks. The simulation results show that the flow resistance distribution in a two-phase pipe network is non-linear.

  7. Graphical user interface for wireless sensor networks simulator

    Science.gov (United States)

    Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy

    2008-01-01

    Wireless Sensor Networks (WSN) are currently a very popular area of development. They are suited to many applications, from military uses through environmental monitoring, healthcare, home automation, and others. Such networks, when working in a dynamic, ad-hoc model, need effective protocols which must differ from common computer network algorithms. Research on those protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such big networks take much effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator dedicated to WSN studies, especially the evaluation of routing and data link protocols.

  8. Solar eruptions - soil radon - earthquakes

    International Nuclear Information System (INIS)

    Saghatelyan, E.; Petrosyan, L.; Aghbalyan, Yu.; Baburyan, M.; Araratyan, L.

    2004-01-01

    For the first time a new natural phenomenon was established: a contrasting increase in the soil radon level under the influence of solar flares. Such an increase is one of the geochemical indicators of earthquakes. Most researchers consider this a phenomenon of exclusively terrestrial processes. Investigations of the link between earthquakes and solar activity carried out during the last decade in different countries are based on the analysis of statistical data ΣΕ(t) and W(t). As established, the overall seismicity of the Earth and of its separate regions depends on the 11-year cycle of solar activity. The data provided in this paper, based on experimental studies, serve as a first step toward revealing cause-and-effect solar-terrestrial bonds in the series solar eruption - soil radon - earthquakes; further collection of experimental data is needed. For the first time, the elementary lattice of the Hartmann network, contoured by the biolocation method, has been objectified through the radon constituent of terrestrial radiation. As found, radon concentration variations at the Hartmann network nodes determine the dynamics of solar-terrestrial relationships. Of the three types of rapidly running processes conditioned by solar-terrestrial bonds, earthquakes are attributed to rapidly running destructive processes that occur most intensely at the junctures of tectonic massifs, along transform and deep faults. The basic factors provoking the earthquakes are both magnetic-structural effects and a long-term (over 5 months) bombardment of the lithosphere surface by highly energetic particles of corpuscular solar flows, as confirmed by photometry. As a result of the solar flares that occurred from 29 October to 4 November 2003, a sharply contrasting increase in soil radon was established, which is an earthquake indicator for the territory of Yerevan City.
A month and a half later, earthquakes occurred in San Francisco, Iran, Turkey

  9. Use of Ground Motion Simulations of a Historical Earthquake for the Assessment of Past and Future Urban Risks

    Science.gov (United States)

    Kentel, E.; Çelik, A.; karimzadeh Naghshineh, S.; Askan, A.

    2017-12-01

    Erzincan, a city located in eastern Turkey at the conjunction of three active faults, is one of the most hazardous regions in the world. In addition to several historical events, this city has experienced one of the largest earthquakes of the last century: the 27 December 1939 (Ms=8.0) event. With limited knowledge of the tectonic structure at the time, the city center was relocated almost 5 km to the north after the 1939 earthquake, in fact closer to the existing major strike-slip fault. This decision, coupled with poor construction technologies, led to severe damage during a later event that occurred on 13 March 1992 (Mw=6.6). The 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available, whereas the 1992 event was recorded by only 3 nearby stations. There are empirical isoseismal maps from both events, indirectly indicating the spatial distribution of the damage. In this study, we focus on this region and present a multidisciplinary approach to discuss the different components of uncertainty involved in the assessment and mitigation of seismic risk in urban areas. As an initial attempt, ground motion simulation of the 1939 event is performed to obtain the anticipated ground motions and shaking intensities. Using these quantified results along with the spatial distribution of the observed damage, the relocation decision is assessed and suggestions are provided for future large earthquakes to minimize potential earthquake risks.

  10. The key role of eyewitnesses in rapid earthquake impact assessment

    Science.gov (United States)

    Bossu, Rémy; Steed, Robert; Mazet-Roux, Gilles; Roussel, Frédéric; Etivant, Caroline

    2014-05-01

    Uncertainties in rapid earthquake impact models are intrinsically large, even when potential indirect losses (fires, landslides, tsunami…) are excluded. The reason is that they are based on several factors that are themselves difficult to constrain, such as the geographical distribution of shaking intensity, the building-type inventory and the vulnerability functions. The difficulties can be illustrated by two boundary cases. For moderate (around M6) earthquakes, the size of the potential damage zone and the epicentral location uncertainty are of comparable dimension, about 10-15 km. When such an earthquake strikes close to an urban area, as in Athens in 1999 (M5.9), earthquake location uncertainties alone can lead to dramatically different impact scenarios. Furthermore, for moderate magnitudes the overall impact is often controlled by individual accidents, as in Molise, Italy in 2002 (M5.7), in Bingol, Turkey in 2003 (M6.4), or in Christchurch, New Zealand (M6.3), where respectively 23 of 30, 84 of 176 and 115 of 185 of the casualties perished in a single building failure. In contrast, for major earthquakes (M>7) the point-source approximation is no longer valid, and impact assessment requires knowing exactly where the seismic rupture took place, whether it was unilateral, bilateral, etc., and this information is not readily available directly after the earthquake's occurrence. In-situ observations of actual impact provided by eyewitnesses can dramatically reduce impact-model uncertainties. We will present the overall strategy developed at the EMSC, which comprises crowdsourcing and flashsourcing techniques, the development of citizen-operated seismic networks, and the use of social networks to engage with eyewitnesses within minutes of an earthquake's occurrence. For instance, testimonies are collected through online questionnaires available in 32 languages and automatically processed into maps of effects. Geo-located pictures are collected and then

  11. Earthquake Catalogue of the Caucasus

    Science.gov (United States)

    Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.

    2016-12-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms˜7.0, Io=9), the Lechkhumi-Svaneti earthquake of 1350 (Ms˜7.0, Io=9), and the Alaverdi earthquake of 1742 (Ms˜6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms˜6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms˜6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation include Gori 1920; Tabatskuri 1940; Chkhalta 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia ˜25 stations, Azerbaijan ˜35 stations, Armenia ˜14 stations). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus". The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records, improved the locations of the events and recalculated moment magnitudes in order to obtain a unified magnitude

  12. Modified network simulation model with token method of bus access

    Directory of Open Access Journals (Sweden)

    L.V. Stribulevich

    2013-08-01

    Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The characteristics of the network are determined with the developed simulation model, which is based on a state diagram of a network station layer with a priority-processing mechanism, both in the steady state and during the control procedures: initiation of the logical ring, and the entrance to and exit of a station from the logical ring. Findings. A simulation model was developed from which one can obtain the dependencies of the maximum waiting time of a request in the queue for different access classes, as well as of the reaction time and usable bandwidth, on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects the network's operation in the steady state and during the control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model allows defining network characteristics for real-time systems in railway transport.

  13. Numerical simulation of co-seismic deformation of 2011 Japan Mw9.0 earthquake

    Directory of Open Access Journals (Sweden)

    Zhang Keliang

    2011-08-01

    Full Text Available Co-seismic displacements associated with the Mw9.0 earthquake of March 11, 2011 in Japan are numerically simulated on the basis of a finite-fault dislocation model with the PSGRN/PSCMP software. Compared with the inland GPS observations, 90% of the computed eastward, northward and vertical displacements have residuals less than 0.10 m, suggesting that the simulated results can, to a certain extent, be used to demonstrate the co-seismic deformation in the near field. In this model, the maximum eastward displacement increases from 6 m along the coast to 30 m near the epicenter, where the maximum southward displacement is 13 m. The three-dimensional display shows that the vertical displacement reaches a maximum uplift of 14.3 m, which is comparable to the tsunami height in the near-trench region. The maximum subsidence is 5.3 m.

  14. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    Science.gov (United States)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

    This paper presents the methodology, and a practical example, for the application of an optimization process to select earthquake scenarios which best represent the probabilistic earthquake hazard in a given region. The method is based on the simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of the region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear programming formulation is developed in this study. This approach yields a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture, by minimizing the error between the hazard curves derived from the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of events worth 10,000 years, consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking the probabilistic seismic hazard for the city of Tehran as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error weight is also assessed. The methodology could shorten the run time of full probabilistic earthquake studies such as seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less
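The Monte-Carlo generation of such a synthetic catalogue can be illustrated for the magnitude dimension alone. A minimal sketch using inverse-transform sampling from a doubly truncated Gutenberg-Richter distribution; the b-value and magnitude bounds below are illustrative assumptions, not the study's parameters:

```python
import math
import random

def sample_magnitudes(n, b=1.0, m_min=4.0, m_max=8.0, seed=42):
    """Draw n magnitudes from a doubly truncated Gutenberg-Richter
    distribution via inverse-transform sampling of its CDF."""
    rng = random.Random(seed)
    # normalization of the truncated exponential in magnitude
    span = 1.0 - 10.0 ** (-b * (m_max - m_min))
    return [m_min - math.log10(1.0 - rng.random() * span) / b
            for _ in range(n)]

# same order of magnitude as the 84,000-event synthetic set
catalogue = sample_magnitudes(84_000)
```

In a full catalogue generator, location, focal depth and fault parameters would be drawn analogously from their regional seismogenic distributions to complete each synthetic event.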

  15. Network simulation of nonstationary ionic transport through liquid junctions

    International Nuclear Information System (INIS)

    Castilla, J.; Horno, J.

    1993-01-01

    Nonstationary ionic transport across liquid junctions has been studied using Network Thermodynamics. A network model for the time-dependent Nernst-Planck-Poisson system of equations is proposed. With this network model and the electrical circuit simulation program PSPICE, the concentrations, charge density and electrical potentials at short times have been simulated for the binary system NaCl/NaCl. (Author) 13 refs

  16. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and to constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
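The functional form of such a linear memory model can be sketched as follows. Every coefficient below (bytes per neuron, per synapse, per entry of node-local infrastructure) is an invented placeholder, not the published model; the point is only that a term growing with the *total* network size, rather than with the per-core share, is what eventually saturates node memory:

```python
def memory_per_core(n_neurons, n_cores, syn_per_neuron=10_000,
                    m_base=200e6, m_neuron=1_500.0, m_synapse=24.0,
                    m_infra=16.0):
    """Toy linear memory model, in bytes per core.

    The first three terms scale with the per-core share of neurons and
    synapses; the m_infra term models node-local data structures that
    grow with the TOTAL network size and cause memory saturation.
    """
    local = n_neurons / n_cores
    return (m_base
            + local * m_neuron
            + local * syn_per_neuron * m_synapse
            + n_neurons * m_infra)
```

Under weak scaling (doubling neurons and cores together) the per-core share stays constant, yet memory per core still grows through the infrastructure term, which is exactly the bottleneck behavior the abstract describes.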

  17. Seismogeodetic monitoring techniques for tsunami and earthquake early warning and rapid assessment of structural damage

    Science.gov (United States)

    Haase, J. S.; Bock, Y.; Saunders, J. K.; Goldberg, D.; Restrepo, J. I.

    2016-12-01

    As part of an effort to promote the use of NASA-sponsored Earth science information for disaster risk reduction, real-time high-rate seismogeodetic data are being incorporated into early warning and structural monitoring systems. Seismogeodesy combines seismic acceleration and GPS displacement measurements using a tightly-coupled Kalman filter to provide absolute estimates of seismic acceleration, velocity and displacement. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and on tide gauges and deep-ocean buoys for direct measurement of tsunami waves. Real-time seismogeodetic observations at subduction zones allow for more robust and rapid magnitude and slip estimation, increasing warning time in the near-source region. A NASA-funded effort to utilize GPS and seismogeodesy in NOAA's Tsunami Warning Centers in Alaska and Hawaii integrates new modules for picking, locating, and estimating magnitudes and moment tensors for earthquakes into the USGS earthworm environment at the TWCs. In a related project, NASA supports the transition of this research to seismogeodetic tools for disaster preparedness, specifically by implementing GPS and low-cost MEMS accelerometers for structural monitoring in partnership with earthquake engineers. Real-time high-rate seismogeodetic structural monitoring has been implemented on two structures. The first is a parking garage at the Autonomous University of Baja California Faculty of Medicine in Mexicali, not far from the rupture of the 2010 Mw 7.2 El Mayor-Cucapah earthquake, enabled through a UC MEXUS collaboration. The second is the 8-story Geisel Library at the University of California, San Diego (UCSD). The system has also been installed for several proof-of-concept experiments at the UCSD Network for Earthquake Engineering Simulation (NEES) Large High Performance Outdoor Shake Table. 
We present MEMS-based seismogeodetic observations from the 10 June
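At its core, the tightly-coupled combination described above is a Kalman filter in which accelerometer samples drive the prediction step and GPS displacements correct the accumulated drift. A minimal one-dimensional sketch of that idea; the noise parameters are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def seismogeodetic_kf(accel, gps_disp, dt, q=1e-4, r=1e-4):
    """1-D sketch: accelerometer drives prediction, GPS corrects drift.
    Returns the filtered displacement time series."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # [disp, vel] transition
    B = np.array([0.5 * dt ** 2, dt])       # acceleration input mapping
    H = np.array([[1.0, 0.0]])              # GPS observes displacement only
    x = np.zeros(2)                         # state: [displacement, velocity]
    P = np.eye(2)
    Q = q * np.eye(2)
    history = []
    for a, z in zip(accel, gps_disp):
        x = F @ x + B * a                   # predict from accelerometer
        P = F @ P @ F.T + Q
        y = z - (H @ x)[0]                  # GPS innovation (scalar)
        S = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T)[:, 0] / S             # Kalman gain, shape (2,)
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H[0])) @ P
        history.append(x[0])
    return np.array(history)
```

With consistent synthetic inputs (e.g. constant acceleration and the matching quadratic displacement), the filtered displacement tracks the true motion while the GPS correction bounds the drift that pure double integration of acceleration would accumulate.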

  18. Earthquake and nuclear explosion location using the global seismic network

    International Nuclear Information System (INIS)

    Lopez, L.M.

    1983-01-01

    The relocation of nuclear explosions, an aftershock sequence and regional seismicity is addressed by using joint hypocenter determination, Lomnitz' distance domain location, and origin time and earthquake depth determination with local observations. Distance domain and joint hypocenter location are used for a stepwise relocation of nuclear explosions in the USSR. The resulting origin times are 2.5 seconds earlier than those obtained by the ISC. Local travel times from the relocated explosions are compared to the Jeffreys-Bullen tables. P times are found to be faster at 9-30° distances, the largest deviation being around 10 seconds at 13-18°. At these distances S travel times are also faster, by approximately 20 seconds. The 1977 Sumba earthquake sequence is relocated by iterative joint hypocenter determination of the events with the most station reports. Simultaneously determined station corrections are utilized for the relocation of smaller aftershocks. The relocated hypocenters indicate that the aftershocks were initially concentrated along the deep trench. Origin times and depths are recalculated for intermediate-depth and deep earthquakes using local observations in and around the Japanese Islands. It is found that origin time and depth differ systematically from ISC values for intermediate-depth events. Origin times obtained for events below the crust down to 100 km depth are earlier, whereas no general bias seems to exist for origin times of events in the 100-400 km depth range. The recalculated depths for earthquakes shallower than 100 km are shallower than ISC depths. The depth estimates for earthquakes deeper than 100 km were increased by the recalculations

  20. BioNessie - a grid enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X.; Jiang, J.; Ajayi, O.; Gu, X.; Gilbert, D.; Sinnott, R.O.

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scale simulations.

  1. Earthquake Swarm in Armutlu Peninsula, Eastern Marmara Region, Turkey

    Science.gov (United States)

    Yavuz, Evrim; Çaka, Deniz; Tunç, Berna; Serkan Irmak, T.; Woith, Heiko; Cesca, Simone; Lühr, Birger-Gottfried; Barış, Şerif

    2015-04-01

    The most active fault system of Turkey is the North Anatolian Fault Zone, which caused two large earthquakes in 1999. These two earthquakes affected the eastern Marmara region destructively. An unbroken part of the North Anatolian Fault Zone crosses the north of the Armutlu Peninsula in an east-west direction. This branch is also located quite close to Istanbul, a megacity with a large population and major economic and social importance. A new cluster of microseismic activity occurred in the direct vicinity southeast of the Yalova Termal area. Activity started on August 2, 2014 with a series of micro events; then, on August 3, 2014, an event of local magnitude 4.1 occurred, followed by more than 1000 events until August 31, 2014. We therefore tentatively call this a swarm-like activity. Investigation of the micro-earthquake activity of the Armutlu Peninsula has thus become important for understanding the relationship between the occurrence of micro-earthquakes and the tectonic structure of the region. For these reasons, the Armutlu Network (ARNET), installed at the end of 2005, currently equipped with 27 active seismic stations and operated by the Kocaeli University Earth and Space Sciences Research Center (ESSRC) and the Helmholtz-Zentrum Potsdam Deutsches GeoForschungsZentrum (GFZ), is a very dense network able to record even micro-earthquakes in this region. For the 30-day period of August 2-31, 2014, the Kandilli Observatory and Earthquake Research Institute (KOERI) announced 120 local earthquakes with magnitudes ranging between 0.7 and 4.1, while ARNET provided more than 1000 earthquakes for analysis over the same period. In this study, earthquakes of the swarm area and neighboring regions determined by ARNET were investigated. The focal mechanism of the August 3, 2014 22:22:42 (GMT) earthquake of local magnitude (Ml) 4.0 was obtained by moment tensor solution. The solution indicates normal faulting with a dextral component. The obtained focal mechanism solution is

  2. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    Science.gov (United States)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on rapid estimates of the P-wave magnitude, which generally carry large uncertainties and suffer from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following magnitude updates even decreased to M6.3-6.6. The magnitude estimate finally stabilized at M8.1 after about two minutes, which consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would have been theoretically possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help reduce false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.
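For reference, the moment magnitudes (Mw) discussed above, which do not saturate the way P-wave magnitudes do, are defined from the scalar seismic moment M0 by the standard IASPEI relation, sketched here:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from scalar seismic moment M0 in N*m:
    Mw = (2/3) * (log10(M0) - 9.1)  (IASPEI standard form)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# a Tohoku-scale moment of roughly 3.9e22 N*m corresponds to Mw ~ 9.0
print(round(moment_magnitude(3.9e22), 1))  # prints 9.0
```

Because Mw grows only logarithmically with M0, a magnitude underestimate of 1.8 units (M7.2 vs. Mw9.0) corresponds to underestimating the released moment by a factor of several hundred, which is why the early tsunami forecast was so far off.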

  3. Uganda's participation in CTBT activities and earthquake monitoring

    International Nuclear Information System (INIS)

    Tugume, F.A.

    2002-01-01

    Earthquake occurrence in Uganda is mostly related to the East African Rift System. The country's western border lies within the Western branch of this system, while the Eastern branch is only 200 km from its eastern border. Two further tectonic features contribute to seismicity in Uganda: the Aswa shear zone, running from Nimule at the border of Uganda and Sudan to Mount Elgon on the eastern border, and the Katonga fault, which cuts across the country from the foothills of the Rwenzori Mountains to the western side of Lake Victoria. This unique tectonic setting makes Uganda one of the most seismically active countries on the African continent, as exemplified by several destructive earthquakes that have hit the country. For this reason the Government of Uganda is in the process of setting up an earthquake monitoring system, the National Seismological Network, with efficient detectability and efficient data transmission and processing facilities, so that earthquakes in Uganda can be properly assessed and seismic hazard studies of the country conducted. The objectives of this network, the seismic developments of the last two decades and its current status are described

  4. Inversion of GPS-measured coseismic displacements for source parameters of Taiwan earthquake

    Science.gov (United States)

    Lin, J. T.; Chang, W. L.; Hung, H. K.; Yu, W. C.

    2016-12-01

    We present a method for determining earthquake location, focal mechanism and centroid moment tensor from coseismic surface displacements derived from daily and high-rate GPS measurements. Unlike the commonly used dislocation models, in which the fault geometry is solved nonlinearly, our method adopts a point-source approach that evaluates these parameters robustly and efficiently without a priori fault information, and can thus provide constraints for subsequent finite-source modeling of fault slip. In this study, we focus on the resolving power of GPS data for moderate (Mw=6.0-7.0) earthquakes in Taiwan, and four earthquakes were investigated in detail: the March 27, 2013 Nantou (Mw=6.0), the June 2, 2013 Nantou (Mw=6.3), the October 31, 2013 Ruisui (Mw=6.3), and the March 31, 2002 Hualien (ML=6.8) earthquakes. All these events were recorded by the Taiwan continuous GPS network with data sampling rates of 30 seconds and 1 Hz; the Mw6.3 Ruisui earthquake was additionally recorded by another local GPS network with a sampling rate of 20 Hz. Our inverted focal mechanisms for all these earthquakes are consistent with the results of GCMT and USGS, which evaluate source parameters from the dynamic information in seismic waves. We also successfully resolved the source parameters of the Mw6.3 Ruisui earthquake within only 10 seconds of the earthquake occurrence, demonstrating the potential of high-rate GPS data for earthquake early warning and real-time determination of earthquake source parameters.
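In a point-source approach of this kind, once a trial source location fixes the Green's functions, the moment-tensor components follow from a linear least-squares fit to the static displacements. A generic sketch with synthetic data (this is not the study's actual inversion code; the random Green's matrix merely stands in for elastodynamic Green's functions):

```python
import numpy as np

def invert_point_source(G, d):
    """Least-squares estimate of the six moment-tensor components m
    from coseismic displacements d, given Green's functions G (d ~ G m).
    The source location is held fixed here; searching over trial
    locations is what makes the full problem nonlinear."""
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m

rng = np.random.default_rng(0)
G = rng.standard_normal((30, 6))          # 10 stations x 3 components
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.3])
d = G @ m_true + 1e-3 * rng.standard_normal(30)  # displacements + noise
m_est = invert_point_source(G, d)
```

Because the fit is linear, it is fast enough to repeat over a grid of trial locations and depths, which is what permits source parameters to be resolved within seconds of an event.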

  5. Electrostatically actuated resonant switches for earthquake detection

    KAUST Repository

    Ramini, Abdallah H.

    2013-04-01

    The modeling and design of electrostatically actuated resonant switches (EARS) for earthquake and seismic applications are presented. The basic concept is to operate an electrically actuated resonator close to instability bands of frequency, where it is forced to collapse (pull-in) if operated within these bands. By careful tuning, the resonator can be made to enter the instability zone upon detection of an earthquake signal, thereby pulling in as a switch. Such a switching action can serve useful functions, such as shutting off gas pipelines in the case of earthquakes, or can be used to activate a network of sensors for seismic activity recording in health monitoring applications. By placing the resonator on a printed circuit board (PCB) with a natural frequency close to that of the earthquake's frequency, we show a significant improvement in the detection limit of the EARS, lowering it considerably to less than 60% of that of the EARS by itself without the PCB. © 2013 IEEE.

  6. Computational Approach for Improving Three-Dimensional Sub-Surface Earth Structure for Regional Earthquake Hazard Simulations in the San Francisco Bay Area

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-25

    In our Exascale Computing Project (ECP) we seek to simulate earthquake ground motions at much higher frequencies than is currently possible. Previous simulations in the SFBA were limited to 0.5-1 Hz or lower (Aagaard et al. 2008, 2010), while we have recently simulated the response up to 5 Hz. In order to improve confidence in simulated ground motions, we must accurately represent the three-dimensional (3D) sub-surface material properties that govern seismic wave propagation over a broad region. We are currently focusing on the San Francisco Bay Area (SFBA) with a Cartesian domain of size 120 x 80 x 35 km, but this area will be expanded to cover a larger domain. Currently, the United States Geological Survey (USGS) has a 3D model of the SFBA for seismic simulations. However, this model suffers from two serious shortcomings relative to our application: 1) it does not fit most of the available low-frequency (< 1 Hz) seismic waveforms from moderate (magnitude M 3.5-5.0) earthquakes; and 2) it is represented at much lower resolution than necessary for the high-frequency simulations (> 5 Hz) we seek to perform. The current model will serve as a starting model for full-waveform tomography based on 3D sensitivity kernels. This report serves as the deliverable for our ECP FY2017 Quarter 4 milestone "Computational approach to developing model updates". We summarize the current state of 3D seismic simulations in the SFBA and demonstrate the performance of the USGS 3D model for a few selected paths. We show the available open-source waveform data sets for model updates, based on moderate earthquakes recorded in the region. We present a plan for improving the 3D model utilizing the available data and further development of our SW4 application. We project how the model could be improved and present options for further improvements focused on the shallow geotechnical layers using dense passive recordings of ambient and human-induced noise.

  7. A gene network simulator to assess reverse engineering algorithms.

    Science.gov (United States)

    Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2009-03-01

    In the context of reverse engineering of biological networks, simulators are helpful to test and compare the accuracy of different reverse-engineering approaches in a variety of experimental conditions. A novel gene-network simulator is presented that resembles some of the main features of transcriptional regulatory networks related to topology, interaction among regulators of transcription, and expression dynamics. The simulator generates network topology according to the current knowledge of biological network organization, including scale-free distribution of the connectivity and clustering coefficient independent of the number of nodes in the network. It uses fuzzy logic to represent interactions among the regulators of each gene, integrated with differential equations to generate continuous data, comparable to real data for variety and dynamic complexity. Finally, the simulator accounts for saturation in the response to regulation and transcription activation thresholds and shows robustness to perturbations. It therefore provides a reliable and versatile test bed for reverse engineering algorithms applied to microarray data. Since the simulator describes regulatory interactions and expression dynamics as two distinct, although interconnected aspects of regulation, it can also be used to test reverse engineering approaches that use both microarray and protein-protein interaction data in the process of learning. A first software release is available at http://www.dei.unipd.it/~dicamill/software/netsim as an R programming language package.

  8. GPS Monitoring of Surface Change During and Following the Fortuitous Occurrence of the M(sub w) = 7.3 Landers Earthquake in our Network

    Science.gov (United States)

    Miller, M. Meghan

    1998-01-01

    Accomplishments: (1) Continued GPS monitoring of surface change during and following the fortuitous occurrence of the M(sub w) = 7.3 Landers earthquake in our network, in order to characterize earthquake dynamics and the accelerated activity of related faults as far as hundreds of kilometers along strike. (2) Integration of the geodetic constraints into consistent kinematic descriptions of the deformation field that can in turn be used to characterize the processes that drive geodynamics, including seismic-cycle dynamics. In 1991, we installed and occupied a high-precision GPS geodetic network to measure transform-related deformation that is partitioned from the Pacific - North America plate boundary northeastward through the Mojave Desert, via the Eastern California shear zone, to the Walker Lane. The onset of the M(sub w) = 7.3 June 28, 1992, Landers, California, earthquake sequence within this network poses unique opportunities for continued monitoring of regional surface deformation related to the culmination of a major seismic cycle, characterization of the dynamic behavior of continental lithosphere during the seismic sequence, and post-seismic transient deformation. During the last year, we have reprocessed all three previous epochs for which JPL fiducial-free point-positioning products are available (and are queued for the remaining needed products), completed two field campaigns monitoring approx. 20 sites (October 1995 and September 1996), begun modeling by developing a finite-element mesh based on network station locations, and developed manuscripts dealing with both the Landers-related transient deformation at the latitude of Lone Pine and the velocity field of the whole experiment. We are currently deploying a 1997 observation campaign (June 1997). We use GPS geodetic studies to characterize deformation in the Mojave Desert region and related structural domains to the north, together with geophysical modeling of lithospheric behavior. The modeling is constrained by our

  9. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group, in collaboration with the University of Buenos Aires, in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real time, achieving a rejection factor on the order of 40000x, and has real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a “TCP Incast”-type network pathology which TCP cannot handle efficiently. A credits system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios in the real network. Network simulation with hybrid flows: speedups and accuracy, combined. • For intensive network traffic, discrete-event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid Flow simul...
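The credit mechanism described above can be illustrated with a toy sketch (a hypothetical simplification, not the actual ATLAS TDAQ implementation): a requester holds a fixed number of credits, each outstanding query consumes one, and further queries are held back until a response returns a credit, which bounds the simultaneous fan-in that causes TCP incast.

```python
from collections import deque

class CreditGate:
    """Toy credit-based request limiter (illustrative only): at most
    `credits` queries may be outstanding; extra requests queue until a
    response frees a credit."""

    def __init__(self, credits):
        self.available = credits
        self.pending = deque()
        self.in_flight = 0

    def request(self, query):
        if self.available > 0:
            self.available -= 1
            self.in_flight += 1
            return query          # sent immediately
        self.pending.append(query)
        return None               # held back: avoids an incast burst

    def on_response(self):
        self.in_flight -= 1
        if self.pending:
            self.in_flight += 1
            return self.pending.popleft()  # reuse the freed credit
        self.available += 1
        return None

gate = CreditGate(credits=2)
sent = [gate.request(q) for q in ["q1", "q2", "q3", "q4"]]
# "q1" and "q2" go out immediately; "q3" and "q4" wait for credits.
```

The key design point is that the queue drains one query per response, so the number of servers answering simultaneously never exceeds the credit budget.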

  10. Along-strike Variations in the Himalayas Illuminated by the Aftershock Sequence of the 2015 Mw 7.8 Gorkha Earthquake Using the NAMASTE Local Seismic Network

    Science.gov (United States)

    Mendoza, M.; Ghosh, A.; Karplus, M. S.; Nabelek, J.; Sapkota, S. N.; Adhikari, L. B.; Klemperer, S. L.; Velasco, A. A.

    2016-12-01

    As a result of the 2015 Mw 7.8 Gorkha earthquake, more than 8,000 people were killed by a combination of infrastructure failure and triggered landslides. This earthquake produced 4 m of peak co-seismic slip as the fault ruptured 130 km east under densely populated cities such as Kathmandu. To understand earthquake dynamics in this part of the Himalayas and to help mitigate similar calamities from the next destructive event, it is imperative to study earthquake activity in detail and improve our understanding of the source and structural complexities. In response to the Gorkha event, multiple institutions developed and deployed a 10-month-long dense seismic network called NAMASTE. It blanketed a 27,650 km2 area, mainly covering the rupture area of the Gorkha earthquake, in order to capture the dynamic sequence of aftershock behavior. The network consisted of a mix of 45 broadband, short-period, and strong-motion sensors, with an average spacing of 20 km. From the first 6 months of data, starting approximately 1.5 after the mainshock, we develop a robust catalog containing over 3,000 precise earthquake locations and local magnitudes ranging between 0.3 and 4.9. The catalog has a magnitude of completeness of 1.5 and an overall low b-value of 0.78. Using the HypoDD algorithm, we relocate earthquake hypocenters with high precision and thus illustrate the fault geometry down to depths of 25 km, where we infer the location of the gently dipping Main Frontal Thrust (MFT). Above the MFT, the aftershocks illuminate complex structure produced by relatively steeply dipping faults. Interestingly, we observe a sharp along-strike change in the seismicity pattern: the eastern part of the aftershock area is significantly more active than the western part. The change in seismicity may reflect structural and/or frictional lateral heterogeneity in this part of the Himalayan fault system. Such along-strike variations play an important role in rupture complexities and
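A b-value like the one quoted above is conventionally estimated with the Aki-Utsu maximum-likelihood formula, b = log10(e) / (mean(M) − (Mc − ΔM/2)), applied to events above the magnitude of completeness Mc. The sketch below illustrates this standard estimator on a synthetic Gutenberg-Richter catalog (the data are illustrative, not the NAMASTE catalog, and the paper may use a different estimator):

```python
import math
import random

def b_value(mags, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= mc,
    with Utsu's correction dm/2 for magnitude binning."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Illustrative synthetic catalog drawn from a Gutenberg-Richter
# (exponential) distribution with a true b-value of 0.78.
random.seed(0)
beta = 0.78 * math.log(10)  # rate of the exponential magnitude excess
mags = [1.5 + random.expovariate(beta) for _ in range(5000)]
b_est = b_value(mags, mc=1.5, dm=0.0)  # should recover roughly 0.78
```

With 5,000 events the estimate scatters around the true value by roughly b/sqrt(N), which is why small aftershock subsets give unstable b-values.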

  11. WDM Systems and Networks Modeling, Simulation, Design and Engineering

    CERN Document Server

    Ellinas, Georgios; Roudas, Ioannis

    2012-01-01

    WDM Systems and Networks: Modeling, Simulation, Design and Engineering provides readers with the basic skills, concepts, and design techniques used to begin design and engineering of optical communication systems and networks at various layers. The latest semi-analytical system simulation techniques are applied to optical WDM systems and networks, and a review of the various current areas of optical communications is presented. Simulation is mixed with experimental verification and engineering to present the industry as well as state-of-the-art research. This contributed volume is divided into three parts, accommodating different readers interested in various types of networks and applications. The first part of the book presents modeling approaches and simulation tools mainly for the physical layer (including transmission effects, devices, subsystems, and systems), whereas the second part features more engineering/design issues for various types of optical systems including ULH, access, and in-building system...

  12. Transforming network simulation data to semantic data for network attack planning

    CSIR Research Space (South Africa)

    Chan, Ke Fai Peter

    2017-03-01

    Full Text Available A study was performed, using the Common Open Research Emulator (CORE), to generate the necessary network simulation data. The simulation data were analysed and then transformed into linked data. The result of the transformation is a data file that adheres...

  13. Far field tsunami simulations of the 1755 Lisbon earthquake: Implications for tsunami hazard to the U.S. East Coast and the Caribbean

    Science.gov (United States)

    Barkan, R.; ten Brink, Uri S.; Lin, J.

    2009-01-01

    The great Lisbon earthquake of November 1st, 1755, with an estimated moment magnitude of 8.5-9.0, was the most destructive earthquake in European history. The associated tsunami run-up was reported to have reached 5-15 m along the Portuguese and Moroccan coasts, and the run-up was significant at the Azores and Madeira Island. Run-up reports from a trans-oceanic tsunami were documented in the Caribbean, Brazil and Newfoundland (Canada). No reports were documented along the U.S. East Coast. Many attempts have been made to characterize the 1755 Lisbon earthquake source using geophysical surveys and modeling the near-field earthquake intensity and tsunami effects. Studying far-field effects, as presented in this paper, is advantageous in establishing constraints on source location and strike orientation because trans-oceanic tsunamis are less influenced by near-source bathymetry and are unaffected by triggered submarine landslides at the source. Source location, fault orientation and bathymetry are the main elements governing transatlantic tsunami propagation to sites along the U.S. East Coast, much more than distance from the source and continental shelf width. Results of our far- and near-field tsunami simulations based on relative amplitude comparison limit the earthquake source area to a region located south of the Gorringe Bank in the center of the Horseshoe Plain. This is in contrast with previously suggested sources such as the Marquês de Pombal Fault and Gulf of Cádiz Fault, which are farther east of the Horseshoe Plain. The earthquake was likely to be a thrust event on a fault striking ~345° and dipping to the ENE, as opposed to the suggested earthquake source of the Gorringe Bank Fault, which trends NE-SW. Gorringe Bank, the Madeira-Tore Rise (MTR), and the Azores appear to have acted as topographic scatterers for tsunami energy, shielding most of the U.S. East Coast from the 1755 Lisbon tsunami. Additional simulations to assess tsunami hazard to the U.S. East

  14. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which uses a construction algorithm as a sub-algorithm to determine edges and route the demand. Performance for different versions of the algorithm is reported in terms of runtime and quality of the solutions. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
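The heuristic named above follows the standard simulated-annealing template: propose a neighboring solution, always accept improvements, accept deteriorations with probability exp(−Δ/T), and cool T geometrically. A minimal generic sketch on a toy edge-selection instance (not the paper's two-level algorithm; the cost function and neighborhood here are placeholders):

```python
import math
import random

def simulated_annealing(cost, init, neighbor, t0=1.0, alpha=0.995, steps=5000):
    """Generic simulated-annealing skeleton: accept worse solutions
    with probability exp(-delta/T), cooling T geometrically."""
    x, fx = init, cost(init)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy            # accept the move
            if fx < fbest:
                best, fbest = x, fx  # track the incumbent
        t *= alpha                   # geometric cooling schedule
    return best, fbest

# Toy instance: select a subset of 10 candidate edges whose size is 4.
random.seed(1)
cost = lambda s: (sum(s) - 4) ** 2
init = [random.randint(0, 1) for _ in range(10)]

def neighbor(s):
    s = s[:]
    s[random.randrange(len(s))] ^= 1  # flip one edge in or out
    return s

best, fbest = simulated_annealing(cost, init, neighbor)
```

In a real hierarchical design, `cost` would combine edge installation and routing costs, and `neighbor` would perturb the edge set before a construction sub-algorithm re-routes the demand, as the abstract describes.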

  15. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice of large-scale transmission network analysis, the distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in the distribution networks, it became necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations.

  16. Earthquake Risk Mitigation in the Tokyo Metropolitan area

    Science.gov (United States)

    Hirata, N.; Sakai, S.; Kasahara, K.; Nakagawa, S.; Nanjo, K.; Panayotopoulos, Y.; Tsuruoka, H.

    2010-12-01

    Seismic disaster risk mitigation in urban areas is a challenge that requires collaboration across scientific, engineering, and social-science fields. Examples of collaborative efforts include research on detailed plate structure with identification of all significant faults; developing dense seismic networks; strong ground motion prediction, which uses information on near-surface seismic site effects and fault models; earthquake-resistant and earthquake-proof structures; and cross-discipline infrastructure for effective risk mitigation immediately after catastrophic events. A risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because this plate boundary has caused past mega-thrust earthquakes, such as the 1703 Genroku earthquake (magnitude M8.0) and the 1923 Kanto earthquake (M7.9), which caused 105,000 fatalities. A M7 or greater (M7+) earthquake in this area at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that such an M7+ earthquake would cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) of economic loss. This earthquake is evaluated by the Earthquake Research Committee of Japan to have a probability of occurrence of 70% within 30 years. In order to mitigate disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area (2007-2011) was launched in collaboration with scientists, engineers, and social scientists at institutions nationwide. The results obtained in the respective fields will be integrated by project termination to improve information on strategy assessment for seismic risk mitigation in the Tokyo metropolitan area. In this talk, we give an outline of our project as an example of collaborative research on earthquake risk mitigation. Discussion is extended to our effort in progress and

  17. Development of a hybrid earthquake early warning system based on single sensor technique

    International Nuclear Information System (INIS)

    Gravirov, V.V.; Kislov, K.V.

    2012-01-01

    There are two approaches to earthquake early warning systems: the method based on a network of seismic stations and the single-sensor method. Both have advantages and drawbacks. Current systems rely on high-density seismic networks. Attempts at implementing techniques based on the single-station principle encounter difficulties in the identification of earthquakes in noise. The noise may be very diverse, from stationary to impulsive. A promising line of research is to develop hybrid warning systems in which single sensors are incorporated into the overall early warning network. This would combine the advantages of both approaches and help reduce the radius of the hazardous zone in which no earthquake warning can be produced. The main problems are highlighted and their solutions are discussed. The system is implemented to include three detection processes in parallel. The first is based on the study of the co-occurrence matrix of the signal wavelet transform. The second consists in using change-point detection in a random process and signal detection in a moving time window. The third uses artificial neural networks. Finally, a decision rule is applied to carry out the final earthquake detection and estimate its reliability. (author)

  18. Evaluation of steam generator tube integrity during earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Kusakabe, Takaya; Kodama, Toshio [Mitsubishi Heavy Industries Ltd., Kobe (Japan). Kobe Shipyard and Machinery Works; Takamatsu, Hiroshi; Matsunaga, Tomoya

    1999-07-01

    This report presents an experimental study on the strength of PWR steam generator (SG) tubes with various defects under cyclic loads which simulate earthquakes. The tests were done using the same SG tubing as in actual plants, with axial and circumferential defects of various lengths and depths. In the tests, straight tubes were loaded with cyclic bending moments to simulate earthquake waves, and the number of load cycles at which tube leakage started or tube burst occurred was counted. The test results showed that even tubes with very long EDM-machined cracks of more than 80% depth could withstand the maximum earthquake, and tubes with corrosion cracks were far stronger than those. Thus the integrity of SG tubes with minute potential defects was demonstrated. (author)

  19. Comparison of Structurally Controlled Landslide Hazard Simulation to the Co-seismic Landslides Caused by the M 7.2 2013 Bohol Earthquake.

    Science.gov (United States)

    Galang, J. A. M. B.; Eco, R. C.; Lagmay, A. M. A.

    2014-12-01

    The M_w 7.2 October 15, 2013 Bohol earthquake is one of the more destructive earthquakes to hit the Philippines in the 21st century. The epicenter was located in Sagbayan municipality, central Bohol, and the event was generated by a previously unmapped reverse fault called the "Inabanga Fault". The earthquake resulted in 209 fatalities and over 57 million USD worth of damage. The earthquake generated co-seismic landslides, most of which were related to fault structures. Unlike rainfall-induced landslides, co-seismic landslides are triggered without warning. Preparation for this type of landslide relies heavily on the identification of fracture-related slope instability. To mitigate the impacts of co-seismic landslide hazards, morpho-structural orientations of discontinuity sets were mapped using remote sensing techniques with the aid of a Digital Terrain Model (DTM) obtained in 2012. The DTM used is an IFSAR-derived image with a 5-meter pixel resolution and approximately 0.5-meter vertical accuracy. Coltop 3D software was then used to identify similar structures, including measurement of their dips and dip directions. The chosen discontinuity sets were then keyed into Matterocking software to identify potential rock slide zones due to planar or wedge discontinuities. After identifying the structurally controlled unstable slopes, the rock mass propagation extent of the possible rock slides was simulated using Conefall. Separately, a manual landslide inventory was performed using post-earthquake satellite images and LIDAR. The simulation results were compared to this inventory, which identified at least 873 landslides. Of the 873 landslides, 786 (90%) intersect the simulated structurally controlled landslide hazard areas of Bohol. The results show the potential of this method to identify co-seismic landslide hazard areas for disaster mitigation. Along with computer methods to simulate shallow landslides, and debris flow

  20. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    Energy Technology Data Exchange (ETDEWEB)

    Duru, Kenneth, E-mail: kduru@stanford.edu [Department of Geophysics, Stanford University, Stanford, CA (United States); Dunham, Eric M. [Department of Geophysics, Stanford University, Stanford, CA (United States); Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA (United States)

    2016-01-15

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture
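The stability argument above rests on the summation-by-parts (SBP) property: a derivative approximation D = H⁻¹Q with H diagonal and positive definite and Q + Qᵀ = B = diag(−1, 0, …, 0, 1), which mimics integration by parts discretely and yields the energy estimates mentioned in the abstract. A second-order textbook construction (the paper uses sixth-order interior stencils; this low-order sketch only illustrates the property):

```python
def sbp_d1(n, h):
    """Second-order diagonal-norm SBP first-derivative operator
    D = H^{-1} Q on n grid points with spacing h (standard textbook
    construction, not the sixth-order operators of the paper)."""
    # Diagonal norm matrix H: trapezoidal quadrature weights.
    H = [h] * n
    H[0] = H[-1] = h / 2.0
    # Q: central differences inside, one-sided rows at the boundaries.
    Q = [[0.0] * n for _ in range(n)]
    Q[0][0], Q[0][1] = -0.5, 0.5
    Q[-1][-2], Q[-1][-1] = -0.5, 0.5
    for i in range(1, n - 1):
        Q[i][i - 1], Q[i][i + 1] = -0.5, 0.5
    D = [[Q[i][j] / H[i] for j in range(n)] for i in range(n)]
    return D, H, Q

n, h = 8, 0.1
D, H, Q = sbp_d1(n, h)
# SBP property: Q + Q^T = B = diag(-1, 0, ..., 0, 1), the discrete
# analogue of the boundary terms produced by integration by parts.
B = [[Q[i][j] + Q[j][i] for j in range(n)] for i in range(n)]
```

Because all interior terms of Q + Qᵀ cancel, the discrete energy rate depends only on boundary data, which is what lets penalty (SAT) boundary treatments be proven stable.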

  1. Seismic-resistant design of nuclear power stations in Japan, earthquake country. Lessons learned from Chuetsu-oki earthquake

    International Nuclear Information System (INIS)

    Irikura, Kojiro

    2008-01-01

    A new assessment (back-check) of earthquake-proof safety was being conducted at the Kashiwazaki-Kariwa Nuclear Power Plants, Tokyo Electric Co., in response to a request based on the guideline for reactor evaluation for seismic-resistant design revised in 2006, when the 2007 Chuetsu-oki Earthquake occurred and brought about an unexpectedly large tremor in this area: although the magnitude of the earthquake was only 6.8, the intensity of ground motion exceeded the assumed level by more than 2.5-fold. This paper introduces how and why the guideline for seismic-resistant design of nuclear facilities was revised in 2006, outlines the Chuetsu-oki Earthquake, and presents preliminary findings and lessons learned from the earthquake. The paper specifically discusses (1) how geologically active faults that were overlooked this time may be identified in advance, (2) how adequate models of the seismic source may be constructed so as to capture its characteristics, and (3) how strong ground motion may be estimated for the vibration level of a possibly overlooked fault. (S. Ohno)

  2. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments) which has proven effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads, and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
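The lookahead idea can be made concrete for a single FCFS server: no departure can be emitted before the job currently in service finishes, and if the server is idle, any future departure is still at least one minimal service time away. A minimal sketch (a hypothetical simplification of the appointments method, assuming service times are bounded below by `min_service`):

```python
def next_departure_bound(now, busy, remaining, min_service):
    """Earliest simulation time at which this FCFS server could emit
    its next departure. If a job is in service, that job must finish
    first; if the server is idle, any future arrival still needs at
    least min_service time units. A downstream processor may safely
    simulate up to this bound (the 'appointment')."""
    if busy:
        return now + remaining
    return now + min_service

# A downstream neighbor may advance its clock to the returned bound
# without risking an out-of-order event:
t_busy = next_departure_bound(10.0, True, 2.5, 1.0)   # busy server
t_idle = next_departure_bound(10.0, False, 0.0, 1.0)  # idle server
```

The larger this bound relative to the current clock, the more events neighbors can process per synchronization, which is exactly the lookahead-quality versus computation-cost tradeoff the abstract mentions.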

  3. Real-Time-Simulation of IEEE-5-Bus Network on OPAL-RT-OP4510 Simulator

    Science.gov (United States)

    Atul Bhandakkar, Anjali; Mathew, Lini, Dr.

    2018-03-01

    Real-time simulator tools employ high-performance computing technologies and offer improved performance; they are widely used for the design and improvement of electrical systems. With the advancement of software tools like MATLAB/SIMULINK, with its Real-Time Workshop (RTW) and Real-Time Windows Target (RTWT), real-time simulators are used extensively in many engineering fields, such as industry, education, and research institutions. OPAL-RT-OP4510 is a real-time simulator used in both industry and academia. In this paper, real-time simulation of the IEEE-5-Bus network is carried out by means of the OPAL-RT-OP4510 with a CRO and other hardware. The performance of the network is observed with the introduction of faults at various locations. The waveforms of voltage, current, and active and reactive power are observed in the MATLAB simulation environment and on the CRO. In addition, Load Flow Analysis (LFA) of the IEEE-5-Bus network is computed using the MATLAB/Simulink power-gui load flow tool.

  4. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  5. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    Science.gov (United States)

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and the amplitudes and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) units (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI units (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.

  6. Toward Designing a Quantum Key Distribution Network Simulation Model

    OpenAIRE

    Miralem Mehic; Peppino Fazio; Miroslav Voznak; Erik Chromy

    2016-01-01

    As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we described the design of a simplified simulation environment of the quantum key distribution network with multiple links and nodes. In such a simulation environment, we analyzed several ...

  7. Characteristic behavior of water radon associated with Wenchuan and Lushan earthquakes along Longmenshan fault

    International Nuclear Information System (INIS)

    Ye, Qing; Singh, Ramesh P.; He, Anhua; Ji, Shouwen; Liu, Chunguo

    2015-01-01

    In China, numerous subsurface and surface water well and spring parameters are monitored through a large network of stations distributed across China, sponsored by the China Earthquake Administration (CEA). All the data from this network are managed by the China Earthquake Network Center (CENC). In this paper, we have used data (water radon, gas radon, water level, water temperature) available through CENC for the period 2002–2014 and studied the behavior and characteristics of water radon (222Rn) [Rn(w)]. The observed parameters were complemented by rainfall data retrieved from the Tropical Rainfall Measuring Mission (TRMM) satellite. Our detailed analysis shows pronounced changes in the observed parameters (especially water and gas radon) prior to earthquakes. The changes in water radon, ground water level, and rainfall show characteristic behavior for the Wenchuan and Lushan earthquakes. Long-term analysis of water radon and water level at various locations around the epicenters of the two major earthquakes along the Longmenshan fault shows positive and negative relations of water radon and water level prior to these earthquakes. It is difficult to find any trend of water radon, or change in water radon pattern, associated with these two earthquakes that could serve as a reliable earthquake precursor. Changes in water radon concentrations from one location to another may be associated with changes in the ground water regime and geological settings in the epicentral and surrounding regions. - Highlights: • Long-term water radon measured in China during 2003–2014 at six stations around the Longmenshan fault. • Water radon shows characteristic behavior associated with the Wenchuan and Lushan earthquakes. • Water radon shows a one-to-one relation with rainfall and ground water level variations. • Sharp increases or decreases in water radon concentrations are found a few days prior to the earthquakes

  8. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    OpenAIRE

    V. V. Plotkin

    2003-01-01

    A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected an ionospheric perturbation before this earthquake using GPS data received from the CORS network. Systematic decreases of ionospheric total electron content during the two days before the earthquake onset were observed at a set of stations near the earthquake location, and probably in a region of about 1000 km from the epicenter. This result is consistent with t...

  9. On the Diurnal Periodicity of Representative Earthquakes in Greece: Comparison of Data from Different Observation Systems

    Science.gov (United States)

    Desherevskii, A. V.; Sidorin, A. Ya.

    2017-12-01

    Owing to the initiation of the Hellenic Unified Seismic Network (HUSN) in late 2007, the quality of observation had significantly improved by 2011. For example, the representative magnitude level considerably decreased and the number of annually recorded events increased. The new observational system greatly expanded the possibilities for studying regularities in seismicity. In view of this, the authors revisited their studies of the diurnal periodicity of representative earthquakes in Greece, which had been revealed earlier in the earthquake catalog before 2011. We use 18 samples of earthquakes of different magnitudes taken from the catalog of Greek earthquakes from 2011 to June 2016, derive a series of the number of earthquakes for each of them, and calculate its average diurnal course. To increase the reliability of the results, we compared the data for two regions. With a high degree of statistical significance, we find that no diurnal periodicity is present for well-recorded (representative) earthquakes. This finding differs from the estimates obtained earlier from an analysis of the catalog of earthquakes in the same area for 1995-2004 and 2005-2010, i.e., before the initiation of the Hellenic Unified Seismic Network. The new results are consistent with the hypothesis of noise discrimination (observational selection), which explains the apparent diurnal variation of earthquakes by the different sensitivity of the seismic network during daytime and nighttime periods.
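A generic way to test a catalog for diurnal periodicity is to bin event origin times by hour of day and compare the counts against a uniform distribution with a Pearson chi-square statistic (a sketch of the general approach only; the authors' exact statistical procedure may differ):

```python
def diurnal_chi2(event_hours, bins=24):
    """Pearson chi-square statistic of event counts per hour of day
    against a uniform diurnal distribution. Large values indicate a
    diurnal pattern; values near zero indicate none."""
    counts = [0] * bins
    for hour in event_hours:
        counts[int(hour) % bins] += 1
    expected = len(event_hours) / bins  # uniform null hypothesis
    return sum((c - expected) ** 2 / expected for c in counts)

# Illustrative: a perfectly uniform catalog (10 events in each hour)
# gives a statistic of exactly zero.
uniform = [h for h in range(24) for _ in range(10)]
stat = diurnal_chi2(uniform)
```

The statistic would be compared with a chi-square critical value at 23 degrees of freedom; applying it only to events above the completeness magnitude is what separates a real periodicity from the daytime-noise selection effect discussed above.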

  10. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  11. The numerical simulation study of the dynamic evolutionary processes in an earthquake cycle on the Longmen Shan Fault

    Science.gov (United States)

    Tao, Wei; Shen, Zheng-Kang; Zhang, Yong

    2016-04-01

    concentration areas in the model: one is located in the mid and upper crust on the hanging wall, where the strain energy can be released by permanent deformation such as folding, and the other lies in the deep part of the fault, where the strain energy can be released by earthquakes. (5) The whole earthquake dynamic process is clearly reflected in the evolution of the strain energy increments over the stages of the earthquake cycle. In the inter-seismic period, the strain energy accumulates relatively slowly; prior to the earthquake, the fault is locked and the strain energy accumulates fast, and some of the strain energy is released in the upper crust on the hanging wall of the fault. In the coseismic stage, the strain energy is released rapidly along the fault. In the postseismic stage, the slow accumulation of strain recovers to the interseismic rate within around one hundred years. The simulation study in this thesis helps to better understand the earthquake dynamic process.

  12. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at placing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship, with explained physical processes, and a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. To this end, modern and new methods and technologies have to be adopted. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical effort is necessary to try to understand the physics of the earthquake.

  13. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    Full Text Available The structure and material properties of polymer networks can depend sensitively on changes in the environment. There is a great deal of progress in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications, such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of details. Macroscopic theories that describe the network systems with the help of continuous fields are suited to study effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of the self-consistent field theory, which includes the interactions, connectivity, and the entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method that allows the study of the formation of nano domains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example for microscopic modeling of stimuli sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  14. Evaluation of the Pseudostatic Analyses of Earth Dams Using FE Simulation and Observed Earthquake-Induced Deformations: Case Studies of Upper San Fernando and Kitayama Dams

    Directory of Open Access Journals (Sweden)

    Tohid Akhlaghi

    2014-01-01

    Full Text Available The accuracy of the pseudostatic approach is governed by the accuracy with which the simple pseudostatic inertial forces represent the complex dynamic inertial forces that actually exist in an earthquake. In this study, the Upper San Fernando and Kitayama earth dams, which were designed using the pseudostatic approach and damaged during the 1971 San Fernando and 1995 Kobe earthquakes, were investigated and analyzed. Finite element models of the dams were prepared based on the detailed available data and the results of in situ and laboratory material tests. Dynamic analyses were conducted to simulate the earthquake-induced deformations of the dams using the Plaxis finite element code. The pseudostatic seismic coefficients used in the design and analyses of the dams were then compared with the seismic coefficients obtained from dynamic analyses of the simulated models as well as with other available pseudostatic correlations. Based on these comparisons, the accuracy and reliability of the pseudostatic seismic coefficients are evaluated and discussed.

  15. A dynamic model of liquid containers (tanks) with legs and probability analysis of response to simulated earthquake

    International Nuclear Information System (INIS)

    Fujita, Takafumi; Shimosaka, Haruo

    1980-01-01

    This paper describes the results of an analysis of the response of liquid containers (tanks) with legs to earthquakes. Sine-wave excitation was applied experimentally to model tanks with legs, and a model with one degree of freedom proved sufficient for the analysis. To investigate the reason for this, the response multiplication factor of tank displacement was analysed. The model tanks were rectangular and cylindrical in shape, and the analyses were based on a potential theory. The experimental studies show that the attenuation of oscillation was non-linear; a model analysis of this non-linear attenuation was also performed, and good agreement between the experimental and analytical results was recognized. A probability analysis of the response to earthquakes with simulated shock waves was performed using the above-mentioned model, and good agreement between experiment and analysis was obtained. (Kato, T.)

  16. Slip reactivation during the 2011 Tohoku earthquake: Dynamic rupture and ground motion simulations

    Science.gov (United States)

    Galvez, P.; Dalguer, L. A.

    2013-12-01

    The 2011 Mw 9 Tohoku earthquake generated such a vast amount of geophysical data that the spatial-temporal evolution of the rupture process of a megathrust event can be studied with unprecedented resolution. Joint source inversion of teleseismic, near-source strong-motion, and coseismic geodetic data, e.g., [Lee et al., 2011], reveals evidence of a slip reactivation process in areas of very large slip. Slip snapshots of this source model show that after about 40 seconds the big patch above the hypocenter experienced an additional push of slip (reactivation) towards the trench. These two slip episodes exhibited by source inversions can create two well-distinguished waveform envelopes in the ground motion pattern. In fact, seismograms of the Japanese KiK-net network contain this pattern: for instance, a seismic station near Miyagi (MYGH10) recorded two main wavefronts separated by about 40 seconds. A possible physical mechanism to explain the slip reactivation is a thermal pressurization process occurring in the fault zone. Indeed, Kanamori & Heaton (2000) proposed that frictional melting and fluid pressurization can play a key role in the rupture dynamics of giant earthquakes. If fluid exists in a fault zone, an increase of temperature can raise the pore pressure enough to significantly reduce the frictional strength. Therefore, during a large earthquake, areas of big slip undergoing strong thermal pressurization may experience a second drop of the frictional strength after a certain amount of slip has accumulated. Following this principle, we adopt a slip-weakening friction law and prescribe a certain maximum slip after which the friction coefficient linearly drops down again. This friction law has been implemented in the latest unstructured spectral element code SPECFEM3D (Peter et al., 2012).
The non-planar subduction interface has been taken into account, with a big asperity patch placed on it
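    The two-stage weakening described above can be sketched as a piecewise-linear friction coefficient with a second strength drop once slip exceeds a prescribed threshold. All parameter values below are illustrative placeholders, not those used in the study:

    ```python
    def friction_coefficient(slip, mu_s=0.6, mu_d1=0.45, mu_d2=0.2,
                             d_c1=1.0, d_react=20.0, d_c2=5.0):
        """Two-stage slip-weakening friction law (illustrative parameters).

        Stage 1: linear weakening from mu_s to mu_d1 over slip d_c1.
        Stage 2: once slip reaches the prescribed threshold d_react
        (mimicking thermal pressurization at large slip), the coefficient
        linearly drops again from mu_d1 to mu_d2 over slip d_c2.
        """
        if slip < d_c1:
            return mu_s - (mu_s - mu_d1) * slip / d_c1
        if slip < d_react:
            return mu_d1
        if slip < d_react + d_c2:
            return mu_d1 - (mu_d1 - mu_d2) * (slip - d_react) / d_c2
        return mu_d2
    ```

    In a dynamic rupture code the second drop re-energizes patches that have already slipped past the threshold, producing the delayed second wavefront.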

  17. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2017-05-01

    Full Text Available Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  18. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  19. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    Full Text Available As research in quantum key distribution network technologies grows larger and more complex, highly accurate and scalable simulation technologies become important for assessing practical feasibility and foreseeing difficulties in the practical implementation of theoretical achievements. In this paper, we describe the design of a simplified simulation environment for a quantum key distribution network with multiple links and nodes. In this environment, we analyzed several routing protocols in terms of the number of sent routing packets, goodput, and Packet Delivery Ratio of the data traffic flow, using the NS-3 simulator.

  20. A simulation of earthquake induced undrained pore pressure ...

    Indian Academy of Sciences (India)



  1. Sensitivity of broad-band ground-motion simulations to earthquake source and Earth structure variations: an application to the Messina Straits (Italy)

    KAUST Repository

    Imperatori, W.; Mai, Paul Martin

    2012-01-01

    We find that the ground-motion variability associated with differences in crustal models is constant and becomes important at intermediate and long periods. On the other hand, source-induced ground-motion variability is negligible at long periods and strong at intermediate to short periods. Using our source-modelling approach and the three different 1-D structural models, we investigate shaking levels for the 1908 Mw 7.1 Messina earthquake, adopting a recently proposed model for fault geometry and final slip. Our simulations suggest that peak levels in Messina and Reggio Calabria must have reached 0.6-0.7 g during this earthquake.

  2. Analyses of computer programs for the probabilistic estimation of design earthquake and seismological characteristics of the Korean Peninsula

    International Nuclear Information System (INIS)

    Lee, Gi Hwa

    1997-11-01

    The purpose of the present study is to develop predictive equations from simulated motions which are adequate for the Korean Peninsula, and to analyze and utilize computer programs for the probabilistic estimation of design earthquakes. In part I of the report, computer programs for the probabilistic estimation of design earthquakes are analyzed and applied to seismic hazard characterization of the Korean Peninsula. In part II, available instrumental earthquake records are analyzed to estimate earthquake source characteristics and medium properties, which are incorporated into the simulation process, and earthquake records are simulated using the estimated parameters. Finally, predictive equations constructed from the simulations are given in terms of magnitude and hypocentral distance.

  3. Applicability and economic efficiency of earthquake retrofit measures on existing buildings in Bucharest, Romania

    Science.gov (United States)

    Bostenaru, M.

    2009-04-01

    The research discussed in this contribution covers two aspects: the economic efficiency of seismic retrofit measures, and their applicability. The research was limited to housing buildings in Bucharest, the capital of Romania. Strong earthquakes affect Bucharest about three times a century; the damaging earthquakes of the 20th century occurred in 1940 and 1977, and other strong earthquakes occurred in 1986 and 1990. Since the topic is broad, the building types to serve the further research were determined first. For this purpose, the common 20th-century building types of Bucharest, Romania, were investigated. For each building type, reports were written covering the earthquake-resilient features, the seismic deficiencies, the damage patterns, and the retrofit measures, with each feature listed per building element. A first result of the research was an integrated system that includes the later aspects in the first planning steps: already during the building survey, attention has to be paid to how a building is subdivided, so that the economic efficiency of the planned action can be determined. This led to the definition of the `retrofit elements`. In a first step, the characteristics through which these retrofit elements (for example, a column or the wall part between two windows) can be recognized in the building survey were defined; in a further step, the retrofit measures that can be connected to them. Diagrams were built to visualize these findings. For each retrofit element and the corresponding measure, the costs were calculated. These retrofit elements and their associated measures were also modelled for simulation with the structural software, so that the benefit of the measures could be determined. In the part regarding economic efficiency, benefits and costs of retrofit measures had to be compared, so the improvement in the rigidity

  4. Composite Earthquake Catalog of the Yellow Sea for Seismic Hazard Studies

    Science.gov (United States)

    Kang, S. Y.; Kim, K. H.; LI, Z.; Hao, T.

    2017-12-01

    The Yellow Sea (a.k.a. West Sea in Korea) is an epicontinental and semi-closed sea located between Korea and China. Recent earthquakes in the Yellow Sea, including, but not limited to, the Seogyuckryulbi-do (1 April 2014, magnitude 5.1), Heuksan-do (21 April 2013, magnitude 4.9), and Baekryung-do (18 May 2013, magnitude 4.9) earthquakes, and the earthquake swarm in the Boryung offshore region in 2013, remind us of the seismic hazards affecting east Asia. This series of earthquakes in the Yellow Sea raised numerous questions. Unfortunately, both governments have trouble monitoring seismicity in the Yellow Sea because earthquakes occur beyond their seismic networks. For example, the epicenters of the magnitude 5.1 earthquake in the Seogyuckryulbi-do region in 2014 reported by the Korea Meteorological Administration and the China Earthquake Administration differed by approximately 20 km. This illustrates the difficulty of monitoring and locating earthquakes in the region, despite the huge efforts made by both governments. A joint effort is required not only to overcome the limits posed by political boundaries and geographical location but also to study the seismicity and the underground structures responsible for it. Although the well-established and developing seismic networks in Korea and China have provided an unprecedented amount and quality of seismic data, the high-quality catalog is limited to the last few decades, far shorter than a major earthquake cycle. We also note that the earthquake catalog from either country is biased toward its own territory and cannot provide a complete picture of seismicity in the Yellow Sea. In order to understand seismic hazard and tectonics in the Yellow Sea, a composite earthquake catalog has been developed. We gathered earthquake information covering the last 5,000 years from various sources. There are good reasons to believe that some listings refer to the same earthquake but with different source parameters.
We established criteria in order to provide consistent

  5. Tsunami simulation of 2011 Tohoku-Oki Earthquake. Evaluation of difference in tsunami wave pressure acting around Fukushima Daiichi Nuclear Power Station and Fukushima Daini Nuclear Power Station among different tsunami source models

    International Nuclear Information System (INIS)

    Fujihara, Satoru; Hashimoto, Norihiko; Korenaga, Mariko; Tamiya, Takahiro

    2016-01-01

    Since the 2011 Tohoku-Oki Earthquake, evaluations based on tsunami simulation have played a very important role in promoting tsunami disaster prevention measures against mega-thrust earthquakes. When considering tsunami disaster prevention measures based on knowledge obtained from tsunami simulations, it is important to carefully examine the choice of tsunami source model: there are various ways to set the tsunami source model, and considerable differences in tsunami behavior can be expected among them. In this study, we carry out a tsunami simulation of the 2011 Tohoku-Oki Earthquake around the Fukushima Daiichi (I) and Fukushima Daini (II) Nuclear Power Plants in Fukushima Prefecture, Japan, using several tsunami source models, and evaluate the differences in tsunami behavior during the inundation process. The results show that, for an incoming tsunami inundating an inland region, there are considerable relative differences in the maximum tsunami height and wave pressure. This suggests that, depending on the tsunami source model, misleading information could enter the promotion of tsunami disaster prevention measures against mega-thrust earthquakes. (author)

  6. Fracture Simulation of Highly Crosslinked Polymer Networks: Triglyceride-Based Adhesives

    Science.gov (United States)

    Lorenz, Christian; Stevens, Mark; Wool, Richard

    2003-03-01

    The ACRES program at the U. of Delaware has shown that triglyceride oils derived from plants are a favorable alternative to traditional adhesives. The triglyceride networks are formed from an initial mixture of styrene monomers, free-radical initiators, and triglycerides. We have performed simulations to study the effect of the physical composition and characteristics of the triglyceride network on its strength. A coarse-grained, bead-spring model of the triglyceride system is used: the average triglyceride consists of 6 beads per chain, the styrenes are represented as single beads, and the initiators are two-bead chains. The polymer network is formed using an off-lattice 3D Monte Carlo simulation, in which the initiators activate the styrene and triglyceride reactive sites and bonds are then randomly formed between the styrene and active triglyceride monomers, producing a highly crosslinked polymer network. Molecular dynamics simulations of the network under tensile and shear strains were performed to determine the strength as a function of the network composition. The relationship between the network structure and its strength will also be discussed.
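    The random bond-formation step of such a crosslinking simulation can be sketched as follows. This toy version ignores the spatial proximity and chain connectivity handled by the real off-lattice Monte Carlo simulation, and all names are ours:

    ```python
    import random

    def form_crosslinks(n_styrene, n_active_sites, n_bonds, seed=1):
        """Toy Monte Carlo bonding step: randomly pair styrene monomers
        with activated triglyceride reactive sites until n_bonds distinct
        bonds exist. Requires n_bonds <= n_styrene * n_active_sites."""
        rng = random.Random(seed)
        bonds = set()
        while len(bonds) < n_bonds:
            s = rng.randrange(n_styrene)       # pick a styrene bead
            t = rng.randrange(n_active_sites)  # pick an active site
            bonds.add((s, t))                  # duplicates are ignored
        return bonds
    ```

    The resulting bond set defines the crosslink topology on which mechanical tests (tensile or shear strain) could then be run.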

  7. A Mw 6.3 earthquake scenario in the city of Nice (southeast France): ground motion simulations

    Science.gov (United States)

    Salichon, Jérome; Kohrs-Sansorny, Carine; Bertrand, Etienne; Courboulex, Françoise

    2010-07-01

    The southern Alps-Ligurian basin junction is one of the most seismically active zones of western Europe, characterized by constant microseismicity and moderate-size events. We studied the case of an offshore Mw 6.3 earthquake located at the place where two moderate-size events (Mw 4.5) occurred recently and where a morphotectonic feature has been detected by a bathymetric survey. We used a stochastic empirical Green's function (EGF) summation method to produce a population of realistic accelerograms on rock and soil sites in the city of Nice. The ground motion simulations are calibrated on a rock site with a set of ground motion prediction equations (GMPEs) in order to estimate a reasonable stress-drop ratio between the February 25, 2001, Mw 4.5 event taken as an EGF and the target earthquake. Our results show that the combination of the GMPE and EGF techniques is an interesting tool for site-specific strong ground motion estimation.

  8. EVALUATING AUSTRALIAN FOOTBALL LEAGUE PLAYER CONTRIBUTIONS USING INTERACTIVE NETWORK SIMULATION

    Directory of Open Access Journals (Sweden)

    Jonathan Sargent

    2013-03-01

    Full Text Available This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on the final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions, so that each player's match contribution, and hence an optimal team, could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
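    Eigenvector centrality on a symmetric interaction matrix, as used for the player ratings above, can be computed by power iteration. A minimal sketch in Python (not the authors' Visual Basic implementation); the diagonal shift is our addition to guarantee convergence on bipartite-like matrices:

    ```python
    def eigenvector_centrality(A, iters=1000, tol=1e-12):
        """Power iteration for the dominant eigenvector of a symmetric,
        non-negative interaction matrix A (list of lists). Iterating with
        (A + I) instead of A shifts all eigenvalues by +1, which breaks
        ties in magnitude and ensures convergence."""
        n = len(A)
        x = [1.0 / n] * n
        for _ in range(iters):
            y = [x[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
            norm = sum(v * v for v in y) ** 0.5
            if norm == 0.0:  # matrix of all zeros: no interactions at all
                return x
            y = [v / norm for v in y]
            if max(abs(a - b) for a, b in zip(x, y)) < tol:
                return y
            x = y
        return x
    ```

    Players whose interactions involve other highly connected players receive disproportionately high ratings, which is exactly the "rewards interactions with more prominent players" property described above.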

  9. Challenges to communicate risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2014-12-01

    The awareness of natural hazards has been up-trending in recent years. In particular, this is true for earthquakes, which increase in frequency and magnitude in regions that normally do not experience seismic activity. In fact, one of the major concerns for many communities and businesses is that humans today seem to cause earthquakes due to large-scale shale gas production, dewatering and flooding of mines and deep geothermal power production. Accordingly, without opposing any of these technologies it should be a priority of earth scientists who are researching natural hazards to communicate earthquake risks. This presentation discusses the challenges that earth scientists are facing to properly communicate earthquake risks, in light of the fact that human-caused earthquakes are an environmental change affecting only some communities and businesses. Communication channels may range from research papers, books and class room lectures to outreach events and programs, popular media events or even social media networks.

  10. Ranking important nodes in complex networks by simulated annealing

    International Nuclear Information System (INIS)

    Sun Yu; Yao Pei-Yang; Shen Jian; Zhong Yun; Wan Lu-Jun

    2017-01-01

    In this paper, a new method based on simulated annealing for ranking important nodes in complex networks is presented. First, the concept of an importance sequence (IS), describing the relative importance of nodes in a complex network, is defined. Then, a measure to evaluate the reasonability of an IS is designed. By treating an IS as a state of the network and the measure of its reasonability as the energy of that state, the method finds the ground state of the network by simulated annealing; in other words, it constructs a most reasonable IS. The results of experiments on real and artificial networks show that this ranking method is not only effective but also applicable to different kinds of complex networks. (paper)
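    The core loop of such a simulated-annealing ranking can be sketched as follows. The paper's reasonability measure is not reproduced here, so an arbitrary energy callable stands in for it; all names and parameters are ours:

    ```python
    import math
    import random

    def anneal_ranking(energy, n, t0=1.0, cooling=0.995, steps=5000, seed=0):
        """Rank n nodes by simulated annealing over importance sequences
        (permutations of node indices). `energy(seq)` scores how
        unreasonable a sequence is (lower is better); any callable works.
        Returns the best sequence found and its energy."""
        rng = random.Random(seed)
        seq = list(range(n))
        e = energy(seq)
        best_seq, best_e = seq[:], e
        t = t0
        for _ in range(steps):
            i, j = rng.randrange(n), rng.randrange(n)
            seq[i], seq[j] = seq[j], seq[i]       # propose swapping two ranks
            e_new = energy(seq)
            if e_new <= e or rng.random() < math.exp((e - e_new) / t):
                e = e_new                         # accept (Metropolis rule)
                if e < best_e:
                    best_seq, best_e = seq[:], e
            else:
                seq[i], seq[j] = seq[j], seq[i]   # reject: undo the swap
            t *= cooling                          # geometric cooling schedule
        return best_seq, best_e
    ```

    Treating the sequence as the "state" and the reasonability measure as the "energy" is exactly the mapping the abstract describes; the annealing schedule lets the search escape local minima early on.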

  11. Physics of Earthquake Rupture Propagation

    Science.gov (United States)

    Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh

    2018-05-01

    A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.

  12. Epistemic uncertainty in California-wide synthetic seismicity simulations

    Science.gov (United States)

    Pollitz, Fred F.

    2011-01-01

    The generation of seismicity catalogs on synthetic fault networks holds the promise of providing key inputs into probabilistic seismic-hazard analysis, for example, the coefficient of variation, mean recurrence time as a function of magnitude, the probability of fault-to-fault ruptures, and conditional probabilities for foreshock-mainshock triggering. I employ a seismicity simulator that includes the following ingredients: static stress transfer, viscoelastic relaxation of the lower crust and mantle, and vertical stratification of elastic and viscoelastic material properties. A cascade mechanism combined with a simple Coulomb failure criterion is used to determine the initiation, propagation, and termination of synthetic ruptures. It is employed on a 3D fault network provided by Steve Ward (unpublished data, 2009) for the Southern California Earthquake Center (SCEC) Earthquake Simulators Group. This all-California fault network, initially consisting of 8000 patches, each ∼12 square kilometers in size, has been rediscretized into smaller patches, each ∼1 square kilometer in size, in order to simulate the evolution of California seismicity and crustal stress at magnitude M∼5-8. The resulting synthetic seismicity catalogs, spanning 30,000 yr and about half a million events, are evaluated with magnitude-frequency and magnitude-area statistics. For a priori choices of fault-slip rates and mean stress drops, I explore the sensitivity of various constructs to input parameters, particularly mantle viscosity. Slip maps obtained for the southern San Andreas fault show that the ability of segment boundaries to inhibit slip across the boundaries (e.g., to prevent multisegment ruptures) is systematically affected by mantle viscosity.
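    The cascade mechanism with a simple Coulomb-style failure criterion can be sketched as a stress-redistribution loop over fault patches. This is a toy illustration under our own assumptions (uniform stress drop, a prescribed transfer matrix), not the simulator described above:

    ```python
    def cascade_rupture(stress, strength, transfer, drop=1.0):
        """Toy rupture cascade: a patch fails when its stress reaches its
        strength; the stress drop of a failed patch is redistributed to
        unfailed patches via the static transfer matrix, which may trigger
        further failures. Returns the set of failed patches and the final
        stress state."""
        n = len(stress)
        stress = list(stress)
        failed = set()
        queue = [i for i in range(n) if stress[i] >= strength[i]]
        while queue:
            i = queue.pop()
            if i in failed:
                continue
            failed.add(i)
            stress[i] -= drop                    # coseismic stress drop
            for j in range(n):
                if j not in failed:
                    stress[j] += transfer[i][j] * drop
                    if stress[j] >= strength[j]:
                        queue.append(j)          # static triggering
        return failed, stress
    ```

    The rupture terminates when no remaining patch exceeds its Coulomb threshold; the size of the failed set plays the role of the event's rupture area.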

  13. Optimization of blanking process using neural network simulation

    International Nuclear Information System (INIS)

    Hambli, R.

    2005-01-01

    The present work describes a methodology using the finite element method and neural network simulation to predict the optimum punch-die clearance during sheet metal blanking processes. A damage model is used to describe crack initiation and propagation in the sheet. The proposed approach combines predictive finite element and neural network modeling of the leading blanking parameters. Numerical results obtained by finite element computation, including damage and fracture modeling, were used to train the developed simulation environment based on a back-propagation neural network model. A comparative study between the numerical and experimental results shows good agreement. (author)

  14. Variations of Background Seismic Noise Before Strong Earthquakes, Kamchatka.

    Science.gov (United States)

    Kasimova, V.; Kopylova, G.; Lyubushin, A.

    2017-12-01

The network of broadband seismic stations of the Geophysical Service (Russian Academy of Sciences) operates on the Kamchatka Peninsula in the Russian Far East. We used continuous records from Z-channels at 21 stations to create background seismic noise time series for 2011-2017. Average daily parameters of the multifractal singularity spectra were calculated at each station from 1-minute records. Maps and graphs of their spatial distribution and temporal changes were constructed at time scales from days to several years. We then analyzed the coherent behavior of these statistical time series. The technique included splitting the seismic network into groups of stations, taking into account the coastal effect, the network configuration, and the main tectonic elements of Kamchatka. Time series of the median noise parameters for each group of stations were then constructed, and frequency-time diagrams of the evolution of the spectral measure of coherent behavior of the four time series were analyzed. The time intervals and frequency bands of maximum values, indicating increased coherence in the changes of all statistics, were evaluated. Strong earthquakes with magnitudes M = 6.9-8.3 occurred near the Kamchatka Peninsula during the observations. Synchronous variations of the background noise parameters and an increase in the coherent behavior of the median statistical parameters were observed within 3-9 months before two earthquakes in 2013 (February 28, Mw = 6.9; May 24, Mw = 8.3) and within 3-6 months before the earthquake of January 30, 2016, Mw = 7.2. The maximum effect of increased coherence in the range of periods of 4-5.5 days corresponds to the time of preparation of the two strong earthquakes in 2013 and their aftershock processes. Peculiarities in the changes of statistical parameters at the preparation stages of strong earthquakes indicate attenuation of high-amplitude outliers and a loss of multifractal properties in ...

  15. Fault structure in the Nepal Himalaya as illuminated by aftershocks of the 2015 Mw 7.8 Gorkha earthquake recorded by the local NAMASTE network

    Science.gov (United States)

    Ghosh, A.; Mendoza, M.; LI, B.; Karplus, M. S.; Nabelek, J.; Sapkota, S. N.; Adhikari, L. B.; Klemperer, S. L.; Velasco, A. A.

    2017-12-01

The geometry of the Main Himalayan Thrust (MHT), which accommodates the majority of the plate motion between the Indian and Eurasian plates, has been debated for a long time. Different models have been proposed, some of them significantly different from others. Obtaining a well-constrained geometry of the MHT is challenging, mainly because of the lack of high-quality data and the inherent low resolution and non-uniqueness of the models. We used a dense local seismic network - NAMASTE - to record and analyze a prolific aftershock sequence following the 2015 Mw 7.8 Gorkha earthquake, and determined the geometry of the MHT constrained by precisely located aftershocks. We detected and located more than 15,000 aftershocks of the Gorkha earthquake using Hypoinverse and then relatively relocated them using the HypoDD algorithm. We selected about 7,000 particularly well-constrained earthquakes to analyze the geometry of the megathrust. They illuminate fault structure in this part of the Himalaya in unprecedented detail. The MHT shows two subhorizontal planes connected by a duplex structure, which is characterized by multiple steeply dipping planes. In addition, we used four large-aperture continental-scale seismic arrays at teleseismic distances to backproject high-frequency seismic radiation, and we combined all arrays to significantly increase resolution and detectability. We imaged rupture propagation of the mainshock, showing complexity near the end of the rupture that may have helped arrest the rupture to the east. Furthermore, we continuously scanned teleseismic data for two weeks starting immediately after the mainshock to detect and locate aftershock activity using only the arrays. The spatial pattern of the aftershocks was similar to that of the existing global catalog based on conventional seismic networks and techniques. However, we detected more than twice as many aftershocks using the array technique compared to the global catalog, including many ...

  16. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2017-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits to design arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  17. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2018-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits to design arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  18. Cooperative New Madrid seismic network

    International Nuclear Information System (INIS)

    Herrmann, R.B.; Johnston, A.C.

    1990-01-01

The development and installation of components of a U.S. National Seismic Network (USNSN) in the eastern United States provides the basis for long-term monitoring of eastern earthquakes. The broad geographical extent of this network provides a uniform monitoring threshold for identifying and locating earthquakes, and it will provide excellent data for defining some seismic source parameters of larger earthquakes, such as depth and focal mechanism, through waveform modeling techniques. By itself, however, it will not be able to define the scaling of high-frequency ground motions, since it will not focus on any of the major seismic zones in the eastern U.S. Recognizing this need, and making use of a one-time availability of funds for studying New Madrid earthquakes, Saint Louis University and Memphis State University successfully competed for funding in a special USGS RFP for New Madrid studies. The purpose of the proposal is to upgrade the present seismic networks run by these institutions in order to focus on defining the seismotectonics and ground-motion scaling in the New Madrid Seismic Zone. The proposed network is designed both to complement the U.S. National Seismic Network and to make use of the capabilities of its communication links.

  19. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a stronger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This is one reason why there have been almost no simulations in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys the RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value: smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with a thickness of 40 km overriding a Maxwell viscoelastic half-space.
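The computational advantage of the memory variable method can be illustrated with a toy standard-linear-solid example: the stress from the hereditary integral (whose cost grows with the stored history) matches the stress from a single memory variable obeying a 1st-order ODE (no stored history). All parameter values here are assumed for illustration.

```python
import numpy as np

# Standard linear solid (SLS): relaxation function
#   psi(t) = k_inf + (k0 - k_inf) * exp(-t / tau)
k0, kinf, tau = 2.0, 1.0, 0.5     # assumed moduli and relaxation time
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
eps = np.sin(t)                   # imposed strain history

# (1) Hereditary integral: sigma(t) = integral of psi(t - t') * deps/dt'(t').
#     The whole strain-rate history must be kept: O(N^2) work overall.
psi = kinf + (k0 - kinf) * np.exp(-t / tau)
deps = np.gradient(eps, dt)
sigma_her = np.array([np.sum(psi[: i + 1][::-1] * deps[: i + 1]) * dt
                      for i in range(len(t))])

# (2) Memory variable zeta: sigma = k0*eps - zeta, with the 1st-order ODE
#     dzeta/dt = ((k0 - kinf)*eps - zeta) / tau.  O(N), no stored history.
zeta = 0.0
sigma_mem = np.empty_like(t)
for i in range(len(t)):
    sigma_mem[i] = k0 * eps[i] - zeta
    zeta += dt * ((k0 - kinf) * eps[i] - zeta) / tau

err = float(np.max(np.abs(sigma_her - sigma_mem)))
```

The two stress histories agree to within discretization error, while the memory-variable update needs only the current value of zeta at each step, which is exactly the saving the record describes for viscoelastic cycle simulations.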

  20. Numerical simulations (2D) on the influence of pre-existing local structures and seismic source characteristics in earthquake-volcano interactions

    Science.gov (United States)

    Farías, Cristian; Galván, Boris; Miller, Stephen A.

    2017-09-01

Earthquake triggering of hydrothermal and volcanic systems is ubiquitous, but the underlying processes driving these systems are not well understood. We numerically investigate the influence of seismic wave interaction with volcanic systems, simulated as a trapped, high-pressure fluid reservoir connected to a fluid-filled fault system in a 2-D poroelastic medium. Different orientations and earthquake magnitudes are studied to quantify dynamic and static stress, and pore pressure changes induced by a seismic event. Results show that although the response of the system is mainly dominated by characteristics of the radiated seismic waves, local structures can also play an important role in the system dynamics. The fluid reservoir affects the seismic wave front, distorts the static overpressure pattern induced by the earthquake, and concentrates the kinetic energy of the incoming wave on its boundaries. The static volumetric stress pattern inside the fault system is also affected by the local structures. Our results show that local faults play an important role in earthquake-volcano system dynamics by concentrating kinetic energy inside and acting as wave guides with breakwater-like behavior. This generates sudden changes in pore pressure, volumetric expansion, and stress gradients. Local structures also influence the regional Coulomb yield function. Our results show that local structures affect the dynamics of volcanic and hydrothermal systems and should be taken into account when investigating triggering of these systems by nearby or distant earthquakes.

  1. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers, and other network equipment, computer emulations (e.g., virtual machines), and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  2. Simulation of a Dispersive Tsunami due to the 2016 El Salvador-Nicaragua Outer-Rise Earthquake (Mw 6.9)

    Science.gov (United States)

    Tanioka, Yuichiro; Ramirez, Amilcar Geovanny Cabrera; Yamanaka, Yusuke

    2018-01-01

The 2016 El Salvador-Nicaragua outer-rise earthquake (Mw 6.9) generated a small tsunami observed at the ocean bottom pressure sensor, DART 32411, in the Pacific Ocean off Central America. The observed dispersive tsunami is well simulated using the linear Boussinesq equations. From the dispersive character of the tsunami waveform, the fault length and width of the outer-rise event are estimated to be 30 and 15 km, respectively. The estimated seismic moment of 3.16 × 10^19 Nm is the same as the estimate in the Global CMT catalog. The dispersive character of the tsunami in the deep ocean caused by the 2016 outer-rise El Salvador-Nicaragua earthquake could thus constrain the fault size and the slip amount, or the seismic moment, of the event.

  3. Simulation of a Dispersive Tsunami due to the 2016 El Salvador-Nicaragua Outer-Rise Earthquake (Mw 6.9)

    Science.gov (United States)

    Tanioka, Yuichiro; Ramirez, Amilcar Geovanny Cabrera; Yamanaka, Yusuke

    2018-04-01

The 2016 El Salvador-Nicaragua outer-rise earthquake (Mw 6.9) generated a small tsunami observed at the ocean bottom pressure sensor, DART 32411, in the Pacific Ocean off Central America. The observed dispersive tsunami is well simulated using the linear Boussinesq equations. From the dispersive character of the tsunami waveform, the fault length and width of the outer-rise event are estimated to be 30 and 15 km, respectively. The estimated seismic moment of 3.16 × 10^19 Nm is the same as the estimate in the Global CMT catalog. The dispersive character of the tsunami in the deep ocean caused by the 2016 outer-rise El Salvador-Nicaragua earthquake could thus constrain the fault size and the slip amount, or the seismic moment, of the event.
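The dispersion that makes this constraint possible follows from the full linear dispersion relation, c(k) = sqrt((g/k) tanh(kh)): the short wavelengths radiated by a compact fault travel measurably slower than the long-wave speed sqrt(gh). A quick sketch, with an assumed representative ocean depth:

```python
import math

g = 9.81        # gravity (m/s^2)
h = 4000.0      # assumed uniform deep-ocean depth (m)

def phase_speed(wavelength_m):
    """Full linear phase speed c = sqrt((g/k) * tanh(k*h)) for wavenumber k."""
    k = 2 * math.pi / wavelength_m
    return math.sqrt((g / k) * math.tanh(k * h))

c_long = math.sqrt(g * h)       # non-dispersive long-wave limit sqrt(g*h)
c_30km = phase_speed(30e3)      # wavelength comparable to a ~30 km fault
c_300km = phase_speed(300e3)    # wavelength of a much larger source
```

Because c_30km is noticeably below c_300km, a small outer-rise source produces a visibly dispersed waveform at a distant pressure sensor, whereas a very long fault would not; matching that frequency-dependent arrival is what constrains the source dimensions.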

  4. The HayWired Earthquake Scenario—Earthquake Hazards

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of ...

  5. Building with Earthquakes in Mind

    Science.gov (United States)

    Mangieri, Nicholas

    2016-04-01

Earthquakes are some of the most elusive and destructive disasters humans interact with on this planet. Engineering structures to withstand earthquake shaking is critical to ensure minimal loss of life and property. However, the majority of buildings today in areas not traditionally considered earthquake-prone are not built to withstand this devastating force. Understanding basic earthquake engineering principles and the effect of limited resources helps students grasp the challenge that lies ahead. The solution can be found in retrofitting existing buildings with proper reinforcements and designs to deal with this deadly disaster. In this project, the students were challenged to construct a basic structure, using limited resources, that could withstand a simulated tremor on an earthquake shake table. Groups of students had to work together to creatively manage their resources and ideas to design the most feasible and realistic type of building. This activity provided a wealth of opportunities for the students to learn more about a type of disaster they do not experience in this part of the country. Because most buildings in New York City were not designed to withstand earthquake shaking, the students gained an appreciation for how difficult it would be to prepare every structure in the city for this type of event.

  6. Studies of the subsurface effects of earthquakes

    International Nuclear Information System (INIS)

    Marine, I.W.

    1980-01-01

As part of the National Terminal Waste Storage Program, the Savannah River Laboratory is conducting a series of studies on the subsurface effects of earthquakes. This report summarizes three subcontracted studies. (1) Earthquake damage to underground facilities: the purpose of this study was to document damage and nondamage caused by earthquakes to tunnels and shallow underground openings; to mines and other deep openings; and to wells, shafts, and other vertical facilities. (2) Earthquake-related displacement fields near underground facilities: the study included an analysis of block motion, an analysis of the dependence of displacement on the orientation and distance of joints from the earthquake source, and displacement related to distance and depth near a causative fault as a result of various shapes, depths, and senses of movement on the causative fault. (3) Numerical simulation of earthquake effects on tunnels for generic nuclear waste repositories: the objective of this study was to use numerical modeling to determine under what conditions seismic waves might cause instability of an underground opening or create fracturing that would increase the permeability of the rock mass.

  7. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexity. The simulation results show that a group of robots can demonstrate highly collaborative behavior on complex terrain. This study could provide a software simulation platform for testing the individual and group capabilities of robots before they are designed and manufactured, and therefore has the potential to reduce the cost and improve the efficiency of robot design and building.
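The unsupervised-learning component can be illustrated with a minimal Kohonen self-organizing map: a 1-D chain of units competitively learns to cover 2-D sensor readings. The data, map size, and hyperparameters below are toy assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D sensor readings the robot group should organize around (assumed data).
data = rng.uniform(0, 1, size=(500, 2))

n_units = 10                                   # a 1-D Kohonen map of 10 units
W = rng.uniform(0, 1, size=(n_units, 2))       # random initial weights

def train(W, data, epochs=20, lr0=0.5, sigma0=3.0):
    """Classic SOM updates: find the best-matching unit, pull its neighborhood."""
    W = W.copy()
    for ep in range(epochs):
        lr = lr0 * (1 - ep / epochs)                    # decaying learning rate
        sigma = max(sigma0 * (1 - ep / epochs), 0.5)    # shrinking neighborhood
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            d = np.abs(np.arange(len(W)) - bmu)         # distance along the chain
            hfn = np.exp(-(d ** 2) / (2 * sigma ** 2))  # neighborhood function
            W += lr * hfn[:, None] * (x - W)
    return W

def qerr(W, data):
    """Quantization error: mean distance from each sample to its BMU."""
    return float(np.mean(np.min(np.linalg.norm(data[:, None, :] - W[None], axis=2), axis=1)))

W_trained = train(W, data)
```

After training, the chain of units folds itself over the sampled region and the quantization error drops below that of the random initialization, which is the sense in which the map has "learned" the terrain without supervision.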

  8. Method to Determine Appropriate Source Models of Large Earthquakes Including Tsunami Earthquakes for Tsunami Early Warning in Central America

    Science.gov (United States)

    Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro

    2017-08-01

Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from the W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua in Central America: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Therefore, for tsunami early warning purposes, our method should be able to estimate a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
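The role of depth-dependent rigidity can be sketched with the standard moment relations M0 = mu·L·W·D and Mw = (log10 M0 − 9.1)/1.5: for the same seismic moment, a low-rigidity shallow fault must slip much more, which is why shallow "tsunami earthquakes" like the 1992 Nicaragua event generate disproportionately large tsunamis. The numerical values below are illustrative assumptions, not the paper's parameters.

```python
import math

def moment_magnitude(M0_Nm):
    """Moment magnitude from seismic moment (Hanks-Kanamori relation)."""
    return (math.log10(M0_Nm) - 9.1) / 1.5

def slip_from_moment(M0_Nm, L_m, W_m, mu_Pa):
    """Average slip D = M0 / (mu * L * W)."""
    return M0_Nm / (mu_Pa * L_m * W_m)

# Illustrative numbers only:
M0 = 3.0e20            # seismic moment (N*m), roughly Mw 7.6
L, W = 100e3, 40e3     # fault length and width (m)

Mw = moment_magnitude(M0)
# Same moment, different rigidity: the shallow low-rigidity fault slips more.
slip_shallow = slip_from_moment(M0, L, W, mu_Pa=1.0e10)  # ~10 GPa near-trench
slip_deep = slip_from_moment(M0, L, W, mu_Pa=4.0e10)     # ~40 GPa deeper crust
```

With these assumed values the shallow fault slips four times as much as the deep one for an identical W-phase moment, which is why a depth-dependent rigidity is needed to pick a fault model that reproduces coastal tsunami heights.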

  9. Characterization of Background Traffic in Hybrid Network Simulation

    National Research Council Canada - National Science Library

    Lauwens, Ben; Scheers, Bart; Van de Capelle, Antoine

    2006-01-01

... Two approaches are common: discrete event simulation and fluid approximation. A discrete event simulation generates a huge number of events for a full-blown battlefield communication network, resulting in a very long runtime...

  10. Towards an Earthquake and Tsunami Early Warning in the Caribbean

    Science.gov (United States)

    Huerfano Moreno, V. A.; Vanacore, E. A.

    2017-12-01

The Caribbean region (CR) has a documented history of large damaging earthquakes and tsunamis that have affected coastal areas, including the events of Jamaica in 1692, the Virgin Islands in 1867, Puerto Rico in 1918, the Dominican Republic in 1946, and Haiti in 2010. There is clear evidence that tsunamis have been triggered by large earthquakes that deformed the ocean floor around the Caribbean Plate boundary. The CR is monitored jointly by national/regional/local seismic, geodetic, and sea-level networks. All monitoring institutions participate in the UNESCO ICG/Caribe EWS; the purpose of this initiative is to minimize loss of life and destruction of property and to mitigate catastrophic economic impacts by promoting local research; sharing real-time (RT) earthquake, geodetic, and sea-level data; improving warning capabilities; and enhancing education and outreach strategies. Currently, more than 100 broadband seismic, 65 sea-level, and 50 high-rate GPS stations are available in real or near-real time. These real-time streams are used by local, regional, or worldwide detection and warning institutions to provide earthquake source parameters in a timely manner. Currently, any Caribbean event detected to have a magnitude greater than 4.5 is evaluated for tsunamigenic potential by the TWC, and sea level is measured. The regional cooperation is motivated both by research interests and by geodetic, seismic, and tsunami hazard monitoring and warning. It will allow imaging of the tectonic structure of the Caribbean region at high resolution, which will consequently permit further understanding of the seismic source properties of moderate and large events and the application of this knowledge to procedures of civil protection. To reach its goals, the virtual network has been designed following the highest technical standards: BB sensors, 24-bit A/D converters with 140 dB dynamic range, and real-time telemetry. Here we will discuss the state of the PR ...

  11. Tsunami simulation method initiated from waveforms observed by ocean bottom pressure sensors for real-time tsunami forecast; Applied for 2011 Tohoku Tsunami

    Science.gov (United States)

    Tanioka, Yuichiro

    2017-04-01

After the tsunami disaster due to the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system of earthquake and tsunami observation (S-NET) at the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operated cable network systems of seismometers and pressure sensors (DONET and DONET2). These are the densest observation network systems on top of source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has traditionally depended on estimation of earthquake parameters, such as the epicenter, depth, and magnitude of earthquakes. Recently, a tsunami forecast method has been developed using estimation of the tsunami source from tsunami waveforms observed at ocean bottom pressure sensors. However, when we have many pressure sensors separated by 30 km on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation directly from those dense tsunami observations. Observed tsunami height differences over a time interval at ocean bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation is initiated from that estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated from observed tsunami waveforms and coseismic deformation observed by GPS and ocean bottom sensors by Gusman et al. (2012) is used in this study. The ocean surface deformation is computed from the source model and used as an initial condition of the tsunami ...
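The idea of starting the simulation from an observed height field rather than an earthquake source model can be sketched in 1-D with the linear shallow-water equations. The Gaussian "observed" profile, uniform depth, and grid below are all assumptions for illustration, not the paper's configuration.

```python
import numpy as np

g, h = 9.81, 4000.0             # gravity (m/s^2); assumed uniform depth (m)
dx, n = 2000.0, 400             # grid spacing (m) and number of points
x = np.arange(n) * dx
c = np.sqrt(g * h)              # long-wave speed, about 198 m/s
dt = 0.5 * dx / c               # CFL-stable time step

# "Observed" sea-surface height interpolated between dense pressure sensors;
# a Gaussian hump stands in for the estimated height distribution.
eta = np.exp(-((x - x.mean()) / 30e3) ** 2)
u = np.zeros(n)                 # depth-averaged velocity starts at rest
mass0 = float(np.sum(eta) * dx)

# Linear shallow-water equations on a staggered grid:
#   du/dt = -g d(eta)/dx ,  d(eta)/dt = -h du/dx
for _ in range(200):
    u[1:] -= dt * g * (eta[1:] - eta[:-1]) / dx
    eta[:-1] -= dt * h * (u[1:] - u[:-1]) / dx

mass = float(np.sum(eta) * dx)
```

The initial hump splits into two half-amplitude waves traveling outward at the long-wave speed, with the displaced volume conserved; no fault model or source inversion was needed to launch the forecast.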

  12. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution: the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks.

  13. Strong-motion observations of the M 7.8 Gorkha, Nepal, earthquake sequence and development of the N-SHAKE strong-motion network

    Science.gov (United States)

    Dixit, Amod; Ringler, Adam; Sumy, Danielle F.; Cochran, Elizabeth S.; Hough, Susan E.; Martin, Stacey; Gibbons, Steven; Luetgert, James H.; Galetzka, John; Shrestha, Surya; Rajaure, Sudhir; McNamara, Daniel E.

    2015-01-01

    We present and describe strong-motion data observations from the 2015 M 7.8 Gorkha, Nepal, earthquake sequence collected using existing and new Quake-Catcher Network (QCN) and U.S. Geological Survey NetQuakes sensors located in the Kathmandu Valley. A comparison of QCN data with waveforms recorded by a conventional strong-motion (NetQuakes) instrument validates the QCN data. We present preliminary analysis of spectral accelerations, and peak ground acceleration and velocity for earthquakes up to M 7.3 from the QCN stations, as well as preliminary analysis of the mainshock recording from the NetQuakes station. We show that mainshock peak accelerations were lower than expected and conclude the Kathmandu Valley experienced a pervasively nonlinear response during the mainshock. Phase picks from the QCN and NetQuakes data are also used to improve aftershock locations. This study confirms the utility of QCN instruments to contribute to ground-motion investigations and aftershock response in regions where conventional instrumentation and open-access seismic data are limited. Initial pilot installations of QCN instruments in 2014 are now being expanded to create the Nepal–Shaking Hazard Assessment for Kathmandu and its Environment (N-SHAKE) network.

  14. Developed hydraulic simulation model for water pipeline networks

    Directory of Open Access Journals (Sweden)

    A. Ayad

    2013-03-01

A numerical method based on linear graph theory is presented for both steady-state and extended period simulation in a pipe network, including its hydraulic components (pumps, valves, junctions, etc.). The developed model is based on the Extended Linear Graph Theory (ELGT) technique. This technique is modified to include new network components, such as flow control valves and tanks, and is also expanded for extended period simulation (EPS). A newly modified method for calculating updated flows, which improves the convergence rate, is introduced. Both benchmark and actual networks are analyzed to check the reliability of the proposed method. The results confirm the good performance of the proposed method.
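For contrast with the graph-theoretic formulation, the classic Hardy Cross loop-balancing method (a different technique from the paper's ELGT) shows what any pipe-network solver must accomplish: iteratively correct loop flows until the head losses around each loop cancel. The two-pipe loop and resistance coefficients below are made up for illustration.

```python
# Two parallel pipes form one loop between a supply and a demand junction.
# Head-loss model h = r * Q * |Q|; loop convention: one direction positive.
r = [4.0, 1.0]        # assumed pipe resistance coefficients
Q = [50.0, -50.0]     # initial guess satisfying continuity (total 100 L/s)

for _ in range(20):
    # Hardy Cross correction: dQ = -sum(r*Q*|Q|) / sum(2*r*|Q|)
    num = sum(ri * qi * abs(qi) for ri, qi in zip(r, Q))
    den = sum(2 * ri * abs(qi) for ri, qi in zip(r, Q))
    dQ = -num / den
    Q = [qi + dQ for qi in Q]   # same correction applied around the loop
```

At convergence the loop head-loss sum vanishes while continuity is preserved, giving about 33.3 L/s in the high-resistance pipe and 66.7 L/s in the low-resistance pipe; faster flow-update schemes like the one in the record aim to reach this balance in fewer iterations.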

  15. Parallel 3D Simulation of Seismic Wave Propagation in the Structure of Nobi Plain, Central Japan

    Science.gov (United States)

    Kotani, A.; Furumura, T.; Hirahara, K.

    2003-12-01

    We performed large-scale parallel simulations of seismic wave propagation to understand the complex wave behavior in the 3D basin structure of the Nobi Plain, one of the most densely populated areas of central Japan. Many large earthquakes occurred in this area in the past, such as the 1891 Nobi earthquake (M8.0), the 1944 Tonankai earthquake (M7.9) and the 1945 Mikawa earthquake (M6.8). In order to mitigate the potential disasters of future earthquakes, the 3D subsurface structure of the Nobi Plain has recently been investigated by local governments. We referred to this model together with Bouguer anomaly data to construct a detailed 3D basin structure model for the Nobi Plain, and conducted computer simulations of ground motions. We first evaluated the ground motions for two small earthquakes (M4~5); one occurred just beneath the basin edge to the west, and the other to the south. The ground motions from these earthquakes were well recorded by the strong-motion networks: K-net, KiK-net, and seismic intensity instruments operated by local governments. We compare the observed seismograms with simulations to validate the 3D model. For the 3D simulation we sliced the 3D model into a number of layers assigned to many processors for concurrent computing. The equations of motion are solved using a high-order (32nd) staggered-grid FDM in the horizontal directions and a conventional (4th-order) FDM in the vertical direction, with MPI inter-processor communication between neighboring regions. The simulation model is 128 km by 128 km by 43 km, discretized at a variable grid size of 62.5-125 m in the horizontal directions and 31.25-62.5 m in the vertical direction. We assigned a minimum shear-wave velocity of Vs=0.4 km/s at the top of the sedimentary basin. The seismic sources for the small events are approximated by double-couple point sources and we simulate the seismic wave propagation at a maximum frequency of 2 Hz. We used the Earth Simulator (JAMSTEC, Yokohama Inst) to conduct such
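
    A minimal sketch of the velocity-stress staggered-grid FDM such simulations are built on, reduced to 1-D and 2nd order (the abstract's scheme is 3-D and 32nd order in the horizontal). The grid spacing and the Vs = 0.4 km/s floor are taken from the abstract; density, time step and source are invented:

```python
import numpy as np

nx, dx, dt, nt = 400, 62.5, 0.005, 800   # grid points, spacing [m], step [s], steps
rho = np.full(nx, 2000.0)                # density [kg/m^3] (assumed)
vs = np.full(nx, 400.0)                  # shear-wave velocity [m/s]
mu = rho * vs**2                         # shear modulus

v = np.zeros(nx)          # particle velocity on integer grid points
s = np.zeros(nx - 1)      # shear stress on half grid points

for it in range(nt):
    # velocity update from the stress gradient (CFL = vs*dt/dx ~ 0.03, stable)
    v[1:-1] += dt / rho[1:-1] * (s[1:] - s[:-1]) / dx
    # band-limited source pulse injected at the centre of the grid
    t = it * dt
    v[nx // 2] += np.exp(-(((t - 1.0) / 0.3) ** 2))
    # stress update from the velocity gradient
    s += dt * mu[:-1] * (v[1:] - v[:-1]) / dx
# two pulses now propagate symmetrically away from the source
```

    The production scheme differs mainly in dimensionality, stencil order, and the domain decomposition with MPI halo exchange between the layers assigned to each processor.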

  16. Dynamic simulation of a steam generator by neural networks

    International Nuclear Information System (INIS)

    Masini, R.; Padovani, E.; Ricotti, M.E.; Zio, E.

    1999-01-01

    Numerical simulation by computers of the dynamic evolution of complex systems and components is a fundamental phase of any modern engineering design activity. This is of particular importance for risk-based design projects, which require that the system behavior be analyzed under several and often extreme conditions. The traditional methods of simulation typically entail long, iterative processes which lead to large simulation times, often exceeding the real time of the transients. Artificial neural networks (ANNs) may be exploited in this context, their advantages residing mainly in the speed of computation, in the capability of generalizing from few examples, in the robustness to noisy and partially incomplete data and in the capability of performing empirical input-output mapping without complete knowledge of the underlying physics. In this paper we present a novel approach to dynamic simulation by ANNs based on a superposition scheme in which a set of networks are individually trained, each one to respond to a different input forcing function. The dynamic simulation of a steam generator is considered as an example to show the potentialities of this tool and to point out the difficulties and crucial issues which typically arise when attempting to establish an efficient neural network simulator. The structure of the network system is such as to feed back, at each time step, a portion of the past evolution of the transient, and this allows good reproduction of non-linear dynamic behaviors as well. A nice characteristic of the approach is that the modularization of the training substantially reduces its burden and gives this neural simulation tool a nice feature of transportability. (orig.)

  17. How to build and teach with QuakeCaster: an earthquake demonstration and exploration tool

    Science.gov (United States)

    Linton, Kelsey; Stein, Ross S.

    2015-01-01

    QuakeCaster is an interactive, hands-on teaching model that simulates earthquakes and their interactions along a plate-boundary fault. QuakeCaster contains the minimum number of physical processes needed to demonstrate most observable earthquake features. A winch to steadily reel in a line simulates the steady plate tectonic motions far from the plate boundaries. A granite slider in frictional contact with a nonskid rock-like surface simulates a fault at a plate boundary. A rubber band connecting the line to the slider simulates the elastic character of the Earth’s crust. By stacking and unstacking sliders and cranking in the winch, one can see the results of changing the shear stress and the clamping stress on a fault. By placing sliders in series with rubber bands between them, one can simulate the interaction of earthquakes along a fault, such as cascading or toggling shocks. By inserting a load scale into the line, one can measure the stress acting on the fault throughout the earthquake cycle. As observed for real earthquakes, QuakeCaster events are not periodic, time-predictable, or slip-predictable. QuakeCaster produces rare but unreliable “foreshocks.” When fault gouge builds up, the friction goes to zero and fault creep is seen without large quakes. QuakeCaster events produce very small amounts of fault gouge that strongly alter its behavior, resulting in smaller, more frequent shocks as the gouge accumulates. QuakeCaster is designed so that students or audience members can operate it and record its output. With a stopwatch and ruler one can measure and plot the timing, slip distance, and force results of simulated earthquakes. People of all ages can use the QuakeCaster model to explore hypotheses about earthquake occurrence. QuakeCaster takes several days and about $500.00 in materials to build.
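
    The stick-slip physics QuakeCaster demonstrates can be sketched as a minimal spring-slider simulation. All parameter values below are invented, and without the gouge effects described above this idealized version produces perfectly periodic events, unlike the device (or real faults):

```python
# Winch (constant loading) + rubber band (spring k) + slider with
# static/dynamic friction: the minimal stick-slip "earthquake machine".
k = 2.0                           # spring stiffness [N/m], invented
v_load = 1.0                      # winch reel-in rate [m/s], invented
f_static, f_dynamic = 10.0, 6.0   # friction thresholds [N], invented
dt = 0.01

x_load = x_block = 0.0
events = []                       # (time, slip) of each simulated earthquake
for step in range(100_000):
    x_load += v_load * dt         # winch steadily reels in the line
    force = k * (x_load - x_block)
    if force >= f_static:
        # slider jumps until the spring force relaxes to the dynamic level
        slip = (force - f_dynamic) / k
        x_block += slip
        events.append((step * dt, slip))
# each event slips (f_static - f_dynamic)/k = 2.0 m, recurring every 2 s
```

    Making `f_static` vary randomly from event to event (as accumulating gouge does in the physical model) breaks the periodicity and yields the irregular, non-time-predictable sequences described above.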

  18. Performance of Real-time Earthquake Information System in Japan

    Science.gov (United States)

    Nakamura, H.; Horiuchi, S.; Wu, C.; Yamamoto, S.

    2008-12-01

    Horiuchi et al. (2005) developed a real-time earthquake information system (REIS) using Hi-net, a densely deployed nationwide seismic network, which consists of about 800 stations operated by NIED, Japan. REIS determines hypocenter locations and earthquake magnitudes automatically within a few seconds after P waves arrive at the closest station and calculates focal mechanisms within about 15 seconds. Obtained hypocenter parameters are transferred immediately in XML format to a computer at the Japan Meteorological Agency (JMA), which started the service of EEW to special users in June 2005. JMA also developed an EEW system using 200 stations, and the results of the two systems are merged. Among all first-issued EEW reports by both systems, REIS information accounts for about 80 percent. This study examines the rapidity and credibility of REIS by analyzing the 4050 earthquakes with magnitude larger than 3.0 which have occurred around the Japanese islands since 2005. REIS re-determines hypocenter parameters every second as waveform data are revised; here, we discuss only the results of the first reports. On rapidity, our results show that about 44 percent of the first reports are issued within 5 seconds after the P waves arrive at the closest stations. Note that this 5-second time window includes a delay of about 2 seconds due to data packaging and transmission. To get reliable solutions, REIS waits until two stations detect P waves for events inside the network, but four stations for events outside the network. For earthquakes with hypocentral distance less than 100 km, 55 percent of earthquakes are warned within 5 seconds and 87 percent within 10 seconds. Most events with long time delays are small and triggered by S-wave arrivals. About 80 percent of events have epicenter differences of less than 20 km relative to JMA manually determined locations. Because of the large lateral heterogeneity in seismic velocity, the difference depends

  19. The Colombia Seismological Network

    Science.gov (United States)

    Blanco Chia, J. F.; Poveda, E.; Pedraza, P.

    2013-05-01

    The latest seismological equipment and data processing instrumentation installed at the Colombia Seismological Network (RSNC) are described. System configuration, network operation, and data management are discussed. The data quality and the new seismological products are analyzed. The main purpose of the network is to monitor local seismicity, with special emphasis on seismic activity surrounding the Colombian Pacific and Caribbean oceans, for early warning in case a tsunami is produced by an earthquake. The Colombian territory is located at the northwestern corner of South America, where three tectonic plates converge: the Nazca, the Caribbean and the South American. The dynamics of these plates, when resulting in earthquakes, are continuously monitored by the network. In 2012, the RSNC registered an average of 67 events per day, of which a mean of 36 earthquakes could be located well. In 2010 the network also registered an average of 67 events per day, but only a mean of 28 earthquakes could be located; the difference is due to the expansion of the network. The network is made up of 84 stations equipped with different kinds of sensors: broadband 40 s and 120 s seismometers, accelerometers, and short-period 1 s sensors. The signal is transmitted continuously in real time to the Central Recording Center located in Bogotá, using satellite, telemetry, and Internet links. Moreover, there are some other stations whose information must be collected in situ. Data are recorded and processed digitally using two different systems, EARTHWORM and SEISAN, which are able to process and share information between them. The RSNC has designed and implemented a web system to share the seismological data. This system uses tools like JavaScript and Oracle and programming languages like PHP to allow users to access the seismicity registered by the network almost in real time, as well as to download waveforms and technical details. The coverage

  20. A way to synchronize models with seismic faults for earthquake forecasting

    DEFF Research Database (Denmark)

    González, Á.; Gómez, J.B.; Vázquez-Prada, M.

    2006-01-01

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their final goal would be to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current status of the actual....... Earthquakes, though, provide indirect but measurable clues of the stress and strain status in the lithosphere, which should be helpful for the synchronization of the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault...... models between themselves and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture area of the synthetic earthquakes of this model on other models, the latter become partially synchronized...

  1. Simulation of Attacks for Security in Wireless Sensor Network.

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
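
    A toy version of the attack-impact analysis described above: estimating node battery lifetime with and without a packet-flooding attacker. The energy figures are invented round numbers, not measurements from the paper's virtual platform:

```python
# Per-tick energy budget of a sensor node [microjoules], invented values.
E_RX, E_TX, E_IDLE = 2.0, 3.0, 0.1
BATTERY = 1.0e6                      # battery capacity [microjoules]

def lifetime_ticks(rx_per_tick, tx_per_tick):
    """Ticks until the battery is exhausted at a constant traffic load."""
    drain = rx_per_tick * E_RX + tx_per_tick * E_TX + E_IDLE
    return BATTERY / drain

normal = lifetime_ticks(rx_per_tick=1, tx_per_tick=1)     # benign traffic
attacked = lifetime_ticks(rx_per_tick=50, tx_per_tick=1)  # flooding attack
# lifetime shrinks by the ratio of per-tick drains: 103.1 / 5.1, about 20x
```

    Replacing the constant-rate model with per-instruction execution traces from a virtual platform gives the finer-grained power and execution-time estimates the methodology targets.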

  2. Simulation of Attacks for Security in Wireless Sensor Network

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-01-01

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710

  3. A Bayesian Approach to Real-Time Earthquake Phase Association

    Science.gov (United States)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach, based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity, has been in use now for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel Bayesian association algorithm, which treats the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one-to-many relations (one earthquake, many phases), during the association process the situation is quite different: both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on the estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.
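
    The classical time-space stacking that this Bayesian approach builds on can be sketched in one dimension: each P pick implies an origin time for every candidate epicentre, and picks belonging to one event agree on it. Stations, picks and velocity below are invented:

```python
# Toy 1-D back-projection associator: find the candidate epicentre for
# which all picks imply the same origin time.
V = 5.0                                        # uniform P velocity [km/s]
stations = {"A": 0.0, "B": 40.0, "C": 100.0}   # positions on a line [km]
picks = {"A": 7.0, "B": 11.0, "C": 23.0}       # P arrival times [s]

best, best_err = None, float("inf")
for x in range(0, 101):                        # candidate epicentres, 1-km grid
    # each pick implies an origin time t0 = t_arrival - distance / V
    t0s = [picks[name] - abs(x - sx) / V for name, sx in stations.items()]
    err = max(t0s) - min(t0s)                  # spread of implied origin times
    if err < best_err:
        best, best_err = (x, sum(t0s) / len(t0s)), err
# best -> epicentre x = 10 km, origin time t0 = 5 s (err = 0)
```

    The abstract's point is that with sparse or uneven station coverage this deterministic stacking becomes ambiguous, which motivates carrying many competing hypocenter hypotheses with probabilities instead of a single winner.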

  4. Electrostatically actuated resonant switches for earthquake detection

    KAUST Repository

    Ramini, Abdallah H.; Masri, Karim M.; Younis, Mohammad I.

    2013-01-01

    action can be functionalized for useful functionalities, such as shutting off gas pipelines in the case of earthquakes, or can be used to activate a network of sensors for seismic activity recording in health monitoring applications. By placing a

  5. The design and implementation of a network simulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2013-11-01

    Full Text Available these events and their effects can enable researchers to identify these threats and find ways to counter them. In this paper we present the design of a network simulation platform which can enable researchers to study dynamic behaviour of networks, network...

  6. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activate a passive network is done by introducing

  7. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  8. Modelling Altitude Information in Two-Dimensional Traffic Networks for Electric Mobility Simulation

    Directory of Open Access Journals (Sweden)

    Diogo Santos

    2016-06-01

    Full Text Available Elevation data is important for electric vehicle simulation. However, traffic simulators are often two-dimensional and do not offer the capability of modelling urban networks taking elevation into account. Specifically, SUMO - Simulation of Urban Mobility, a popular microscopic traffic simulator, relies on networks previously modelled with elevation data to provide this information during simulations. This work tackles the problem of adding elevation data to urban network models, particularly for the case of the Porto urban network in Portugal. With this goal in mind, a comparison between different altitude information retrieval approaches is made, and a simple tool to annotate network models with altitude data is proposed. The work starts by describing the methodological approach followed during research and development, then describes and analyses its main findings, including an in-depth explanation of the proposed tool. Lastly, this work reviews some related work on the subject.
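
    The core annotation step such a tool needs can be sketched as a bilinear lookup of 2-D node coordinates in an elevation raster. The DEM values and node positions below are made up; the real tool would read SUMO network files and a georeferenced DEM:

```python
def bilinear(grid, x, y):
    """Elevation at fractional grid coordinates (x = column, y = row)."""
    j = min(int(x), len(grid[0]) - 2)   # clamp so j+1, i+1 stay in range
    i = min(int(y), len(grid) - 2)
    fx, fy = x - j, y - i
    return ((1 - fx) * (1 - fy) * grid[i][j]
            + fx * (1 - fy) * grid[i][j + 1]
            + (1 - fx) * fy * grid[i + 1][j]
            + fx * fy * grid[i + 1][j + 1])

dem = [[100.0, 110.0],                  # 2x2 elevation raster [m]
       [120.0, 130.0]]
nodes = {"n1": (0.0, 0.0), "n2": (0.5, 0.5), "n3": (1.0, 1.0)}
# annotate each 2-D node with a z coordinate sampled from the raster
nodes3d = {k: (x, y, bilinear(dem, x, y)) for k, (x, y) in nodes.items()}
# n2 lies at the cell centre: (100 + 110 + 120 + 130) / 4 = 115.0 m
```

    In practice the node's geographic coordinates must first be transformed into the raster's grid coordinates; the interpolation itself is unchanged.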

  9. Real-time earthquake monitoring: Early warning and rapid response

    Science.gov (United States)

    1991-01-01

    A panel was established to investigate the subject of real-time earthquake monitoring (RTEM) and suggest recommendations on the feasibility of using a real-time earthquake warning system to mitigate earthquake damage in regions of the United States. The findings of the investigation and the related recommendations are described in this report. A brief review of existing real-time seismic systems is presented with particular emphasis given to the current California seismic networks. Specific applications of a real-time monitoring system are discussed along with issues related to system deployment and technical feasibility. In addition, several non-technical considerations are addressed including cost-benefit analysis, public perceptions, safety, and liability.

  10. Mental health in L'Aquila after the earthquake

    Directory of Open Access Journals (Sweden)

    Paolo Stratta

    2012-06-01

    Full Text Available INTRODUCTION: In the present work we describe the mental health condition of the L'Aquila population in the aftermath of the earthquake in terms of structural, process and outcome perspectives. METHOD: A review of the published reports on the L'Aquila earthquake was performed. RESULTS: Although important psychological distress has been reported by the population, a capacity for resilience can be observed. However, while resilience mechanisms intervened in the immediate aftermath of the earthquake, important dangers are conceivable in the current medium- to long-term perspective, due to the long-lasting alterations of day-to-day life and the disruption of social networks, which are well known to be associated with mental health problems. CONCLUSIONS: In a condition such as an earthquake, the immediate physical, medical, and emergency rescue needs must be addressed first. However, training first responders to identify psychological distress symptoms would be important for mental health triage in the field.

  11. BioNessie(G) - a grid-enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X; Jiang, J; Ajayi, O; Gu, X; Gilbert, D; Sinnott, R

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scal...

  12. The 2015 Gorkha earthquake investigated from radar satellites: Slip and stress modeling along the MHT

    Directory of Open Access Journals (Sweden)

    Faqi Diao

    2015-10-01

    Full Text Available The active collision at the Himalayas combines crustal shortening and thickening, associated with the development of hazardous seismogenic faults. The 2015 Kathmandu earthquake largely affected Kathmandu city and partially ruptured a previously identified seismic gap. With a magnitude of Mw 7.8 as determined by the GEOFON seismic network, the 25 April 2015 earthquake displays uplift of the Kathmandu basin constrained by interferometrically processed ALOS-2, RADARSAT-2 and Sentinel-1 satellite radar data. An area of about 7,000 km² in the basin showed ground uplift locally exceeding 2 m, and a similarly large area (approx. 9,000 km²) showed subsidence in the north; both could be simulated with a fault localized beneath the Kathmandu basin at a shallow depth of 5-15 km. Coulomb stress calculations reveal that the same fault adjacent to the Kathmandu basin experienced a stress increase, as did sub-parallel faults of the thin-skinned nappes, exactly at the location where the largest aftershock occurred (Mw 7.3 on 12 May 2015). Therefore this study provides insights into the shortening and uplift tectonics of the Himalayas and shows the stress redistribution associated with the earthquake.

  13. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

  14. Neural Network Emulation of Reionization Simulations

    Science.gov (United States)

    Schmit, Claude J.; Pritchard, Jonathan R.

    2018-05-01

    Next generation radio experiments such as LOFAR, HERA and SKA are expected to probe the Epoch of Reionization (EoR) and claim a first direct detection of the cosmic 21cm signal within the next decade. One of the major challenges for these experiments will be dealing with enormous incoming data volumes, and machine learning is key to increasing our data analysis efficiency. We consider the use of an artificial neural network to emulate 21cmFAST simulations and use it in a Bayesian parameter inference study. We then compare the network predictions to a direct evaluation of the EoR simulations and analyse the dependence of the results on the training set size. We find that a training set of 100 samples can recover the error contours of a full-scale MCMC analysis which evaluates the model at each step.
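
    The emulation idea can be sketched as fitting a cheap surrogate to ~100 samples of an expensive model. Here the "simulator" is a stand-in analytic function and the surrogate is a least-squares polynomial rather than the neural network used in the paper:

```python
import numpy as np

def expensive_simulator(theta):
    # stand-in for an expensive summary statistic (invented, not 21cmFAST)
    return np.sin(3 * theta) + 0.5 * theta ** 2

rng = np.random.default_rng(0)
train_x = rng.uniform(-1, 1, 100)        # training set of 100 samples
train_y = expensive_simulator(train_x)

deg = 7                                  # polynomial surrogate degree
A = np.vander(train_x, deg + 1)          # design matrix of monomials
coef, *_ = np.linalg.lstsq(A, train_y, rcond=None)

def emulator(theta):
    """Cheap surrogate: evaluate the fitted polynomial instead."""
    return np.vander(np.atleast_1d(theta), deg + 1) @ coef

test_x = np.linspace(-1, 1, 50)
err = np.max(np.abs(emulator(test_x) - expensive_simulator(test_x)))
# err stays well below 0.05 across [-1, 1] for this smooth target
```

    Inside an MCMC loop, every likelihood evaluation then calls `emulator` instead of the simulator, which is what makes full posterior sampling affordable.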

  15. Napa earthquake: An earthquake in a highly connected world

    Science.gov (United States)

    Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.

    2014-12-01

    The Napa earthquake recently occurred close to Silicon Valley. This makes it a good candidate to study what social networks, wearable objects and website traffic analysis (flashsourcing) can tell us about the way eyewitnesses react to ground shaking. In the first part, we compare the ratio of people publishing tweets and the ratio of people visiting the EMSC (European-Mediterranean Seismological Centre) real-time information website in the first minutes following the earthquake with the results published by Jawbone, which show that the proportion of people waking up depends (naturally) on the epicentral distance. The key question to evaluate is whether the proportions of inhabitants tweeting or visiting the EMSC website are similar to the proportion of people waking up as shown by the Jawbone data. If so, this supports the premise that all methods provide a reliable image of the relative ratio of people waking up. The second part of the study focuses on the reaction time for both Twitter and EMSC website access. We show, similarly to what was demonstrated for the Mineral, Virginia, earthquake (Bossu et al., 2014), that hit times on the EMSC website follow the propagation of the P waves and that 2 minutes of website traffic is sufficient to determine the epicentral location of an earthquake on the other side of the Atlantic. We also compare with the publication time of messages on Twitter. Finally, we check whether the number of tweets and the number of visitors relative to the number of inhabitants are correlated to the local level of shaking. Together these results will tell us whether the reaction of eyewitnesses to ground shaking as observed through Twitter and the EMSC website analysis is tool-specific (i.e. specific to Twitter or the EMSC website) or whether it reflects people's actual reactions.

  16. Induced seismicity provides insight into why earthquake ruptures stop

    KAUST Repository

    Galis, Martin

    2017-12-21

    Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures.
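
    The scaling relation described above can be sketched numerically. The prefactor below is an invented placeholder, not the calibrated value from the paper, but the 3/2 exponent reproduces the key property that every 100-fold increase in injected volume raises the maximum magnitude of self-arrested ruptures by two units:

```python
import math

GAMMA = 1.5e10   # scaling prefactor, purely illustrative (not from the paper)

def max_moment(delta_v):
    """Largest seismic moment [N m] of a self-arrested rupture for an
    injected volume delta_v [m^3]: M0_max ~ GAMMA * delta_v**(3/2)."""
    return GAMMA * delta_v ** 1.5

def moment_magnitude(m0):
    # standard Hanks-Kanamori moment magnitude
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

for dv in (1e3, 1e5, 1e7):               # injected volumes [m^3]
    mw = moment_magnitude(max_moment(dv))
    print(f"dV = {dv:.0e} m^3  ->  Mw_max = {mw:.2f}")
# with this prefactor: Mw_max = 3.72, 5.72, 7.72 (+2 per 100x volume)
```

    The +2-per-100x behavior follows directly from the exponents: volume enters the moment as a 3/2 power, and the magnitude scale compresses moment by 2/3 per decade, so 4 decades of volume give exactly 4 magnitude units.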

  17. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  20. Seismic Hazard Analysis on a Complex, Interconnected Fault Network

    Science.gov (United States)

    Page, M. T.; Field, E. H.; Milner, K. R.

    2017-12-01

    In California, seismic hazard models have evolved from simple, segmented, prescriptive models to much more complex representations of multi-fault and multi-segment earthquakes on an interconnected fault network. During the development of the 3rd Uniform California Earthquake Rupture Forecast (UCERF3), the prevalence of multi-fault ruptures in the modeling was controversial. Yet recent earthquakes, for example the Kaikōura earthquake - as well as new research on the potential of multi-fault ruptures (e.g., Nissen et al., 2016; Sahakian et al., 2017) - have validated this approach. For large crustal earthquakes, multi-fault ruptures may be the norm rather than the exception. As datasets improve and we can view the rupture process at a finer scale, the interconnected, fractal nature of faults is revealed even by individual earthquakes. What is the proper way to model earthquakes on a fractal fault network? We show multiple lines of evidence that connectivity may be underestimated even in modern models such as UCERF3, although clustering in UCERF3 mitigates some modeling simplifications. We need a methodology that can be applied equally well where the fault network is well mapped and where it is not - an extendable methodology that allows us to "fill in" gaps in the fault network and in our knowledge.

  1. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques, and cost and time factors limit test beds for studying wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes the capture of power consumption and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs, such as capturing power consumption at various levels of granularity, support for mobility and environmental dynamics, and the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  2. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    Science.gov (United States)

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
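A minimal sketch of the frame-based temporal Gillespie idea for an SI (susceptible-infected) process follows. It is a simplified illustration, not the authors' implementation: events are resolved only at the frame resolution `dt`, and the remaining fraction of a frame after an event is reused whole, which is an approximation.

```python
import random

def temporal_gillespie_si(contacts, n_frames, dt, beta, seed, rng=None):
    """Sketch of the temporal Gillespie idea for an SI process on a
    time-varying network. contacts maps frame index -> list of (u, v)
    contact edges active during that frame; beta is the per-contact
    transmission rate. Returns the final set of infected nodes."""
    rng = rng or random.Random()
    infected = {seed}
    tau = rng.expovariate(1.0)  # normalized waiting time to next event
    acc = 0.0                   # integrated total rate so far
    for t in range(n_frames):
        while True:
            # susceptible-infected contacts active in this frame
            si = [(u, v) for (u, v) in contacts.get(t, [])
                  if (u in infected) != (v in infected)]
            rate = beta * len(si)
            if not si or acc + rate * dt < tau:
                acc += rate * dt  # no event this frame; keep integrating
                break
            # integrated rate crosses tau inside this frame: fire an event
            u, v = rng.choice(si)
            infected.add(u if v in infected else v)
            tau = rng.expovariate(1.0)  # draw a fresh waiting time
            acc = 0.0
    return infected
```

Because events only fire when the integrated rate crosses the exponential waiting time, frames with no active susceptible-infected contacts cost a single bookkeeping step, which is the source of the speed-up over rejection sampling.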

  3. Characterization of the Virginia earthquake effects and source parameters from website traffic analysis

    Science.gov (United States)

    Bossu, R.; Lefebvre, S.; Mazet-Roux, G.; Roussel, F.

    2012-12-01

    This paper presents an after-the-fact study of the Virginia earthquake of 2011 August 23 using only the traffic observed on the EMSC website within minutes of its occurrence. Although the EMSC real-time information services remain poorly identified in the US, a traffic surge was observed immediately after the earthquake's occurrence. Such surges, known as flash crowds and commonly observed on our website after felt events within the Euro-Med region, are caused by eyewitnesses looking for information about the shaking they have just felt. EMSC developed an approach named flashsourcing to map the felt area and, in some circumstances, the regions affected by severe damage or network disruption. The felt area is mapped simply by locating the Internet Protocol (IP) addresses of the visitors to the website during these surges, while the existence of network disruption is detected by the instantaneous loss, at the time of the earthquake's occurrence, of existing Internet sessions originating from the impacted area. For the Virginia earthquake, which was felt at large distances, the effects of wave propagation are clearly observed. We show that the visits to our website are triggered by the P-wave arrival: the first visitors from a given locality reach our website 90 s after their location was shaken by the P waves. From a processing point of view, eyewitnesses can thus be considered ground motion detectors. By doing so, the epicentral location is determined through a simple dedicated location algorithm within 2 min of the earthquake's occurrence and with 30 km accuracy. The magnitude can be estimated in a similar time frame by using existing empirical relationships between the surface of the felt area and the magnitude. Concerning the effects of the earthquake, we check whether one can discriminate localities affected by strong shaking from web traffic analysis. This is actually the case. 
Localities affected by strong levels of shaking exhibit a higher ratio of visitors to the number
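The location step above - treating eyewitnesses as P-wave detectors - can be sketched as a toy grid search. The helper name, P velocity, and fixed reaction delay below are assumptions for illustration, not EMSC's actual algorithm; note that a constant delay between shaking and the first visit cancels out of the demeaned misfit:

```python
import math

def locate_from_first_visits(obs, grid, vp=6.5):
    """Toy grid-search epicenter locator in the spirit of flashsourcing.
    obs: list of (x_km, y_km, t_s), each locality with the time of its
    first website visit. For each candidate epicenter, the misfit is the
    variance of (observed time - predicted P travel time), so any common
    reaction delay drops out."""
    best = None
    for ex, ey in grid:
        # predicted P travel times from this candidate epicenter
        tt = [math.hypot(x - ex, y - ey) / vp for x, y, _ in obs]
        res = [t - t_hat for (_, _, t), t_hat in zip(obs, tt)]
        mean = sum(res) / len(res)
        misfit = sum((r - mean) ** 2 for r in res)
        if best is None or misfit < best[0]:
            best = (misfit, (ex, ey))
    return best[1]
```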

  4. Effect of heterogeneities on evaluating earthquake triggering of volcanic eruptions

    Directory of Open Access Journals (Sweden)

    J. Takekawa

    2013-02-01

    Full Text Available Recent research has indicated coupling between volcanic eruptions and earthquakes. Some studies have calculated the static stress transfer in the subsurface induced by the occurrence of earthquakes. Most of these analyses ignored the spatial heterogeneity of the subsurface, or only took into account the rigidity layering in the crust. On the other hand, smaller-scale heterogeneity, on the order of hundreds of meters, has been suggested by geophysical investigations. It is difficult to reflect that kind of heterogeneity in analysis models because accurate distributions of the fluctuation are not well understood in many cases. Thus, the effect of neglecting this smaller-scale heterogeneity on evaluating the earthquake triggering of volcanic eruptions is also not well understood. In the present study, we investigate the influence of the assumption of homogeneity on evaluating earthquake triggering of volcanic eruptions using finite element simulations. The crust is treated as a stochastic medium with different heterogeneity parameters (correlation length and magnitude of velocity perturbation) in our simulations. We adopt exponential and von Karman functions as spatial auto-correlation functions (ACF). In all our simulation results, neglecting the smaller-scale heterogeneity leads to underestimation of the failure pressure around a chamber wall, which relates to dyke initiation. The magnitude of the velocity perturbation has a larger effect on the tensile failure at the chamber wall than the choice of ACF or the correlation length. The maximum effect on the failure pressure in all our simulations is about twice that in the homogeneous case. This indicates that estimates of earthquake triggering due to static stress transfer should take into account heterogeneity on the order of hundreds of meters.

  5. Distribution of incremental static stress caused by earthquakes

    Directory of Open Access Journals (Sweden)

    Y. Y. Kagan

    1994-01-01

    Full Text Available Theoretical calculations, simulations and measurements of rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with the value of α significantly less than one. This is explained by a fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of an earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that this is caused by the increase of δ for small inter-earthquake distances due to location errors.
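The stability property invoked above (α = 1) can be checked numerically: the mean of n independent Cauchy draws is again Cauchy with the same scale, so the heavy tails never average out the way Gaussian noise would. A small sketch using inverse-CDF sampling:

```python
import math
import random

def cauchy_sample(rng, scale=1.0):
    """Cauchy (alpha = 1 stable) draw via the inverse CDF."""
    return scale * math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(42)
n, m = 50, 10000
# Stability: the mean of n Cauchy draws is again Cauchy with the SAME
# unit scale, so the median of |mean| stays near 1 no matter how large n is.
means = [sum(cauchy_sample(rng) for _ in range(n)) / n for _ in range(m)]
median_abs = sorted(abs(x) for x in means)[m // 2]  # ~1 for unit scale
```

The same experiment with Gaussian draws would give a median shrinking like 1/sqrt(n), which is exactly the contrast between α = 2 and α = 1 stable laws.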

  6. Computer simulation of randomly cross-linked polymer networks

    International Nuclear Information System (INIS)

    Williams, Timothy Philip

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task parallel implementations of the lattice Monte Carlo Bond Fluctuation model and Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recently published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneities were observed in the swollen model networks and were analysed by considering constituent substructures of varying size. The network connectivity determined the length scales at which the majority of the substructure unfolding process occurred. Simulated stress-strain curves and diffraction patterns for uniaxially deformed swollen networks were found to be consistent with experimental findings. Analysis of the relaxation dynamics of various network components revealed a dramatic slowdown due to the network connectivity. The cross-link junction spatial fluctuations for networks close to the sol-gel threshold were observed to be at least comparable with the phantom network prediction. The dangling chain ends were found to display the largest characteristic relaxation time. (author)

  7. Introduction: seismology and earthquake engineering in Mexico and Central and South America.

    Science.gov (United States)

    Espinosa, A.F.

    1982-01-01

    The results from seismological studies that are used by the engineering community are just one of the benefits obtained from research aimed at mitigating the earthquake hazard. In this issue of the Earthquake Information Bulletin, current programs in seismology and earthquake engineering, seismic networks, future plans and some of the cooperative programs with different international organizations are described by Latin-American seismologists. The article describes the development of seismology in Latin America and the seismological interests of the OAS. -P.N.Chroston

  8. Cloud-based systems for monitoring earthquakes and other environmental quantities

    Science.gov (United States)

    Clayton, R. W.; Olson, M.; Liu, A.; Chandy, M.; Bunn, J.; Guy, R.

    2013-12-01

    There are many advantages to using a cloud-based system to record and analyze environmental quantities such as earthquakes, radiation, various gases, dust and meteorological parameters. These advantages include robustness, dynamic scalability, and reduced costs. In this paper, we present our experiences over the last three years in developing a cloud-based earthquake monitoring system (the Community Seismic Network). This network consists of over 600 sensors (accelerometers) in the S. California region that send data directly to the Google App Engine, where they are analyzed. The system is capable of handling many other types of sensor data and generating a situation-awareness analysis as a product. Other advantages of the cloud-based system are integration with other peer networks and the ability to deploy anywhere in the world without having to build additional computing infrastructure.

  9. Evaluation of stability of foundation ground during earthquake, 6

    International Nuclear Information System (INIS)

    Kanatani, Mamoru; Nishi, Koichi

    1988-01-01

    The aseismatic capability of nuclear power plants located on Quaternary grounds, which consist of dense sand or sandy gravel, is heavily dependent on the stability of foundation grounds during earthquakes. In order to investigate the stability of the ground in more detail, it is necessary to develop a nonlinear earthquake response analysis method which can simulate the inelastic behavior of soil. In this report, the newly developed nonlinear response analysis method based on effective stress, the results of simulations compared against vibration table and centrifuge tests, and case studies on two-dimensional soil-structure interaction problems are described. Soil was regarded as a two-phase mixture composed of soil particle skeleton and pore water. In the equation of motion taking their interaction into account, an elastoplastic constitutive equation that can simulate the inelastic deformation behavior of soil under repeated shearing in two- or three-dimensional fields was introduced, and an analysis code which successively traces the behavior of the ground during earthquakes using FEM was developed. (K.I.)

  10. Large scale earthquake simulator of 3-D (simultaneous X-Y-Z direction)

    International Nuclear Information System (INIS)

    Shiraki, Kazuhiro; Inoue, Masao

    1983-01-01

    Japan is a country where earthquakes are frequent; accordingly, it is necessary to examine sufficiently the safety against earthquakes of important machinery and equipment such as nuclear and thermal power plants and chemical plants. For this purpose, aseismatic safety is evaluated by mounting an actual item or a model on a vibration table and vibrating it at magnitudes several times as large as actual earthquakes. The vibration tables used so far could vibrate only in one direction, or in two directions simultaneously, but now a three-dimensional vibration table has been completed which can vibrate in three directions simultaneously, each with arbitrary wave forms. With this vibration table, aseismatic tests can be carried out using earthquake waves close to actual ones. It is expected that this vibration table will play a large role in improving the aseismatic reliability of nuclear power machinery and equipment. When a large test body is vibrated on the vibration table, the center of gravity of the test body and the point of action of the vibrating force differ; therefore, rotating motion around the three axes is added to the motion in the three axial directions, and these motions must be controlled so as to realize three-dimensional earthquake motion. The main particulars and construction of the vibration table, the mechanism of three-direction vibration, the control of the table and the results of tests of the table are reported. (Kako, I.)

  11. SELANSI: a toolbox for simulation of stochastic gene regulatory networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2018-03-01

    Gene regulation is inherently stochastic. In many applications in Systems and Synthetic Biology, such as the reverse engineering and the de novo design of genetic circuits, stochastic effects, though potentially crucial, are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need for tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding Chemical Master Equation with a partial integro-differential equation that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self- and cross-regulation, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under the GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es. © The Author(s) 2017. Published by Oxford University Press.
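The computational cost that motivates CME approximations like the one above comes from exact stochastic simulation. For a single unregulated gene, the exact Gillespie SSA is a few lines; the rates below are hypothetical example values, and this is an illustrative baseline, not part of SELANSI:

```python
import random

def ssa_birth_death(k, gamma, t_end, rng):
    """Exact Gillespie SSA for one unregulated gene: mRNA produced at
    rate k, degraded at rate gamma * n. The stationary law is
    Poisson(k / gamma). Returns the copy number at time t_end."""
    t, n = 0.0, 0
    while True:
        total = k + gamma * n             # total reaction propensity
        t += rng.expovariate(total)       # time to next reaction
        if t >= t_end:
            return n
        if rng.random() * total < k:      # choose which reaction fired
            n += 1                        # production
        else:
            n -= 1                        # degradation
```

Every trajectory must be sampled many times to estimate a distribution, which is exactly the expense that solving the (approximated) CME directly avoids.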

  12. Surface Rupture Effects on Earthquake Moment-Area Scaling Relations

    Science.gov (United States)

    Luo, Yingdi; Ampuero, Jean-Paul; Miyakoshi, Ken; Irikura, Kojiro

    2017-09-01

    Empirical earthquake scaling relations play a central role in fundamental studies of earthquake physics and in current practice of earthquake hazard assessment, and are being refined by advances in earthquake source analysis. A scaling relation between seismic moment (M0) and rupture area (A) currently in use for ground motion prediction in Japan features a transition regime of the form M0 ~ A^2, between the well-recognized small (self-similar) and very large (W-model) earthquake regimes, which has counter-intuitive attributes and uncertain theoretical underpinnings. Here, we investigate the mechanical origin of this transition regime via earthquake cycle simulations, analytical dislocation models and numerical crack models on strike-slip faults. We find that, even if stress drop is assumed constant, the properties of the transition regime are controlled by surface rupture effects, comprising an effective rupture elongation along-dip due to a mirror effect and systematic changes of the shape factor relating slip to stress drop. Based on this physical insight, we propose a simplified formula to account for these effects in M0-A scaling relations for strike-slip earthquakes.
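The three regimes discussed above can be made concrete as a continuous piecewise scaling. All constants below (3 MPa stress drop, 15 km seismogenic width, transition ending at L ≈ 3W) are illustrative assumptions, not the calibrated Japanese relation or the paper's proposed formula:

```python
def moment_from_area(a_km2, dsigma=3e6, w_km=15.0):
    """Schematic three-regime M0(A) curve for strike-slip faults:
    self-similar (M0 ~ A^1.5), transition (M0 ~ A^2), W-model (M0 ~ A).
    Stress drop, seismogenic width and regime boundaries are assumed
    values chosen only to make the curve continuous."""
    a = a_km2 * 1e6                  # rupture area, m^2
    a1 = (w_km * 1e3) ** 2           # width saturation: L ~ W
    a2 = 3.0 * a1                    # assumed end of transition: L ~ 3W
    m1 = dsigma * a1 ** 1.5          # moment at the first boundary
    if a <= a1:
        return dsigma * a ** 1.5     # small events, self-similar
    if a <= a2:
        return m1 * (a / a1) ** 2    # transition regime, M0 ~ A^2
    return m1 * (a2 / a1) ** 2 * (a / a2)  # very large events, M0 ~ A
```

Matching the curve value at each boundary, as done here, is what makes the intermediate A^2 branch look like a smooth bridge between the two classical regimes rather than a separate law.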

  13. Failures and suggestions in Earthquake forecasting and prediction

    Science.gov (United States)

    Sacks, S. I.

    2013-12-01

    Seismologists have had poor success in earthquake prediction. However, wide-ranging observations from earlier great earthquakes show that precursory data can exist. In particular, two aspects seem promising. In agreement with simple physical modeling, b-values decrease in highly loaded fault zones for years before failure. Potentially more usefully, in high-stress regions the breakdown of dilatant patches leading to failure can yield observations related to expelled water. The volume increase (dilatancy) caused by high shear stresses decreases the pore pressure. Eventually, water flows back in, restoring the pore pressure, promoting failure and expelling the extra water. Of course, in a generally stressed region there may be many small patches that fail, as observed before the 1975 Haicheng earthquake. Only a few days before the major event will most of the dilatancy breakdown occur in the fault zone itself, as for the destructive 1976 Tangshan event. 'Water release' effects were observed before the great 1923 Kanto earthquake, the 1984 Yamasaki event, the 1975 Haicheng and 1976 Tangshan earthquakes, and also the 1995 Kobe earthquake. While there are obvious difficulties in water release observations, not least because there is currently no observational network anywhere, historical data does suggest some promise if we broaden our approach to this difficult subject.

  14. ns-2 extension to simulate localization system in wireless sensor networks

    CSIR Research Space (South Africa)

    Abu-Mahfouz, Adnan M

    2011-09-01

    Full Text Available The ns-2 network simulator is one of the most widely used tools by researchers to investigate the characteristics of wireless sensor networks. Academic papers focus on results and rarely include details of how ns-2 simulations are implemented...

  15. Methodologies for the assessment of earthquake-triggered landslides hazard. A comparison of Logistic Regression and Artificial Neural Network models.

    Science.gov (United States)

    García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.

    2009-04-01

    In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for evaluation and mitigation plan development in landslide-prone areas. Several techniques are available for landslide hazard research at a regional scale. Generally, they can be classified in two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinion and represent hazard levels in descriptive terms. Quantitative methods, on the other hand, are objective and are commonly used due to the correlation between the instability factors and the location of the landslides. Within this group, statistical approaches and new heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis of landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movements and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. An application was developed for El Salvador, a country of Central America where earthquake-triggered landslides are common. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 2001 January 13th earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered as the dependent variable. 
The results of the landslide susceptibility analysis are checked using landslide
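As a minimal illustration of the LR half of the comparison, a plain-Python logistic regression fit by gradient descent on a single instability factor (the normalized slope-gradient values below are hypothetical, not the El Salvador dataset):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=3000):
    """Logistic regression via stochastic gradient descent on log-loss.
    X: list of feature vectors (instability factors), y: 0/1 landslide
    occurrence. Returns weights [bias, w1, w2, ...]."""
    w = [0.0] * (len(X[0]) + 1)           # bias + one weight per factor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            g = p - yi                    # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def predict(w, xi):
    """Predicted probability of landslide occurrence for one cell."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
```

The ANN alternative in the abstract replaces this single linear decision surface with a trained multi-layer one, at the cost of interpretability of the fitted coefficients.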

  16. Simulation technologies in networking and communications selecting the best tool for the test

    CERN Document Server

    Pathan, Al-Sakib Khan; Khan, Shafiullah

    2014-01-01

    Simulation is a widely used mechanism for validating the theoretical models of networking and communication systems. Although the claims made based on simulations are considered to be reliable, how reliable they really are is best determined with real-world implementation trials. Simulation Technologies in Networking and Communications: Selecting the Best Tool for the Test addresses the spectrum of issues regarding the different mechanisms related to simulation technologies in networking and communications fields. Focusing on the practice of simulation testing instead of the theory, it presents

  17. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  18. Update earthquake risk assessment in Cairo, Egypt

    Science.gov (United States)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2017-07-01

    The Cairo earthquake (12 October 1992; m b = 5.8) remains, even after 25 years, one of the most painful events etched in Egyptians' memory. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead, 10,000 injured, and 3000 families left homeless). Nowadays, the most frequent and important question is "what if this earthquake were repeated today?" In this study, we simulate the ground-motion shaking of an earthquake of the same size as the 12 October 1992 event and the consequent socio-economic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Overall, the earthquake risk assessment clearly indicates that the losses and damage could double or triple in Cairo compared to the 1992 earthquake. The earthquake risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) lie at high seismic risk, while three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at a low seismic risk level. Moreover, the building damage estimates show that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk. Deteriorating buildings and infrastructure make the city particularly vulnerable to earthquake risks. For instance, more than 90 % of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb). Moreover, about 75 % of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management. 
Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  19. XNsim: Internet-Enabled Collaborative Distributed Simulation via an Extensible Network

    Science.gov (United States)

    Novotny, John; Karpov, Igor; Zhang, Chendi; Bedrossian, Nazareth S.

    2007-01-01

    In this paper, the XNsim approach to achieving Internet-enabled, dynamically scalable collaborative distributed simulation capabilities is presented. With this approach, a complete simulation can be assembled from shared component subsystems written in different formats that run on different computing platforms, with different sampling rates, in different geographic locations, and over single/multiple networks. The subsystems interact securely with each other via the Internet. Furthermore, the simulation topology can be dynamically modified. The distributed simulation uses a combination of hub-and-spoke and peer-to-peer network topology. A proof-of-concept demonstrator is also presented; the XNsim demonstrator can be accessed at http://www.jsc.draver.corn/xn, which hosts various examples of Internet-enabled simulations.

  20. Ground Motions Simulations and Site Effects in the Quito Basin (Ecuador)

    Science.gov (United States)

    Courboulex, F.; Castro-Cruz, D.; Laurendeau, A.; Bonilla, L. F.; Bertrand, E.; Mercerat, D.; Alvarado, A. P.

    2017-12-01

    The city of Quito (3M inhabitants), the capital of Ecuador, has been damaged several times in the past by large earthquakes. It is built on the hanging wall of an active reverse fault, constituting a piggy-back basin. The deep structure of this basin and its seismic response remain poorly known. We first use the recordings of 170 events on 18 accelerometers from the Quito permanent network and perform spectral ratio analysis. We find that the southern part of Quito shows strong site amplification at low frequency (≈0.35 Hz). High-frequency (≈5 Hz) amplifications also exist, but exhibit complex spatial variability. We then propose a new calibrated method based on empirical Green's functions (EGF) to simulate the ground motions due to a future earthquake in Quito. The idea is to use the results of a global database of source time functions (the SCARDEC database; Vallée and Douet, 2016; Courboulex et al., 2016) to define the average value and the variability of the stress-drop ratio parameter, which strongly affects the resulting simulations. We test the method on an Mw 7.8 event similar in location and focal mechanism to the Pedernales earthquake that occurred on 16 April 2016 on the subduction zone. For this aim, we use the recordings of 6 aftershocks of magnitude 5.6 to 6.2 as EGFs. The predicted Fourier spectra, peak values and response spectra we obtain are in good agreement with real data from the 2016 event recorded on the Quito network. With the constraints we impose on stress-drop ratios, we expect the simulated ground motions to be representative of the variability of other Pedernales-type events that could occur in the future. Our results also reproduce well the low-frequency site amplification in the south of the basin. This amplification could be particularly dangerous in the case of a mega subduction earthquake, like the one that struck Ecuador in 1906.
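
    The stress-drop ratio enters EGF simulation through the omega-squared spectral scaling between the target event and the small event. A minimal sketch of that scaling (not the authors' code; the corner frequency and moment values are illustrative assumptions):

```python
import numpy as np

# Omega-squared (Brune-type) ratio between the Fourier amplitude spectrum
# of a target event and that of a small EGF event. The parameter c is the
# stress-drop ratio whose mean and variability the method constrains.
def spectral_ratio(f, mo_target, mo_egf, c, fc_egf=1.0):
    # target corner frequency from moment and stress-drop scaling
    fc_target = fc_egf * (c * mo_egf / mo_target) ** (1.0 / 3.0)
    return (mo_target / mo_egf) * (1 + (f / fc_egf) ** 2) / (1 + (f / fc_target) ** 2)

f = np.logspace(-3, 2, 6)                                  # Hz
r = spectral_ratio(f, mo_target=1e20, mo_egf=1e17, c=1.0)
# low-frequency limit -> the moment ratio (1000 here); high-frequency
# limit -> moment ratio times (fc_target/fc_egf)**2 (10 here)
```

Multiplying the EGF spectrum by this ratio (or summing suitably delayed scaled copies in the time domain) is the essence of EGF simulation; the actual method calibrates c against the SCARDEC source-time-function database.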

  1. Coarse-grained simulation of a real-time process control network under peak load

    International Nuclear Information System (INIS)

    George, A.D.; Clapp, N.E. Jr.

    1992-01-01

    This paper presents a simulation study of the real-time process control network proposed for the new ANS reactor system at ORNL. A background discussion is provided on networks, modeling, and simulation, followed by an overview of the ANS process control network, its three peak-load models, and the results of a series of coarse-grained simulation studies carried out on these models using implementations of the 802.3, 802.4, and 802.5 standard local area networks.

  2. The Lushan earthquake and the giant panda: impacts and conservation.

    Science.gov (United States)

    Zhang, Zejun; Yuan, Shibin; Qi, Dunwu; Zhang, Mingchun

    2014-06-01

    Earthquakes not only result in a great loss of human life and property, but also have profound effects on the Earth's biodiversity. The Lushan earthquake occurred on 20 Apr 2013, with a magnitude of 7.0 and an intensity of 9.0 degrees. The distance from its epicenter to the nearest giant panda distribution site recorded in the Third National Survey was 17.0 km. Drawing on research on the Wenchuan earthquake (magnitude 8.0), which occurred approximately 5 years earlier, we briefly analyze the impacts of the Lushan earthquake on giant pandas and their habitat. An earthquake may interrupt ongoing behaviors of giant pandas and may also cause injury or death. In addition, an earthquake can damage conservation facilities for pandas and result in further habitat fragmentation and degradation. However, from a historical point of view, the impacts of human activities on giant pandas and their habitat may in fact far outweigh those of natural disasters such as earthquakes. Measures taken to promote habitat restoration and conservation network reconstruction in earthquake-affected areas should be based on the requirements of giant pandas, not those of humans. © 2013 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.

  3. Detection and Mapping of the September 2017 Mexico Earthquakes Using DAS Fiber-Optic Infrastructure Arrays

    Science.gov (United States)

    Karrenbach, M. H.; Cole, S.; Williams, J. J.; Biondi, B. C.; McMurtry, T.; Martin, E. R.; Yuan, S.

    2017-12-01

    Fiber-optic distributed acoustic sensing (DAS) uses conventional telecom fibers for a wide variety of monitoring purposes. Fiber-optic arrays can be located along pipelines for leak detection; along borders and perimeters to detect and locate intruders; or along railways and roadways to monitor traffic and identify and manage incidents. DAS can also be used to monitor oil and gas reservoirs and to detect earthquakes. Because thousands of such arrays are deployed worldwide and acquire data continuously, they can be a valuable source of data for earthquake detection and location, and could potentially provide important information to earthquake early-warning systems. In this presentation, we show that DAS arrays in Mexico and the United States detected the M8.1 and M7.2 Mexico earthquakes in September 2017. At Stanford University, we have deployed a 2.4 km fiber-optic DAS array in a figure-eight pattern, with 600 channels spaced 4 meters apart. Data have been recorded continuously since September 2016. Over 800 earthquakes from across California have been detected and catalogued. Distant teleseismic events have also been recorded, including the two Mexican earthquakes. In Mexico, fiber-optic arrays attached to pipelines also detected these two events. Because of the length of these arrays and their proximity to the event locations, we can not only detect the earthquakes but also make location estimates, potentially in near real time. In this presentation, we review the data recorded for these two events at Stanford and in Mexico. We compare the waveforms recorded by the DAS arrays to those recorded by traditional earthquake sensor networks. Using the wide coverage provided by the pipeline arrays, we estimate the event locations. Such fiber-optic DAS networks can potentially play a role in earthquake early-warning systems, allowing actions to be taken to minimize the impact of an earthquake on critical infrastructure components. While many such fiber

  4. Application of High Performance Computing to Earthquake Hazard and Disaster Estimation in Urban Area

    Directory of Open Access Journals (Sweden)

    Muneo Hori

    2018-02-01

    Full Text Available Integrated earthquake simulation (IES) is a seamless simulation that analyzes all processes of earthquake hazard and disaster. There are two difficulties in carrying out IES, namely the requirement of large-scale computation and the requirement of numerous analysis models for the structures in an urban area; they are solved by taking advantage of high performance computing (HPC) and by developing a system of automated model construction. HPC is a key element in developing IES, as IES needs to analyze wave propagation and amplification processes in an underground structure, and a high-fidelity model of the underground structure has more than 100 billion degrees of freedom. Examples of IES for the Tokyo Metropolis are presented; the numerical computation is made using the K computer, the supercomputer of Japan. The estimation of earthquake hazard and disaster for a given earthquake scenario is made by the ground motion simulation and the urban area seismic response simulation, respectively, for a target area of 10,000 m × 10,000 m.

  5. The Italian Project S2 - Task 4:Near-fault earthquake ground motion simulation in the Sulmona alluvial basin

    Science.gov (United States)

    Stupazzini, M.; Smerzini, C.; Cauzzi, C.; Faccioli, E.; Galadini, F.; Gori, S.

    2009-04-01

    Recently the Italian Department of Civil Protection (DPC), in cooperation with the Istituto Nazionale di Geofisica e Vulcanologia (INGV), has promoted the 'S2' research project (http://nuovoprogettoesse2.stru.polimi.it/) aimed at the design, testing and application of an open-source code for seismic hazard assessment (SHA). The tool envisaged will likely differ in several important respects from an existing international initiative (OpenSHA, Field et al., 2003). In particular, while "the OpenSHA collaboration model envisions scientists developing their own attenuation relationships and earthquake rupture forecasts, which they will deploy and maintain in their own systems", the main purpose of the S2 project is to provide a flexible computational tool for SHA, primarily suited to the needs of DPC, which are not necessarily scientific needs. Within S2, a crucial issue is to make alternative approaches available to quantify the ground motion, with emphasis on the near-field region. The SHA architecture envisaged will allow for the use of ground motion descriptions other than those yielded by empirical attenuation equations, for instance user-generated motions provided by deterministic source and wave propagation simulations. In this contribution, after a brief presentation of Project S2, we illustrate some preliminary 3D scenario simulations performed in the alluvial basin of Sulmona (Central Italy), as an example of the type of descriptions that can be handled in the future SHA architecture. In detail, we selected some seismogenic sources (from the DISS database), believed to be responsible for a number of destructive historical earthquakes, and derived from them a family of simplified geometrical and mechanical source models spanning a reasonable range of parameters, so that the extent of the main uncertainties can be covered. Then, purely deterministic (for frequencies Journal of Seismology, 1, 237-251. Field, E.H., T.H. Jordan, and C.A. Cornell (2003

  6. Fault roughness and strength heterogeneity control earthquake size and stress drop

    KAUST Repository

    Zielke, Olaf

    2017-01-13

    An earthquake's stress drop is related to the frictional breakdown during sliding and constitutes a fundamental quantity of the rupture process. High-speed laboratory friction experiments that emulate the rupture process imply stress drop values that greatly exceed those commonly reported for natural earthquakes. We hypothesize that this stress drop discrepancy is due to fault-surface roughness and strength heterogeneity: an earthquake's moment release and its recurrence probability depend not only on stress drop and rupture dimension but also on the geometric roughness of the ruptured fault and the location of failing strength asperities along it. Using large-scale numerical simulations of earthquake ruptures under varying roughness and strength conditions, we verify our hypothesis, showing that smoother faults may generate larger earthquakes than rougher faults under identical tectonic loading conditions. We further discuss the potential impact of fault roughness on earthquake recurrence probability. This finding provides important information for seismic hazard analysis.
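
    The "varying roughness" in such simulations is typically prescribed as a self-affine (fractal) fault surface. A hedged one-dimensional sketch of how such a profile can be synthesized spectrally (the Hurst exponent and RMS amplitude below are illustrative, not values from the study):

```python
import numpy as np

# Spectral synthesis of a self-affine fault roughness profile: power-law
# amplitude spectrum with random phases, scaled to a target RMS height.
def rough_profile(n, hurst=0.8, rms=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)                 # spatial wavenumbers
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** -(0.5 + hurst)      # 1-D self-affine amplitude spectrum
    phase = np.exp(2j * np.pi * rng.random(k.size))
    z = np.fft.irfft(amp * phase, n)
    return z * (rms / z.std())             # scale to target RMS roughness

z = rough_profile(4096)                    # larger hurst -> smoother fault
```

A rupture simulation would then impose this profile (scaled to the fault length) as the fault geometry; lowering the RMS amplitude produces the "smoother fault" end-member discussed above.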

  7. 3D Ground-Motion Simulations for Magnitude 9 Earthquakes on the Cascadia Megathrust: Sedimentary Basin Amplification, Rupture Directivity, and Ground-Motion Variability

    Science.gov (United States)

    Frankel, A. D.; Wirth, E. A.; Marafi, N.; Vidale, J. E.; Stephenson, W. J.

    2017-12-01

    We have produced broadband (0-10 Hz) synthetic seismograms for Mw 9 earthquakes on the Cascadia subduction zone by combining synthetics from 3D finite-difference simulations at low frequencies (≤ 1 Hz) and stochastic synthetics at high frequencies (≥ 1 Hz). These synthetic ground motions are being used to evaluate building response, liquefaction, and landslides, as part of the M9 Project of the University of Washington, in collaboration with the U.S. Geological Survey. The kinematic rupture model is composed of high stress drop sub-events with Mw 8, similar to those observed in the Mw 9.0 Tohoku, Japan and Mw 8.8 Maule, Chile earthquakes, superimposed on large background slip with lower slip velocities. The 3D velocity model is based on active and passive-source seismic tomography studies, seismic refraction and reflection surveys, and geologic constraints. The Seattle basin portion of the model has been validated by simulating ground motions from local earthquakes. We have completed 50 3D simulations of Mw 9 earthquakes using a variety of hypocenters, slip distributions, sub-event locations, down-dip limits of rupture, and other parameters. For sites not in deep sedimentary basins, the response spectra of the synthetics for 0.1-6.0 s are similar, on average, to the values from the BC Hydro ground motion prediction equations (GMPE). For periods of 7-10 s, the synthetic response spectra exceed these GMPE, partially due to the shallow dip of the plate interface. We find large amplification factors of 2-5 for response spectra at periods of 1-10 s for locations in the Seattle and Tacoma basins, relative to sites outside the basins. This amplification depends on the direction of incoming waves and rupture directivity. The basin amplification is caused by surface waves generated at basin edges from incoming S-waves, as well as amplification and focusing of S-waves and surface waves by the 3D basin structure. The inter-event standard deviation of response spectral
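
    The low-frequency/high-frequency merge described above can be illustrated with complementary spectral weights around the 1 Hz crossover. This is a toy sketch, not the M9 Project's matched-filter implementation; the signals and the weight shape are assumptions:

```python
import numpy as np

# Merge a deterministic low-frequency synthetic with a stochastic
# high-frequency synthetic using complementary FFT-domain weights.
def hybrid(low, high, dt, fc=1.0):
    n = len(low)
    f = np.fft.rfftfreq(n, dt)
    w = 1.0 / (1.0 + (f / fc) ** 8)        # smooth low-pass weight at fc
    merged = w * np.fft.rfft(low) + (1.0 - w) * np.fft.rfft(high)
    return np.fft.irfft(merged, n)

dt = 0.01
t = np.arange(0.0, 20.0, dt)
low = np.sin(2 * np.pi * 0.3 * t)          # stands in for the 3D FD synthetic
high = 0.2 * np.random.default_rng(0).standard_normal(t.size)  # stochastic part
bb = hybrid(low, high, dt)                 # broadband result
```

Below the crossover the output follows the deterministic trace, above it the stochastic one, mirroring the ≤1 Hz / ≥1 Hz split described in the abstract.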

  8. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    Science.gov (United States)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for the online solution of the matrix-inverse problem. Several techniques are employed to simulate such a neural system: 1) the Kronecker product of matrices is introduced to transform the matrix differential equation (MDE) into a vector differential equation (VDE), so that a standard ordinary differential equation (ODE) is finally obtained; 2) the MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem; 3) in addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and the efficacy of the gradient-based neural network for online constant matrix inversion.
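
    The dynamics being simulated can be sketched in a few lines, assuming the standard gradient model dX/dt = −γAᵀ(AX − I) with a linear activation function and a plain forward-Euler step standing in for ode45 (all parameter values below are illustrative):

```python
import numpy as np

# Gradient-based neural network for online matrix inversion: the state X
# flows down the gradient of E(X) = ||A X - I||_F^2 / 2, whose minimum is
# X = inv(A). Forward Euler replaces MATLAB's ode45 here.
def gnn_invert(A, gamma=10.0, dt=1e-3, steps=20000):
    n = A.shape[0]
    X = np.zeros((n, n))                       # initial network state
    I = np.eye(n)
    for _ in range(steps):
        X -= dt * gamma * (A.T @ (A @ X - I))  # dX/dt = -gamma * A^T (A X - I)
    return X

A = np.array([[2.0, 1.0], [1.0, 3.0]])
X = gnn_invert(A)                              # converges toward inv(A)
```

In vectorized (VDE) form the same flow reads d vec(X)/dt = −γ(I ⊗ AᵀA) vec(X) + γ vec(Aᵀ), which is the Kronecker-product transformation the paper describes.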

  9. Application of Kalman filter in detecting pre-earthquake ionospheric TEC anomaly

    Directory of Open Access Journals (Sweden)

    Zhu Fuying

    2011-05-01

    Full Text Available As an exploratory attempt, the Kalman filter was used to study the anomalous variations of ionospheric Total Electron Content (TEC) before and after the Wenchuan Ms 8.0 earthquake; these TEC data were calculated from the GPS data observed by the Crustal Movement Observation Network of China. The result indicates that this method is reasonable and reliable for detecting TEC anomalies associated with large earthquakes.
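
    As a sketch of the idea (not the authors' implementation), a scalar random-walk Kalman filter can track a TEC series and flag samples whose innovation exceeds a few standard deviations; the noise variances, threshold, and synthetic data below are assumed values:

```python
import numpy as np

# Scalar Kalman filter on a TEC series: random-walk state x_t = x_{t-1} + w,
# measurement z_t = x_t + v. A sample is flagged as anomalous when its
# innovation exceeds k standard deviations of the predicted variance.
def kalman_anomalies(tec, q=0.01, r=0.5, k=3.0):
    x, p = tec[0], 1.0
    flags = []
    for z in tec:
        p += q                         # predict (state itself is unchanged)
        s = p + r                      # innovation variance
        innov = z - x
        flags.append(abs(innov) > k * np.sqrt(s))
        g = p / s                      # Kalman gain
        x += g * innov                 # measurement update
        p *= 1.0 - g
    return np.array(flags)

rng = np.random.default_rng(0)
tec = 20.0 + 0.5 * rng.standard_normal(200)  # synthetic quiet-time TEC (TECU)
tec[120] += 8.0                              # injected anomaly
anomalous = np.flatnonzero(kalman_anomalies(tec))
```

The filter's predicted variance adapts to the background noise, so the same threshold works across quiet and disturbed periods, which is the practical appeal over a fixed-bound test.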

  10. The Puerto Rico Seismic Network Broadcast System: A user friendly GUI to broadcast earthquake messages, to generate shakemaps and to update catalogues

    Science.gov (United States)

    Velez, J.; Huerfano, V.; von Hillebrandt, C.

    2007-12-01

    The Puerto Rico Seismic Network (PRSN) has historically provided locations and magnitudes for earthquakes in the Puerto Rico and Virgin Islands (PRVI) region. PRSN is the reporting authority for the region bounded by latitudes 17.0N to 20.0N and longitudes 63.5W to 69.0W. The main objective of the PRSN is to record, process, analyze, provide information on and research local, regional and teleseismic earthquakes, delivering high-quality data and information to meet the needs of the emergency management, academic and research communities, and the general public. The PRSN runs Earthworm software (Johnson et al., 1995) to acquire and write waveforms to disk for permanent archival. Automatic locations and alerts are generated for events in Puerto Rico, the Intra-America Seas, and the Atlantic by the EarlyBird system (Whitmore and Sokolowski, 2002), which monitors PRSN stations as well as some 40 additional stations run by networks operating in North, Central and South America and other sites in the Caribbean. PRDANIS (Puerto Rico Data Analysis and Information System) software, developed by PRSN, supports manual locations and analyst review of automatic locations of events within the PRSN area of responsibility (AOR), using all the broadband, strong-motion and short-period waveforms. Rapidly available information regarding the geographic distribution of ground shaking, in relation to the population and infrastructure at risk, can assist emergency response communities in efficient and optimized allocation of resources following a large earthquake. The ShakeMap system developed by the USGS provides near-real-time maps of instrumental ground motions and shaking intensity and has proven effective in rapid assessment of the extent of shaking and potential damage after significant earthquakes (Wald, 2004). In Northern and Southern California, the Pacific Northwest, and the states of Utah and Nevada, ShakeMaps are used for emergency planning and response, loss

  11. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichirou; Dershowitz, W.

    2005-01-01

    During Heisei-16, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the Mizunami Underground Research Laboratory (MIU), participation in Task 6 of the AEspoe Task Force on Modeling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU support during H-16 involved updating the H-15 FracMan discrete fracture network (DFN) models for the MIU shaft region and developing improved simulation procedures. Updates to the conceptual model included incorporation of 'Step2' (2004) versions of the deterministic structures, and revision of background fractures to be consistent with conductive structure data from the DH-2 borehole. Golder developed improved simulation procedures for these models through the use of hybrid discrete fracture network (DFN), equivalent porous medium (EPM), and nested DFN/EPM approaches. For each of these models, procedures were documented for the entire modeling process, including model implementation, MMP simulation, and shaft grouting simulation. Golder supported JNC participation in Tasks 6AB, 6D and 6E of the AEspoe Task Force on Modeling of Groundwater Flow and Transport during H-16. For Task 6AB, Golder developed a new technique to evaluate the role of grout in performance-assessment-time-scale transport. For Task 6D, Golder submitted a report of H-15 simulations to SKB. For Task 6E, Golder carried out safety-assessment-time-scale simulations at the block scale, using the Laplace Transform Galerkin method. During H-16, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of the use of site characterization data in safety assessment. This approach will aid in understanding how site characterization can progressively reduce site characterization uncertainty. (author)

  12. Speeding Up Network Simulations Using Discrete Time

    OpenAIRE

    Lucas, Aaron; Armbruster, Benjamin

    2013-01-01

    We develop a way of simulating disease spread in networks faster at the cost of some accuracy. Instead of a discrete event simulation (DES) we use a discrete time simulation. This aggregates events into time periods. We prove a bound on the accuracy attained. We also discuss the choice of step size and do an analytical comparison of the computational costs. Our error bound concept comes from the theory of numerical methods for SDEs and the basic proof structure comes from the theory of numeri...
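
    A minimal illustration of the aggregation idea, assuming a discrete-time SIR process in which all infection and recovery events within a step are applied together (the per-step probabilities play the role of rate × step size; the network and parameters are illustrative):

```python
import random

# One aggregated time step of SIR spread on a network: every infection and
# recovery event occurring during the step is applied at its end, which is
# the discrete-time approximation of a discrete event simulation.
def sir_step(adj, state, p_inf, p_rec, rng):
    new = dict(state)
    for node, s in state.items():
        if s == 'S':
            n_inf = sum(state[nbr] == 'I' for nbr in adj[node])
            # independent exposure from each infectious neighbour
            if rng.random() < 1.0 - (1.0 - p_inf) ** n_inf:
                new[node] = 'I'
        elif s == 'I' and rng.random() < p_rec:
            new[node] = 'R'
    return new

rng = random.Random(1)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # toy contact network
state = {0: 'I', 1: 'S', 2: 'S', 3: 'S'}
for _ in range(50):                                  # 50 aggregated steps
    state = sir_step(adj, state, p_inf=0.3, p_rec=0.1, rng=rng)
```

Shrinking the step (and the per-step probabilities with it) recovers the event-by-event dynamics at higher cost, which is the accuracy/speed trade-off the abstract bounds.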

  13. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

    The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.

  14. Responses to the 2011 Earthquake on Facebook

    DEFF Research Database (Denmark)

    Hansen, Annette Skovsted

    In my investigation of how Japanese ODA policies and practices have engendered global networks, I have frequented the Facebook group of the Association of Overseas Technical Scholarships (AOTS). In the wake of the earthquake on March 11, 2011, many greetings came in from alumni who have within the last

  15. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap below 180° and distance to the first station within 15 km). Errors on velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic experiments, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors on velocity models and on arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
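
    The Equal Differential Time idea can be sketched with a toy grid search: arrival-time differences between station pairs are compared with predicted travel-time differences, so the unknown origin time cancels. A uniform velocity model and noise-free synthetic picks are assumed here, unlike the 3-D models of the study:

```python
import itertools
import math

# Toy Equal Differential Time (EDT) location: grid-search the point whose
# predicted travel-time differences best match the observed pick
# differences for every station pair (the origin time cancels out).
V = 5.0                                       # km/s, assumed uniform velocity
stations = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (30.0, 30.0)]
true_src = (12.0, 21.0)

def tt(src, sta):
    return math.dist(src, sta) / V            # straight-ray travel time

picks = [tt(true_src, s) for s in stations]   # noise-free synthetic picks

def edt_misfit(src):
    return sum(
        abs((picks[i] - picks[j]) - (tt(src, stations[i]) - tt(src, stations[j])))
        for i, j in itertools.combinations(range(len(stations)), 2)
    )

grid = [(x, y) for x in range(31) for y in range(31)]   # 1 km grid
best = min(grid, key=edt_misfit)              # → (12, 21)
```

NLLoc evaluates an EDT-based likelihood over a full 3-D grid with picks weighted by their uncertainties; the outlier robustness mentioned above comes from each station pair contributing independently to the misfit.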

  16. Data Delivery Latency Improvements And First Steps Towards The Distributed Computing Of The Caltech/USGS Southern California Seismic Network Earthquake Early Warning System

    Science.gov (United States)

    Stubailo, I.; Watkins, M.; Devora, A.; Bhadha, R. J.; Hauksson, E.; Thomas, V. I.

    2016-12-01

    The USGS/Caltech Southern California Seismic Network (SCSN) is a modern digital ground motion seismic network. It develops and maintains Earthquake Early Warning (EEW) data collection and delivery systems in southern California, as well as real-time EEW algorithms. Recently, Behr et al. (SRL, 2016) analyzed data from several regional seismic networks deployed around the globe and showed that the SCSN was the network with the smallest data communication delays, or latency. Since then, we have further reduced the telemetry delays for many of the 330 current sites. The latency has been reduced on average from 2-6 s to 0.4 s by tuning the datalogger parameters and/or deploying software upgrades. Recognizing latency as one of the crucial parameters in EEW, we have started archiving the per-packet latencies in mseed format for all participating sites, much as is traditionally done for the seismic waveform data. The archived latency values enable us to understand and document long-term changes in the performance of the telemetry links. We can also retroactively investigate how latent the waveform data were during a specific event or time period. In addition, the near-real-time latency values are useful for monitoring and displaying real-time station latency, in particular to compare different telemetry technologies. A future step to reduce the latency is to deploy the algorithms on the dataloggers at the seismic stations and transmit either the final solutions or intermediate parameters to a central processing center. To implement this approach, we are developing a stand-alone version of the OnSite algorithm to run on the dataloggers in the field. This will increase the resiliency of the SCSN to potential telemetry restrictions in the immediate aftermath of a large earthquake, either by allowing local alarming by a single station or by permitting transmission of lightweight parametric information rather than continuous
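
    The per-packet latency being archived is simply the difference between when a packet arrives at the data center and the time of the last sample it contains; a stdlib sketch with illustrative timestamps:

```python
from datetime import datetime, timedelta, timezone

# Per-packet latency bookkeeping in the spirit described above: latency is
# the wall-clock arrival time minus the time of the last data sample the
# packet contains. All timestamps here are illustrative.
def packet_latency(last_sample_time, arrival_time):
    return (arrival_time - last_sample_time).total_seconds()

t0 = datetime(2016, 7, 1, 12, 0, 0, tzinfo=timezone.utc)
pkt_end = t0 + timedelta(seconds=1.0)   # time of the last sample in the packet
arrival = t0 + timedelta(seconds=1.4)   # when telemetry delivered the packet
lat = packet_latency(pkt_end, arrival)  # 0.4 s, matching the tuned average above
```

Archiving one such value per packet, as a time series alongside the waveforms, is what makes the retroactive per-event latency analysis described above possible.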

  17. Computer Networks E-learning Based on Interactive Simulations and SCORM

    Directory of Open Access Journals (Sweden)

    Francisco Andrés Candelas

    2011-05-01

    Full Text Available This paper introduces a new set of compact interactive simulations developed for the constructive learning of computer networks concepts. These simulations, which compose a virtual laboratory implemented as portable Java applets, have been created by combining EJS (Easy Java Simulations) with the KivaNS API. Furthermore, this work evaluates and measures the skills and motivation level acquired by the students when these simulations are combined with Moodle and SCORM (Sharable Content Object Reference Model) documents. This study has been developed to improve and stimulate autonomous constructive learning, in addition to providing timetable flexibility for a Computer Networks subject.

  18. Hardware-software system for simulating and analyzing earthquakes applied to civil structures

    Directory of Open Access Journals (Sweden)

    J. P. Amezquita-Sanchez

    2012-01-01

    Full Text Available The occurrence of recent strong earthquakes, the incessant worldwide movement of tectonic plates, and the continuous ambient vibrations caused by traffic and wind have increased the interest of researchers in improving the capacity of energy dissipation to avoid damage to civil structures. Experimental testing of structural systems is essential for understanding physical behaviors and building appropriate analytic models, in order to expose difficulties that may not have been considered in analytical studies. This paper presents a hardware-software system for simultaneously exciting, monitoring and analyzing a structure under earthquake signals and other types of signals in real time. The effectiveness of the proposed system has been validated by experimental case studies and has been found to be a useful tool in the analysis of earthquake effects on structures.

  19. Performance of Irikura's Recipe Rupture Model Generator in Earthquake Ground Motion Simulations as Implemented in the Graves and Pitarka Hybrid Approach.

    Energy Technology Data Exchange (ETDEWEB)

    Pitarka, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-22

    We analyzed the performance of the Irikura and Miyake (2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (2016) (GP2016). The level of simulated ground motions for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground-motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level between the two approaches occurs in the period band 1-3 s, where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s band that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study.
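
    The response-spectrum comparisons above reduce each simulated motion to the peak response of damped single-degree-of-freedom oscillators. A compact sketch of that reduction (explicit central-difference integration, 5% damping; the input motion is synthetic, not a simulated record):

```python
import math

# Pseudo-spectral acceleration (PSA) of a damped SDOF oscillator with unit
# mass, integrated with explicit central differences.
def psa(acc, dt, period, zeta=0.05):
    wn = 2.0 * math.pi / period
    k, c = wn * wn, 2.0 * zeta * wn
    a0 = 1.0 / dt**2 + c / (2.0 * dt)
    a1 = 1.0 / dt**2 - c / (2.0 * dt)
    u_prev = u = umax = 0.0
    for ag in acc:                    # ground acceleration samples
        u_next = (-ag + (2.0 / dt**2 - k) * u - a1 * u_prev) / a0
        u_prev, u = u, u_next
        umax = max(umax, abs(u))
    return wn * wn * umax             # PSA = wn^2 * peak relative displacement

dt = 0.005
acc = [math.sin(2.0 * math.pi * t * dt) for t in range(8000)]  # 1 Hz input, 40 s
# a 1 s oscillator driven at resonance amplifies toward 1/(2*zeta) = 10
```

Evaluating psa over a sweep of periods gives the response spectrum compared against the GMPE medians; the inter-event variability discussed above is the spread of these spectra across scenario realizations.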

  20. The Key Role of Eyewitnesses in Rapid Impact Assessment of Global Earthquake

    Science.gov (United States)

    Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.; Etivant, C.; Frobert, L.; Godey, S.

    2014-12-01

    Uncertainties in rapid impact assessments of global earthquakes are intrinsically large because they rely on three main elements (ground motion prediction models, building stock inventory, and the related vulnerability) whose values and/or spatial variations are poorly constrained. Furthermore, variations of hypocentral location and magnitude within their respective uncertainty domains can lead to significantly different shaking levels for centers of population and change the scope of the disaster. We present the strategy and methods implemented at the Euro-Med Seismological Centre (EMSC) to rapidly collect in-situ observations on earthquake effects from eyewitnesses in order to reduce the uncertainties of rapid earthquake impact assessment. It comprises crowdsourced information (online questionnaires, pictures) as well as information derived from real-time analysis of web traffic (the flashsourcing technique), and more recently the deployment of low-cost Quake Catcher Network (QCN) sensors. We underline the importance of merging the results of different methods to improve the performance and reliability of the collected data. We try to better understand and respond to public demands and expectations after earthquakes through improved information services and diversification of information tools (social networks, smartphone apps, browser add-ons…), which, in turn, drive more eyewitnesses to our services and improve data collection. We will notably present our LastQuake Twitter feed (Quakebot) and smartphone applications (iOS and Android), which only report earthquakes that matter to the public and authorities, i.e., felt and damaging earthquakes identified thanks to citizen-generated information.

  1. The design of a network emulation and simulation laboratory

    CSIR Research Space (South Africa)

    Von Solms, S

    2015-07-01

    Full Text Available The development of the Network Emulation and Simulation Laboratory is motivated by the drive to contribute to the enhancement of the security and resilience of South Africa's critical information infrastructure. The goal of the Network Emulation...

  2. Earthquake Preparedness and Education: A Collective Impact Approach to Improving Awareness and Resiliency

    Science.gov (United States)

    Benthien, M. L.; Wood, M. M.; Ballmann, J. E.; DeGroot, R. M.

    2017-12-01

    The Southern California Earthquake Center (SCEC), headquartered at the University of Southern California, is a collaboration of more than 1000 scientists and students from 70+ institutions. SCEC's Communication, Education, and Outreach (CEO) program translates earthquake science into products and activities in order to increase scientific literacy, develop a diverse scientific workforce, and reduce earthquake risk to life and property. SCEC CEO staff coordinate these efforts through partnership collaborations it has established to engage subject matter experts, reduce duplication of effort, and achieve greater results. Several of SCEC's collaborative networks began within Southern California and have since grown statewide (Earthquake Country Alliance, a public-private-grassroots partnership), national ("EPIcenter" Network of museums, parks, libraries, etc.), and international (Great ShakeOut Earthquake Drills with millions of participants each year). These networks have benefitted greatly from partnerships with national (FEMA), state, and local emergency managers. Other activities leverage SCEC's networks in new ways and with national earth science organizations, such as the EarthConnections Program (with IRIS, NAGT, and many others), Quake Catcher Network (with IRIS) and the GeoHazards Messaging Collaboratory (with IRIS, UNAVCO, and USGS). Each of these partnerships share a commitment to service, collaborative development, and the application of research (including social science theory for motivating preparedness behaviors). SCEC CEO is developing new evaluative structures and adapting the Collective Impact framework to better understand what has worked well or what can be improved, according to the framework's five key elements: create a common agenda; share common indicators and measurement; engage diverse stakeholders to coordinate mutually reinforcing activities; initiate continuous communication; and provide "backbone" support. 
This presentation will provide

  3. ESIM_DSN Web-Enabled Distributed Simulation Network

    Science.gov (United States)

    Bedrossian, Nazareth; Novotny, John

    2002-01-01

    In this paper, the eSimDSN approach to achieving distributed simulation capability using the Internet is presented. With this approach, a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke network topology. It provides the ability to dynamically link simulation subsystem models to different computers, as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSimDSN demonstrator can be accessed at http://www.jsc.draper.com/esim, which hosts various examples of Web-enabled simulations.
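
    The hub-and-spoke arrangement can be illustrated with a toy in-process router, where each subsystem registers a mailbox with the hub and all traffic passes through it. Everything here (class and subsystem names, message format) is hypothetical and merely mimics the topology, not the eSimDSN protocol.

```python
import queue

class Hub:
    """Toy hub-and-spoke message router: every subsystem message passes
    through the central hub, as in the topology described above."""
    def __init__(self):
        self.spokes = {}

    def register(self, name):
        """Attach a subsystem and hand back its inbox queue."""
        self.spokes[name] = queue.Queue()
        return self.spokes[name]

    def send(self, src, dst, payload):
        """Route a message from one spoke to another via the hub."""
        self.spokes[dst].put((src, payload))

hub = Hub()
gnc_inbox = hub.register("gnc")      # guidance subsystem (hypothetical)
power_inbox = hub.register("power")  # power subsystem (hypothetical)
hub.send("gnc", "power", {"bus_load_kw": 1.2})
print(power_inbox.get_nowait())  # ('gnc', {'bus_load_kw': 1.2})
```

    A real deployment would replace the in-process queues with network transport, but the routing logic — spokes never talk directly — is the same.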

  4. Earthquake Education in Prime Time

    Science.gov (United States)

    de Groot, R.; Abbott, P.; Benthien, M.

    2004-12-01

    Since 2001, the Southern California Earthquake Center (SCEC) has collaborated on several video production projects that feature important topics related to earthquake science, engineering, and preparedness. These projects have also fostered many fruitful and sustained partnerships with a variety of organizations that have a stake in hazard education and preparedness. The Seismic Sleuths educational video first appeared in the spring season 2001 on Discovery Channel's Assignment Discovery. Seismic Sleuths is based on a highly successful curriculum package developed jointly by the American Geophysical Union and The Department of Homeland Security Federal Emergency Management Agency. The California Earthquake Authority (CEA) and the Institute for Business and Home Safety supported the video project. Summer Productions, a company with a reputation for quality science programming, produced the Seismic Sleuths program in close partnership with scientists, engineers, and preparedness experts. The program has aired on the National Geographic Channel as recently as Fall 2004. Currently, SCEC is collaborating with Pat Abbott, a geology professor at San Diego State University (SDSU) on the video project Written In Stone: Earthquake Country - Los Angeles. Partners on this project include the California Seismic Safety Commission, SDSU, SCEC, CEA, and the Insurance Information Network of California. This video incorporates live-action demonstrations, vivid animations, and a compelling host (Abbott) to tell the story about earthquakes in the Los Angeles region. The Written in Stone team has also developed a comprehensive educator package that includes the video, maps, lesson plans, and other supporting materials. We will present the process that facilitates the creation of visually effective, factually accurate, and entertaining video programs. 
We acknowledge the need to have a broad understanding of the literature related to communication, media studies, science education, and

  5. Comparative Analysis of Disruption Tolerant Network Routing Simulations in the ONE and ns-3

    Science.gov (United States)

    2017-12-01

    Naval Postgraduate School thesis, Monterey, California (03-23-2016 to 12-15-2017). The added levels of simulation increase the processing required by a simulation; ns-3's simulation of other layers of the network stack permits…

  6. Earthquake hazard in Northeast India – A seismic microzonation ...

    Indian Academy of Sciences (India)

    microzonation approach with typical case studies from .... the other hand, Guwahati city represents a case of well-formed basin with ... earthquake prone regions towards developing its ... tonic network and the observed seismicity has been.

  7. Developing an Agent-Based Simulation System for Post-Earthquake Operations in Uncertainty Conditions: A Proposed Method for Collaboration among Agents

    Directory of Open Access Journals (Sweden)

    Navid Hooshangi

    2018-01-01

    Full Text Available Agent-based modeling is a promising approach for developing simulation tools for natural hazards in different areas, such as urban search and rescue (USAR) operations. The present study aimed to develop a dynamic agent-based simulation model of post-earthquake USAR operations using geospatial information systems (GIS) and multi-agent systems (MAS). We also propose an approach for dynamic task allocation and for establishing collaboration among agents, based on the contract net protocol (CNP) and the interval-based Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), which considers the uncertainty in natural hazards information during agents' decision-making. The decision-making weights were calculated by the analytic hierarchy process (AHP). In order to implement the system, the earthquake environment was simulated and the building damage and the number of injuries were calculated for Tehran's District 3: 23%, 37%, 24% and 16% of buildings were in the slight, moderate, extensive and complete vulnerability classes, respectively. The number of injured persons was calculated to be 17,238. Numerical results in 27 scenarios showed that the proposed method is more accurate than the CNP method in terms of USAR operational time (at least a 13% decrease) and the number of human fatalities (at least a 9% decrease). In an interval uncertainty analysis of the proposed simulated system, the lower and upper bounds of the uncertain responses are evaluated. The overall results showed that considering uncertainty in task allocation can be highly advantageous in a disaster environment. Such systems can be used to manage and prepare for natural hazards.
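
    At the core of the task-allocation step is a TOPSIS ranking of candidate assignments. The sketch below implements plain (crisp) TOPSIS with benefit criteria only, as a simplified stand-in for the paper's interval-based variant; the decision matrix and weights are made up for illustration.

```python
import math

def topsis(matrix, weights):
    """Plain TOPSIS ranking: rows are alternatives (e.g., candidate rescue
    task assignments), columns are benefit criteria. Returns the closeness
    of each alternative to the ideal solution (1.0 best, 0.0 worst)."""
    # Vector-normalize each column, then apply the (e.g., AHP-derived) weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    v = [[w * x / n for x, w, n in zip(row, weights, norms)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [dist(row, anti) / (dist(row, ideal) + dist(row, anti) or 1.0)
            for row in v]

# Three candidate allocations scored on two benefit criteria.
scores = topsis([[9, 8], [5, 5], [1, 2]], weights=[0.6, 0.4])
print(scores.index(max(scores)))  # 0: the dominating alternative wins
```

    The interval-based variant used in the paper replaces each crisp score with a lower/upper bound, which is what lets the agents carry uncertainty through the ranking.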

  8. Seismomagnetic effects from the long-awaited 28 September 2004 M 6.0 parkfield earthquake

    Science.gov (United States)

    Johnston, M.J.S.; Sasai, Y.; Egbert, G.D.; Mueller, R.J.

    2006-01-01

    Precise measurements of local magnetic fields have been obtained with a differentially connected array of seven synchronized proton magnetometers located along 60 km of the locked-to-creeping transition region of the San Andreas fault at Parkfield, California, since 1976. The M 6.0 Parkfield earthquake on 28 September 2004 occurred within this array and generated coseismic magnetic field changes of between 0.2 and 0.5 nT at five sites in the network. No preseismic magnetic field changes exceeding background noise levels are apparent in the magnetic data during the month, week, and days before the earthquake (nor are any expected, in light of the absence of measurable precursory deformation, seismicity, or pore pressure changes). Observations of electric and magnetic fields from 0.01 to 20 Hz are also made at one site near the end of the earthquake rupture and corrected for common-mode signals from the ionosphere/magnetosphere using a second site some 115 km to the northwest along the fault. These magnetic data show no indications of unusual noise before the earthquake in the ULF band (0.01-20 Hz), such as has been suggested to have preceded the 1989 ML 7.1 Loma Prieta earthquake. Nor do we see electric field changes similar to those suggested, from data in Greece, to occur before earthquakes of this magnitude. Uniform and variable slip piezomagnetic models of the earthquake, derived from strain, displacement, and seismic data, generate magnetic field perturbations that are consistent with those observed by the magnetometer array. A higher rate of longer-term magnetic field change, consistent with increased loading in the region, is apparent since 1993. This accompanied an increased rate of secular shear strain observed on a two-color EDM network and a small network of borehole tensor strainmeters, and increased seismicity dominated by three M 4.5-5 earthquakes roughly a year apart in 1992, 1993, and 1994.
Models incorporating all of these data indicate increased slip at depth in the region.

  9. FDM simulation of earthquakes off western Kyushu, Japan, using a land-ocean unified 3D structure model

    Science.gov (United States)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Hara, Tatsuhiko

    2017-07-01

    Seismic activity occurred off western Kyushu, Japan, at the northern end of the Okinawa Trough on May 6, 2016 (14:11 JST), 22 days after the onset of the 2016 Kumamoto earthquake sequence. The area is adjacent to the Beppu-Shimabara graben, where the 2016 Kumamoto earthquake sequence occurred. In the area off western Kyushu, an M7.1 earthquake also occurred on November 14, 2015 (5:51 JST), and a tsunami with a height of 0.3 m was observed. In order to better understand this seismic activity and these tsunamis, it is necessary to study the sources of, and strong motions due to, earthquakes in the area off western Kyushu. For such studies, validation of synthetic waveforms is important because of the presence of the oceanic water layer and thick sediments in the source area. We show the validation results for synthetic waveforms through nonlinear inversion analyses of small earthquakes (M 5). We use a land-ocean unified 3D structure model, the 3D HOT finite-difference method ("HOT" stands for Heterogeneity, Ocean layer and Topography), and multi-GPU (graphics processing unit) acceleration to simulate the wave propagation. We estimate the first-motion augmented moment tensor (FAMT) solution based on both the long-period surface waves and short-period body waves. The FAMT solutions systematically shift landward by about 13 km, on average, from the epicenters determined by the Japan Meteorological Agency. The synthetics provide good reproductions of the observed full waveforms with periods of 10 s or longer. On the other hand, for waveforms with shorter periods (down to 4 s), the later surface waves are not reproduced well, while the first parts of the waveforms (comprising P- and S-waves) are reproduced to some extent. These results indicate that the current 3D structure model around Kyushu is effective for generating full waveforms, including surface waves with periods of about 10 s or longer. Based on these findings, we analyze the 2015 M7.1 event using the cross
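
    The finite-difference machinery behind such simulations can be illustrated in one dimension. The sketch below solves the 1-D scalar wave equation with a second-order explicit scheme; the grid sizes, wave speed, and point source are arbitrary choices, and it omits everything that makes the paper's 3D HOT-FDM realistic (heterogeneity, the ocean layer, topography).

```python
def fd_wave_1d(nx=200, nt=400, c=3000.0, dx=50.0, dt=0.005):
    """Second-order finite-difference solve of u_tt = c^2 u_xx with fixed
    (Dirichlet) ends and an impulsive point source at the grid center."""
    assert c * dt / dx <= 1.0, "CFL stability condition"
    u_prev, u = [0.0] * nx, [0.0] * nx
    coef = (c * dt / dx) ** 2
    u[nx // 2] = 1.0  # impulsive source
    for _ in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            # Standard three-point update: 2u - u_prev + C^2 * laplacian.
            u_next[i] = (2 * u[i] - u_prev[i]
                         + coef * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

wavefield = fd_wave_1d()
print(max(abs(x) for x in wavefield) < 10.0)  # bounded: CFL is satisfied
```

    With c*dt/dx = 0.3 the scheme is stable; violating the CFL condition would make the wavefield blow up, which is why the assertion guards the time step.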

  10. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
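
    The Gillespie algorithm mentioned above is short enough to sketch for a one-species birth-death model of mRNA (production at a constant rate, degradation proportional to copy number) — the kind of small-copy-number process the discrete part of BioNetS simulates. The rate constants and model are illustrative, not taken from BioNetS.

```python
import random

def gillespie_birth_death(k_tx=5.0, k_deg=0.5, t_end=20.0, seed=1):
    """Gillespie stochastic simulation of: 0 -> mRNA (rate k_tx),
    mRNA -> 0 (rate k_deg * m). Returns the (time, copy number) path."""
    rng = random.Random(seed)
    t, m, trajectory = 0.0, 0, [(0.0, 0)]
    while t < t_end:
        rates = [k_tx, k_deg * m]
        total = sum(rates)
        t += rng.expovariate(total)          # waiting time to next reaction
        if rng.random() * total < rates[0]:  # pick which reaction fires
            m += 1
        else:
            m -= 1
        trajectory.append((t, m))
    return trajectory

traj = gillespie_birth_death()
# Copy numbers fluctuate around the steady-state mean k_tx / k_deg = 10.
print(all(m >= 0 for _, m in traj))
```

    The chemical Langevin treatment used for the continuous species replaces this exact event-by-event loop with a stochastic differential equation, which is much faster when copy numbers are large.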

  11. Distributed dynamic simulations of networked control and building performance applications.

    Science.gov (United States)

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum possible energy consumption; this arrangement is generally referred to as the Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment capable of representing the BACS architecture in simulation by run-time coupling of two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design.

  12. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    Science.gov (United States)

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanisms of chemical and biological processes in solution or in enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency, and their accuracy can be improved with a correction to reach the ab initio QM/MM level; the efficiency is then determined by the computational cost of the ab initio calculations needed for the correction. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of one to two orders of magnitude. This demonstrates that the neural network method combined with semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
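
    The underlying delta-learning idea — fit the difference between expensive and cheap energies along the path, then add the fitted correction to the cheap result — can be sketched with toy one-dimensional "energies". A quadratic least-squares fit stands in for the Behler–Parrinello-style neural network, and both energy functions below are invented for illustration.

```python
def fit_quadratic(xs, ys):
    """Least squares for y ~ a + b*x + c*x^2 via the 3x3 normal equations,
    solved with Gaussian elimination (no external libraries)."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                       # forward elimination
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            A[row] = [a - f * p for a, p in zip(A[row], A[col])]
            b[row] -= f * b[col]
    coef = [0.0] * 3
    for row in (2, 1, 0):                      # back substitution
        coef[row] = (b[row] - sum(A[row][c] * coef[c]
                                  for c in range(row + 1, 3))) / A[row][row]
    return coef

def semiempirical(x):   # hypothetical cheap energy along a reaction path
    return x * x
def ab_initio(x):       # hypothetical expensive reference energy
    return x * x + 0.3 * x + 0.1

# Learn the correction (ab initio minus semiempirical) from path samples,
# then predict the ab initio energy at a new configuration cheaply.
path = [i / 10 for i in range(11)]
coef = fit_quadratic(path, [ab_initio(x) - semiempirical(x) for x in path])
corrected = semiempirical(0.55) + sum(c * 0.55 ** i for i, c in enumerate(coef))
print(abs(corrected - ab_initio(0.55)) < 1e-6)  # True
```

    The paper's method works the same way conceptually, but the regressor is a neural network over atomic-environment descriptors rather than a polynomial in one coordinate.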

  13. a Collaborative Cyberinfrastructure for Earthquake Seismology

    Science.gov (United States)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R.

    2013-12-01

    One of the challenges in real-time seismology is the prediction of an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth, or magnitude estimates, and/or misevaluation of propagation characteristics, site effects and building vulnerability, can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure takes advantage of the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second-ranked global earthquake information website within tens of seconds of the occurrence of a felt event. It includes classical crowdsourcing tools, such as online questionnaires available in 39 languages and tools to collect geolocated pictures. It also comprises information derived from the real-time analysis of the traffic on the EMSC website, a method named flashsourcing: in the case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just been through. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and in some cases map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server, in collaboration with Stanford University and the USGS, to collect ground motion records made by volunteers, and we are also involved in a project to detect earthquakes with ground motion sensors in smartphones. Strategies have been developed for several social media (Facebook, Twitter...), not only to distribute earthquake information but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this

  14. Dynamic rupture scenarios from Sumatra to Iceland - High-resolution earthquake source physics on natural fault systems

    Science.gov (United States)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie

    2017-04-01

    Capturing the observed complexity of earthquake sources in dynamic rupture simulations may require: non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure. All of these factors have been independently shown to alter dynamic rupture behavior and thus possibly influence the degree of realism attainable via simulated ground motions. In this presentation we will show examples of high-resolution earthquake scenarios, e.g. based on the 2004 Sumatra-Andaman earthquake, the 1994 Northridge earthquake, and a potential rupture of the Husavik-Flatey fault system in Northern Iceland. The simulations combine a multitude of representations of source complexity at the necessary spatio-temporal resolution, enabled by excellent scalability on modern HPC systems. Such simulations allow an analysis of the dominant factors impacting earthquake source physics and ground motions given distinct tectonic settings or distinct focuses of seismic hazard assessment. Across all simulations, we find that fault geometry, together with the regional background stress state, provides a first-order influence on source dynamics and the emanated seismic wave field. The dynamic rupture models are performed with SeisSol, a software package based on an ADER-Discontinuous Galerkin scheme for solving the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. Use of unstructured tetrahedral meshes allows for a realistic representation of the non-planar fault geometry, subsurface structure and bathymetry. The results presented highlight the fact that modern numerical methods are essential to further our understanding of earthquake source physics and to complement both physics-based ground motion research and empirical approaches in seismic hazard analysis.

  15. Simulation studies of a wide area health care network.

    Science.gov (United States)

    McDaniel, J. G.

    1994-01-01

    There is an increasing number of efforts to install wide area health care networks. Some of these networks are being built to support several applications over a wide user base consisting primarily of medical practices, hospitals, pharmacies, medical laboratories, payors, and suppliers. Although on-line, multi-media telecommunication is desirable for some purposes such as cardiac monitoring, store-and-forward messaging is adequate for many common, high-volume applications. Laboratory test results and payment claims, for example, can be distributed using electronic messaging networks. Several network prototypes have been constructed to determine the technical problems and to assess the effectiveness of electronic messaging in wide area health care networks. Our project, Health Link, developed prototype software that was able to use the public switched telephone network to exchange messages automatically, reliably and securely. The network could be configured to accommodate the many different traffic patterns and cost constraints of its users. Discrete event simulations were performed on several network models. Canonical star and mesh networks composed of nodes operating at steady state under equal loads were modeled. Both topologies were found to support the throughput of a generic wide area health care network. The mean message delivery time of the mesh network was found to be less than that of the star network. Further simulations were conducted for a realistic large-scale health care network consisting of 1,553 doctors, 26 hospitals, four medical labs, one provincial lab and one insurer. Two network topologies were investigated: one using predominantly peer-to-peer communication, the other using client-server communication. PMID:7949966
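
    The star-versus-mesh result can be reproduced qualitatively with a very small model. Below, delivery time is simply proportional to hop count (star traffic relays through a central hub, mesh traffic is direct); node counts, latency, and message counts are arbitrary, and this is far cruder than the discrete event simulations described above.

```python
import random

def mean_delivery_time(n_nodes, topology, per_hop=1.0, n_msgs=1000, seed=7):
    """Toy store-and-forward model. In the star, node 0 is the hub and
    every message between two non-hub nodes takes two hops; in the mesh,
    every message is delivered directly in one hop."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_msgs):
        src, dst = rng.sample(range(n_nodes), 2)
        if topology == "mesh":
            hops = 1
        else:  # star: one hop if the hub itself is an endpoint, else two
            hops = 1 if 0 in (src, dst) else 2
        total += hops * per_hop
    return total / n_msgs

star = mean_delivery_time(10, "star")
mesh = mean_delivery_time(10, "mesh")
print(mesh < star)  # True: matches the star-vs-mesh finding above
```

    Even this crude model captures why the mesh wins on mean delivery time: most star messages pay for an extra relay through the hub.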

  16. MyShake: Building a smartphone seismic network

    Science.gov (United States)

    Kong, Q.; Allen, R. M.; Schreier, L.

    2014-12-01

    We are in the process of building a smartphone seismic network. In order to build this network, we performed shake table tests to evaluate the performance of smartphones as seismic recording instruments. We also conducted noise floor tests to find the minimum earthquake signal we can record using smartphones. We added phone noise to strong motion data from past earthquakes and used these as an analogue dataset to test algorithms and to understand the differences between using the smartphone network and a traditional seismic network. We also built a prototype system to trigger the smartphones from our server to record signals, which can be sent back to the server in near real time. The phones can also be triggered by our algorithm running locally on the phone: if an earthquake triggers the phones, the signals they record are sent back to the server. We expect to turn the prototype system into a real smartphone seismic network that works as a supplement to the existing traditional seismic networks.
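
    A common way to trigger on an earthquake signal in noisy records like these is a short-term/long-term average (STA/LTA) ratio. The sketch below is a generic illustration — the window lengths, threshold, and synthetic record are invented, and the abstract does not specify MyShake's actual on-phone trigger algorithm.

```python
def sta_lta(signal, n_sta=10, n_lta=100):
    """Trailing short-term / long-term average ratio on |amplitude|.
    The ratio stays near 1 on steady noise and jumps at a strong arrival."""
    ratios = []
    for i in range(n_lta, len(signal)):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta)
    return ratios

# Synthetic record: low phone noise, then a strong-motion arrival at n=500.
record = [0.1] * 500 + [2.0] * 100
ratios = sta_lta(record)
# First sample index where the ratio crosses the (arbitrary) threshold.
trigger = next(i + 100 for i, r in enumerate(ratios) if r > 4.0)
print(trigger)  # triggers a few samples after the onset at index 500
```

    The short window reacts quickly to the arrival while the long window still remembers the quiet noise floor, which is what makes the ratio a usable trigger even when absolute amplitudes vary from phone to phone.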

  17. Hypocentre estimation of induced earthquakes in Groningen

    NARCIS (Netherlands)

    Spetzler, J.; Dost, Bernard

    2017-01-01

    Induced earthquakes due to gas production have taken place in the province of Groningen in the northeast of The Netherlands since 1986. In the first years of seismicity, a sparse seismological network with large station distances from the seismogenic area in Groningen was used. The location of

  18. Earthquake risk assessment of Alexandria, Egypt

    Science.gov (United States)

    Badawy, Ahmed; Gaber, Hanan; Ibrahim, Hamza

    2015-01-01

    Throughout historical and recent times, Alexandria has suffered great damage due to earthquakes from both near- and far-field sources. Sometimes, the sources of such damage are not well known. During the twentieth century, the city was shaken by several earthquakes generated from inland dislocations (e.g., 29 Apr. 1974, 12 Oct. 1992, and 28 Dec. 1999) and from the African continental margin (e.g., 12 Sept. 1955 and 28 May 1998). Therefore, this study estimates the earthquake ground shaking and the consequent impacts in Alexandria on the basis of two earthquake scenarios. The simulation results show that Alexandria is affected by both earthquake scenarios in relatively the same manner, although the number of casualties in the first scenario (inland dislocation) is twice as large as in the second one (African continental margin). Running the first scenario, an estimated 2.27 % of Alexandria's total constructions (12.9 million, 2006 Census) will be affected, with injuries to 0.19 % and deaths of 0.01 % of the total population (4.1 million, 2006 Census). The earthquake risk profile reveals that three districts (Al-Montazah, Al-Amriya, and Shark) lie at high seismic risk, two districts (Gharb and Wasat) at moderate, and two districts (Al-Gomrok and Burg El-Arab) at low seismic risk levels. Moreover, the building damage estimates show that Al-Montazah is the most vulnerable district, with 73 % of the expected damage reported there. The undertaken analysis shows that the Alexandria urban area faces high risk. Informal areas and deteriorating buildings and infrastructure make the city particularly vulnerable to earthquake risks. For instance, more than 90 % of the estimated earthquake risks (building damage) are concentrated in the most densely populated districts (Al-Montazah, Al-Amriya, and Shark). Moreover, about 75 % of casualties are in the same districts.
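
    The absolute numbers implied by these percentages are easy to check; the computation below just multiplies the reported rates by the census totals quoted in the abstract.

```python
# Back-of-the-envelope check of the first-scenario figures reported above.
buildings_total = 12_900_000   # total constructions, 2006 Census
population_total = 4_100_000   # total population, 2006 Census

affected_buildings = 0.0227 * buildings_total   # 2.27 % affected
injuries = 0.0019 * population_total            # 0.19 % injured
deaths = 0.0001 * population_total              # 0.01 % deaths

print(round(affected_buildings), round(injuries), round(deaths))
# -> 292830 7790 410
```

    So the two scenarios translate into roughly 290,000 affected buildings and several thousand injuries, which is the scale of event the district-level risk profile is describing.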

  19. Using Earthquake Analysis to Expand the Oklahoma Fault Database

    Science.gov (United States)

    Chang, J. C.; Evans, S. C.; Walter, J. I.

    2017-12-01

    The Oklahoma Geological Survey (OGS) is compiling a comprehensive Oklahoma Fault Database (OFD), which includes faults mapped in OGS publications, university thesis maps, and industry-contributed shapefiles. The OFD includes nearly 20,000 fault segments, but the work is far from complete. The OGS plans on incorporating other sources of data into the OFD, such as new faults from earthquake sequence analyses, geologic field mapping, active-source seismic surveys, and potential fields modeling. A comparison of Oklahoma seismicity and the OFD reveals that earthquakes in the state appear to nucleate on mostly unmapped or unknown faults. Here, we present faults derived from earthquake sequence analyses. From 2015 to present, there has been a five-fold increase in real-time seismic stations in Oklahoma, which has greatly expanded and densified the state's seismic network. The current seismic network not only improves our threshold for locating weaker earthquakes, but also allows us to better constrain focal plane solutions (FPS) from first-motion analyses. Using nodal planes from the FPS, HypoDD relocation, and historic seismic data, we can elucidate these previously unmapped seismogenic faults. As the OFD is a primary resource for various scientific investigations, the inclusion of seismogenic faults improves further derivative studies, particularly with respect to seismic hazards. Our primary focus is on four areas of interest, which have had M5+ earthquakes in recent Oklahoma history: Pawnee (M5.8), Prague (M5.7), Fairview (M5.1), and Cushing (M5.0). Subsequent areas of interest will include seismically active, data-rich areas, such as the central and north-central parts of the state.

  20. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  1. An overview of a possible approach to calculate rock movements due to earthquakes at Finnish nuclear waste repository sites

    International Nuclear Information System (INIS)

    LaPointe, P.R.; Cladouhos, T.T.

    1999-02-01

    The report outlines a possible approach to estimating rock movements due to earthquakes that may diminish canister safety. The method is based upon an approach developed for studying similar problems in Sweden at three generic Swedish sites. In the first part of the report, the problem of rock movements during earthquakes is described. The second section of the report outlines the approach used to estimate rock movements in Sweden, and discusses how the approach could be adapted to evaluating movements at Finnish repositories. This section also discusses data needs and potential problems in applying the approach in Finland. The next section presents some simple earthquake calculations for the four Finnish sites. These simulations use the discrete fracture network model geometric parameters developed by VTT (Technical Research Centre of Finland) for the use in hydrological calculations. The calculations are not meant for performance assessment purposes for reasons discussed in the report, but are designed to show (1) the importance of fracture size, intensity and orientation on induced displacement magnitudes; (2) the need for additional studies with regards to fracture size and intensity; and (3) the need to resolve issues regarding the role of post-glacial faulting, glacial rebound and tectonic processes in present-day and future earthquakes. (orig.)

  2. Quantifying capability of a local seismic network in terms of locations and focal mechanism solutions of weak earthquakes

    Science.gov (United States)

    Fojtíková, Lucia; Kristeková, Miriam; Málek, Jiří; Sokos, Efthimios; Csicsay, Kristián; Zahradník, Jiří

    2016-01-01

    Extension of permanent seismic networks is usually governed by a number of technical, economic, logistic, and other factors. A planned upgrade of a network can be justified by a theoretical assessment of the network's capability in terms of reliable estimation of key earthquake parameters (e.g., location and focal mechanisms). This can be useful not only for scientific purposes but also as concrete evidence when seeking the funding needed for the upgrade and operation of the network. Moreover, the theoretical assessment can also identify configurations where no improvement can be achieved with additional stations, establishing a trade-off between improvement and additional expense. This paper suggests a combination of suitable methods and applies them to the Little Carpathians local seismic network (Slovakia, Central Europe), which monitors an epicentral zone important from the standpoint of seismic hazard. Three configurations of the network are considered: the 13 stations existing before 2011, 3 stations added in 2011, and 7 newly planned stations. Theoretical errors of the relative location are estimated by a new method developed specifically in this paper. The resolvability of focal mechanisms determined by waveform inversion is analyzed by a recent approach based on 6D moment-tensor error ellipsoids. We consider potential seismic events situated anywhere in the studied region, thus enabling "mapping" of the expected errors. The results clearly demonstrate that the network extension remarkably decreases the errors, mainly in the planned 23-station configuration. The three-station extension of the network already made in 2011 allowed for a few real-data examples. Free software made available by the authors enables similar applications in any other existing or planned network.
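
    The idea of "mapping" expected location quality over a whole region can be illustrated with a much simpler, classical proxy than the relative-location error method developed in the paper: the azimuthal gap of the station geometry as seen from each candidate epicenter. The sketch below (with made-up station coordinates) is only this standard rule of thumb, not the authors' method.

```python
import math

def azimuthal_gap(event, stations):
    """Largest azimuthal gap (degrees) of stations seen from an event.

    A classic proxy for location quality: the larger the gap, the more
    poorly constrained the epicenter tends to be.
    """
    az = sorted(math.degrees(math.atan2(sx - event[0], sy - event[1])) % 360
                for sx, sy in stations)
    # circular differences between consecutive azimuths
    gaps = [(az[(i + 1) % len(az)] - az[i]) % 360 for i in range(len(az))]
    return max(gaps)

# An event inside a 4-station square is well surrounded (gap ~ 90 degrees);
# stations clustered on one side leave a gap larger than 180 degrees.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
one_sided = [(1, 1), (2, 0), (1, -1)]
print(round(azimuthal_gap((0, 0), square), 1),
      round(azimuthal_gap((0, 0), one_sided), 1))
```

Evaluating such a measure on a grid of hypothetical epicenters gives a crude "error map" of the kind the paper computes rigorously for three network configurations.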

  3. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    Science.gov (United States)

    Plotkin, V. V.

    A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected an ionospheric perturbation before this earthquake using GPS data received from the CORS network. Systematic decreases of ionospheric total electron content (TEC) during the two days before the earthquake onset were observed at a set of stations near the earthquake location, and probably in a region extending about 1000 km from the epicenter. This result is consistent with the findings of other investigators who studied these phenomena with several observational techniques. However, it is possible that such TEC changes are simultaneously accompanied by changes driven by solar wind parameters and the Kp index.
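
    The reported precursor is a systematic decrease of TEC relative to background. At its simplest, detecting such a decrease amounts to flagging values that fall well below a trailing-window mean. The sketch below uses synthetic numbers and an arbitrary threshold; it is only an illustration of the idea, not the authors' processing chain.

```python
import statistics

def tec_anomaly(series, window=5, k=1.5):
    """Flag epochs where TEC drops more than k sigma below a trailing mean.

    series : TEC values (TECU), one per epoch
    window : number of preceding epochs used as the background
    Returns the indices of anomalously low values.
    """
    flagged = []
    for i in range(window, len(series)):
        background = series[i - window:i]
        mu = statistics.mean(background)
        sigma = statistics.stdev(background)
        if sigma > 0 and series[i] < mu - k * sigma:
            flagged.append(i)
    return flagged

# Synthetic example: a stable background, then a systematic decrease
tec = [20.1, 19.9, 20.0, 20.2, 19.8, 20.0, 16.0, 15.5]
print(tec_anomaly(tec))   # → [6, 7]
```

As the abstract notes, a real analysis must also rule out geomagnetic drivers (solar wind, Kp index) before attributing such drops to the earthquake.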

  4. GPS detection of ionospheric perturbation before the 13 February 2001, El Salvador earthquake

    Directory of Open Access Journals (Sweden)

    V. V. Plotkin

    2003-01-01

    Full Text Available A large earthquake of M6.6 occurred on 13 February 2001 at 14:22:05 UT in El Salvador. We detected an ionospheric perturbation before this earthquake using GPS data received from the CORS network. Systematic decreases of ionospheric total electron content (TEC) during the two days before the earthquake onset were observed at a set of stations near the earthquake location, and probably in a region extending about 1000 km from the epicenter. This result is consistent with the findings of other investigators who studied these phenomena with several observational techniques. However, it is possible that such TEC changes are simultaneously accompanied by changes driven by solar wind parameters and the Kp index.

  5. COEL: A Cloud-based Reaction Network Simulator

    Directory of Open Access Journals (Sweden)

    Peter eBanda

    2016-04-01

    Full Text Available Chemical Reaction Networks (CRNs) are a formalism to describe the macroscopic behavior of chemical systems. We introduce COEL, a web- and cloud-based CRN simulation framework that does not require a local installation, runs simulations on a large computational grid, provides reliable database storage, and offers a visually pleasing and intuitive user interface. We present an overview of the underlying software, the technologies, and the main architectural approaches employed. Some of COEL's key features include ODE-based simulations of CRNs and multicompartment reaction networks with rich interaction options, a built-in plotting engine, automatic DNA-strand-displacement transformation and visualization, SBML/Octave/Matlab export, and a built-in genetic-algorithm-based optimization toolbox for rate constants. COEL is an open-source project hosted on GitHub (http://dx.doi.org/10.5281/zenodo.46544), which allows interested research groups to deploy it on their own server. Regular users can simply use the web instance at no cost at http://coel-sim.org. The framework is ideally suited for collaborative use in both research and education.
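
    At its core, an ODE-based CRN simulation integrates mass-action rate equations. A minimal sketch of that idea for the single reaction A + B → C, using forward Euler and a made-up rate constant (COEL itself is a full framework with proper solvers, plotting, and export):

```python
def simulate_crn(a0, b0, c0, k, dt=0.001, steps=5000):
    """Integrate mass-action kinetics for A + B -> C with forward Euler."""
    a, b, c = a0, b0, c0
    for _ in range(steps):
        rate = k * a * b          # mass-action flux of the single reaction
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return a, b, c

a, b, c = simulate_crn(1.0, 1.0, 0.0, k=2.0)
# A and B are consumed symmetrically, and mass is conserved: a + c stays 1.0
```

Each additional reaction in a network simply contributes one more flux term to the derivatives of the species it touches, which is exactly what a general CRN integrator assembles automatically.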

  6. First results from the new K2-network in Romania: Source- and site-parameters of the April 28, 1999 intermediate depth Vrancea earthquake

    International Nuclear Information System (INIS)

    Bonjer, K.-P.; Oncescu, L.; Rizescu, M.; Enescu, D.; Radulian, M.; Ionescu, C.; Moldoveanu, T.; Lungu, D.; Stempniewski, L.

    2002-01-01

    In the past five years the Collaborative Research Center 461 'Strong Earthquakes' of Karlsruhe University and the National Institute for Earth Physics, Bucharest-Magurele, have jointly installed a network of 36 free-field stations in Romania. The stations are equipped with Kinemetrics K2 dataloggers, three-component episensors, and a GPS timing system. Most stations have velocity transducers in addition. The network is centered on the Vrancea focal zone and covers an area with a diameter of up to 500 km. Nine stations of the net are deployed in the Romanian capital Bucharest in nearly free-field conditions. Furthermore, at the Building Research Institute (INCERC) a test building and a borehole are instrumented with K2 systems. So far the top floor of a typical 10-story building has been instrumented as well. The Vrancea earthquake of April 28, 1999 was recorded by 28 stations of the new strong-motion network. Although the moment magnitude was Mw = 5.3, no damage occurred, due to the great focal depth of 159 km. The fault-plane solution shows a nearly pure thrust mechanism (strike = 171°, dip = 53°, rake = 106°), which is typical for most of the Vrancea intermediate-depth earthquakes. The strike of the B-axis is within the range of those of the background seismicity but rotated counterclockwise by about 50° in comparison to those of the big events. Due to the relatively dense station distribution, the lateral variation of the pattern of the peak ground motion could be well constrained. The PGA is very low (less than 5 cm/s²) in Transylvania and in the mountainous areas of the Carpathians, as well as in the eastern part of the Dobrogea/coastal range of the Black Sea, whereas values of around 40 cm/s² are found in a strip 80 km wide, located in the outer part of the Carpathian arc and extending from Bucharest about 200 km towards the NE. Details of the distribution in Bucharest will be discussed. (authors)

  7. The Challenge of Centennial Earthquakes to Improve Modern Earthquake Engineering

    International Nuclear Information System (INIS)

    Saragoni, G. Rodolfo

    2008-01-01

    The recent commemoration of the centennial of the 1906 San Francisco and Valparaiso earthquakes has given the opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus Messina-Reggio Calabria 1908, had a strong impact on the birth and development of earthquake engineering. The study of the seismic performance of some still-existing buildings that survived centennial earthquakes represents a challenge to better understand the limitations of our current earthquake design methods. Of the three centennial earthquakes considered, only the Valparaiso 1906 earthquake has been repeated, as the Central Chile 1985 Ms = 7.8 earthquake. In this paper a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is made in the neighborhood of Valparaiso harbor. This study identified the only three centennial three-story buildings that survived both earthquakes almost undamaged. Since accelerograms of the 1985 earthquake were recorded both at El Almendral soil conditions and on rock, the vulnerability analysis of these buildings is done considering instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. It is therefore recommended to use more suitable instrumental parameters in the future, such as the destructiveness potential factor, to describe earthquake demand.

  8. Simulated earthquake testing of naturally aged C and D LCU-13 station battery cells

    International Nuclear Information System (INIS)

    Tulk, J.D.; Black, D.A.; Janis, W.J.; Royce, C.J.

    1985-03-01

    A sample of 10-year-old lead-acid storage batteries from the North Anna Nuclear Power Station (Virginia Electric and Power Company) was tested on a shaker table. Seven cells were subjected to simulated earthquakes with a ZPA of approximately 1.5 g. All seven delivered uninterrupted power during the shaker tests and were able to pass a post-seismic capacity test. Two cells were shaken at higher intensities (ZPA of approximately 2 g). These cells provided uninterrupted power during the shaker tests, but had post-seismic capacities below the required level for Class 1E battery cells. After the tests, several cells were disassembled and examined. Internal components were in good condition, with limited oxidation and plate cracking.

  9. Simulated epidemics in an empirical spatiotemporal network of 50,185 sexual contacts.

    Directory of Open Access Journals (Sweden)

    Luis E C Rocha

    2011-03-01

    Full Text Available Sexual contact patterns, in both their temporal and network structure, can influence the spread of sexually transmitted infections (STIs). Most previous literature has focused on effects of network topology; few studies have addressed the role of temporal structure. We simulate disease spread using SI and SIR models on an empirical temporal network of sexual contacts in high-end prostitution. We compare these results with several other approaches, including randomization of the data, classic mean-field approaches, and static network simulations. We observe that epidemic dynamics in this contact structure have well-defined, rather high epidemic thresholds. Temporal effects create a broad distribution of outbreak sizes, even if the per-contact transmission probability is taken to its hypothetical maximum of 100%. In general, we conclude that the temporal correlations of our network accelerate outbreaks, especially in the early phase of the epidemics, while the network topology (apart from the contact-rate distribution) slows them down. We find that the temporal correlations of sexual contacts can significantly change simulated outbreaks in a large empirical sexual network. Thus, temporal structures are needed alongside network topology to fully understand the spread of STIs. On a side note, our simulations further suggest that the specific type of commercial sex we investigate is not a reservoir of major importance for HIV.
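
    Why timing matters can be seen in a toy SI simulation over time-stamped contacts, where infection can only traverse contacts in chronological order. This is an illustration of the mechanism with invented contacts, not the paper's simulation code.

```python
import random

def si_outbreak(contacts, seed, p=1.0):
    """Spread an SI infection over time-stamped (t, u, v) contacts.

    Transmission can only use contacts occurring after one endpoint is
    infected, so the time ordering constrains the possible paths.
    """
    infected = {seed}
    for t, u, v in sorted(contacts):          # chronological order matters
        if (u in infected) != (v in infected) and random.random() <= p:
            infected |= {u, v}
    return infected

# "c" contacts "d" *before* the chain a -> b -> c reaches c, so "d" stays
# susceptible even though a static-network path a-b-c-d exists.
contacts = [(1, "a", "b"), (2, "b", "c"), (0, "c", "d")]
print(sorted(si_outbreak(contacts, "a")))     # → ['a', 'b', 'c']
```

Randomizing the timestamps (one of the null models the authors compare against) would sometimes reorder the contacts so that the full chain becomes reachable, which is exactly how temporal correlations change outbreak sizes.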

  10. Fluid flows due to earthquakes with reference to Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Davies, J.B.

    1993-01-01

    Yucca Mountain geohydrology is dominated by a deep water table in volcanic tuff beds which are cut by numerous faults. Certain zones in these tuffs and most of the fault apertures are filled with a fine-grained calcitic cement. Earthquakes have occurred in this region, the most recent being of magnitude 5.6 and at a distance of about 20 km. Earthquakes in the western U.S.A. have been observed to cause fluid flows through and out of the crust of the Earth. These flows are concentrated along the faults, with normal faulting producing the largest flows. An earthquake produces rapid pressure changes at and below the ground surface, thereby forcing flows of gas, water, slurries, and dissolved salts. In order to examine the properties of flows produced by earthquakes, we simulate the phenomena using computer-based modeling. We investigate the effects of faults and high-permeability zones on the pattern of flows induced by the earthquake. We demonstrate that faults act as conduits to the surface and that the higher the permeability of a zone, the more the flows will concentrate there. Numerical estimates of flow rates from these simulations compare favorably with data from observed flows due to earthquakes. Simple volumetric arguments demonstrate the ease with which fluids from the deep water table can reach the surface along fault conduits.
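
    The conclusion that flow concentrates in high-permeability fault zones follows directly from Darcy's law: for the same pressure pulse and geometry, flux scales linearly with permeability. A back-of-the-envelope sketch with illustrative (not site-specific) values:

```python
def darcy_flux(k, dp, mu, length):
    """Darcy flux q = k * dp / (mu * L) through a unit cross-section.

    k : permeability (m^2), dp : pressure difference (Pa),
    mu : fluid viscosity (Pa*s), length : flow path length (m).
    """
    return k * dp / (mu * length)

# Two parallel paths to the surface under the same earthquake pressure
# pulse: a fault conduit versus intact tuff matrix (illustrative numbers).
dp, mu, L = 1.0e6, 1.0e-3, 1000.0          # Pa, Pa*s, m
q_fault = darcy_flux(1.0e-12, dp, mu, L)   # fractured fault zone
q_matrix = darcy_flux(1.0e-16, dp, mu, L)  # intact matrix
share = q_fault / (q_fault + q_matrix)
print(round(share, 4))                     # flow concentrates in the fault
```

With a four-order-of-magnitude permeability contrast, essentially all of the induced flow takes the fault conduit, which is the qualitative behavior the simulations in the abstract demonstrate.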

  11. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered, not only to explain real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be treated as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture process of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interaction at S-wave velocity is analogous to the kinetic theory of gases: thermal
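
    The role of a stochastic perturbation in an otherwise deterministic slip model can be illustrated with the simplest possible analogue: a single stick-slip block whose steady loading stress receives Gaussian noise. This toy model is only meant to show how noise breaks perfect periodicity; it is not the BIEM formulation used in the study.

```python
import random

def stick_slip(steps=1000, load_rate=0.25, strength=1.0, noise=0.0, seed=1):
    """Single-block stick-slip analogue.

    Stress loads steadily, optionally perturbed by Gaussian noise; when
    stress reaches the strength, the block slips and stress drops to zero.
    Returns the number of slip events.
    """
    rng = random.Random(seed)
    stress, events = 0.0, 0
    for _ in range(steps):
        stress += load_rate + noise * rng.gauss(0.0, 1.0)
        if stress >= strength:
            stress, events = 0.0, events + 1
    return events

print(stick_slip())              # deterministic limit: perfectly periodic
print(stick_slip(noise=0.05))    # perturbed: recurrence becomes irregular
```

In the deterministic limit the event count is exactly steps × load_rate / strength; adding noise scatters the recurrence intervals, in loose analogy to how the Gaussian deviations in the kernel, forcing, and friction enrich the rupture behavior in the abstract.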

  12. Future planning: default network activity couples with frontoparietal control network and reward-processing regions during process and outcome simulations.

    Science.gov (United States)

    Gerlach, Kathy D; Spreng, R Nathan; Madore, Kevin P; Schacter, Daniel L

    2014-12-01

    We spend much of our daily lives imagining how we can reach future goals and what will happen when we attain them. Despite the prevalence of such goal-directed simulations, neuroimaging studies on planning have mainly focused on executive processes in the frontal lobe. This experiment examined the neural basis of process simulations, during which participants imagined themselves going through steps toward attaining a goal, and outcome simulations, during which participants imagined events they associated with achieving a goal. In the scanner, participants engaged in these simulation tasks and an odd/even control task. We hypothesized that process simulations would recruit default and frontoparietal control network regions, and that outcome simulations, which allow us to anticipate the affective consequences of achieving goals, would recruit default and reward-processing regions. Our analysis of brain activity that covaried with process and outcome simulations confirmed these hypotheses. A functional connectivity analysis with posterior cingulate, dorsolateral prefrontal cortex and anterior inferior parietal lobule seeds showed that their activity was correlated during process simulations and associated with a distributed network of default and frontoparietal control network regions. During outcome simulations, medial prefrontal cortex and amygdala seeds covaried together and formed a functional network with default and reward-processing regions. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
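
    Seed-based functional connectivity of the kind described here reduces, at its simplest, to correlating a seed region's time course with every other region's and keeping those above a threshold. The sketch below uses toy time series and a hypothetical region name, not fMRI data or the authors' pipeline.

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Seed-based connectivity: correlate a seed's time course with every other
# region and keep those whose correlation exceeds a threshold (toy data).
seed = [1.0, 2.0, 3.0, 2.0, 1.0]
regions = {"dlPFC": [1.1, 2.2, 2.9, 2.1, 0.9],
           "control": [3.0, 1.0, 2.0, 1.0, 3.0]}
coupled = [name for name, ts in regions.items() if pearson(seed, ts) > 0.8]
print(coupled)   # → ['dlPFC']
```

A real analysis adds preprocessing, nuisance regression, and statistical correction, but the core quantity relating a posterior cingulate seed to frontoparietal or reward regions is this correlation.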

  13. The 2011 Tohoku-oki Earthquake related to a large velocity gradient within the Pacific plate

    Science.gov (United States)

    Matsubara, Makoto; Obara, Kazushige

    2015-04-01

    We conduct seismic tomography using arrival time data picked by the high-sensitivity seismograph network (Hi-net) operated by the National Research Institute for Earth Science and Disaster Prevention (NIED). We used earthquakes off the coast outside the seismic network around the source region of the 2011 Tohoku-oki Earthquake, with centroid depths estimated from moment tensor inversion by the NIED F-net (broadband seismograph network), as well as earthquakes within the seismic network determined by Hi-net. The target region, 20-48N and 120-148E, covers the Japanese Islands from Hokkaido to Okinawa. A total of 4,622,346 manually picked P-wave and 3,062,846 S-wave arrival times for 100,733 earthquakes recorded at 1,212 stations from October 2000 to August 2009 are available for use in the tomographic method. In the final iteration, we estimate the P-wave slowness at 458,234 nodes and the S-wave slowness at 347,037 nodes. The inversion reduces the root mean square of the P-wave traveltime residual from 0.455 s to 0.187 s and that of the S-wave data from 0.692 s to 0.228 s after eight iterations (Matsubara and Obara, 2011). Centroid depths are determined using a Green's function approach (Okada et al., 2004), as in the NIED F-net. For events distant from the seismic network, the centroid depth is more reliable than that determined by NIED Hi-net, since there are no stations above the hypocenter. We determine the upper boundary of the Pacific plate based on the velocity structure and the earthquake hypocentral distribution. The upper boundary of the low-velocity (low-V) oceanic crust corresponds to the plate boundary where thrust earthquakes are expected to occur. Where we do not observe low-V oceanic crust, we determine the upper boundary of the upper layer of the double seismic zone within the high-V Pacific plate. We assume a depth of 7 km at the Japan Trench.
We can investigate the velocity structure within the Pacific plate such as 10 km beneath the plate boundary since the

  14. Earthquakes of Garhwal Himalaya region of NW Himalaya, India: A study of relocated earthquakes and their seismogenic source and stress

    Science.gov (United States)

    R, A. P.; Paul, A.; Singh, S.

    2017-12-01

    Since the continent-continent collision began ca. 55 Ma, the Himalaya has accommodated about 2000 km of convergence along its arc. Strain energy is being accumulated at a rate of 37-44 mm/yr and is released at times as earthquakes. The Garhwal Himalaya is located on the western side of a seismic gap, where a great earthquake has been overdue for at least 200 years. This seismic gap (Central Seismic Gap: CSG), with a 52% probability of a future great earthquake, is located between the rupture zones of two significant/great earthquakes, viz. the 1905 Kangra earthquake of M 7.8 and the 1934 Bihar-Nepal earthquake of M 8.0; the most recent one, the 2015 Gorkha earthquake of M 7.8, lies on the eastern side of this seismic gap (CSG). The Garhwal Himalaya is one of the ideal locations in the Himalaya where all the major Himalayan structures and the Himalayan Seismicity Belt (HSB) can be described and studied. In the present study, we present a spatio-temporal analysis of relocated local micro- to moderate earthquakes recorded by a seismicity monitoring network that has been operational since 2007. The earthquake locations are relocated using the HypoDD (double-difference hypocenter method for earthquake relocation) program. The dataset from July 2007 to September 2015 has been used in this study to estimate spatio-temporal relationships, moment tensor (MT) solutions for earthquakes of M > 3.0, stress tensors, and their interactions. We have also used composite focal mechanism solutions for small earthquakes. The majority of the MT solutions show a thrust-type mechanism and are located near the mid-crustal ramp (MCR) structure of the detachment surface at 8-15 km depth beneath the outer Lesser Himalaya and Higher Himalaya regions. The prevailing stress has been identified as compressional towards NNE-SSW, which is the direction of relative plate motion between the Indian and Eurasian continental plates.
The low friction coefficient estimated along with the stress inversions

  15. Demonstration of the Cascadia G‐FAST geodetic earthquake early warning system for the Nisqually, Washington, earthquake

    Science.gov (United States)

    Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan

    2016-01-01

    A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two‐stage approach to EEW: (1) detection and initial characterization using strong‐motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large‐magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G‐FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G‐FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor‐driven finite‐fault‐slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G‐FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey’s ShakeAlert EEW system.
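
    G-FAST's peak-ground-displacement module rests on an empirical scaling law of the general form log10(PGD) = A + B·Mw + C·Mw·log10(R), which can be inverted for magnitude once PGD and distance are observed. The sketch below uses placeholder coefficients, not the calibrated values used by G-FAST:

```python
import math

# Illustrative PGD scaling law: log10(PGD) = A + B*Mw + C*Mw*log10(R).
# These coefficients are placeholders for demonstration only.
A, B, C = -4.434, 1.047, -0.138

def predicted_pgd(mw, r_km):
    """Peak ground displacement (m) predicted for magnitude mw at r_km."""
    return 10 ** (A + B * mw + C * mw * math.log10(r_km))

def invert_magnitude(pgd_m, r_km):
    """Invert the scaling law for Mw given an observed PGD and distance."""
    return (math.log10(pgd_m) - A) / (B + C * math.log10(r_km))

# Round trip: a synthetic observation recovers the generating magnitude.
mw = 6.8
pgd = predicted_pgd(mw, 100.0)
print(round(invert_magnitude(pgd, 100.0), 2))
```

Because displacement, unlike acceleration, does not saturate for great earthquakes, this inversion is what lets the geodetic stage provide robust magnitudes where seismic scaling underestimates them.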

  16. Impact of the 2011 Tohoku-oki earthquake on the Tokyo Metropolitan area observed by the Metropolitan Seismic Observation network (MeSO-net)

    Science.gov (United States)

    Hirata, N.; Hayashi, H.; Nakagawa, S.; Sakai, S.; Honda, R.; Kasahara, K.; Obara, K.; Aketagawa, T.; Kimura, H.; Sato, H.; Okaya, D. A.

    2011-12-01

    The March 11, 2011 Tohoku-oki earthquake had a great impact on the Tokyo metropolitan area, in both seismological terms and seismic risk management, even though Tokyo is located 340 km from the epicenter. The event generated very strong ground motion in the metropolitan area and resulted in severe liquefaction in many places of the Kanto district. National and local governments have started to discuss countermeasures for possible seismic risks in the area, taking into account what they learned from the Tohoku-oki event, which was much larger than any ever experienced in Japan. The risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because the PSP caused past mega-thrust earthquakes, such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9). An M7 or greater (M7+) earthquake in this area at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that an M7+ earthquake would cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) in economic loss. In order to mitigate the disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area was launched in collaboration with scientists, engineers, and social scientists at institutions nationwide. We will discuss the main results obtained in the respective fields, which have been integrated to improve information on the strategy assessment for seismic risk mitigation in the Tokyo metropolitan area; the project has been much improved after the Tohoku event. In order to image the seismic structure beneath the metropolitan Tokyo area we have developed the Metropolitan Seismic Observation network (MeSO-net; Hirata et al., 2009). We have installed 296 seismic stations every few km (Kasahara et al., 2011). We conducted seismic

  17. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    Science.gov (United States)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and includes the possibility to solve both the direct and the inverse problem. It becomes possible to apply advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve this problem and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to

  18. Simulating individual-based models of epidemics in hierarchical networks

    NARCIS (Netherlands)

    Quax, R.; Bader, D.A.; Sloot, P.M.A.

    2009-01-01

    Current mathematical modeling methods for the spreading of infectious diseases are too simplified and do not scale well. We present the Simulator of Epidemic Evolution in Complex Networks (SEECN), an efficient simulator of detailed individual-based models by parameterizing separate dynamics

  19. Magnitude Estimation for Large Earthquakes from Borehole Recordings

    Science.gov (United States)

    Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.

    2012-12-01

    We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems based on strong ground motion prediction equations (GMPEs) in Japan. This method incorporates borehole strong-motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large-magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV) we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination. Magnitudes predicted from PGA values (Mpga) and from PGV values (Mpgv) were defined. Mpga and Mpgv correlate strongly with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga when compared with the estimated magnitudes, and provides a more accurate early assessment of earthquake magnitude. We test this new method by estimating the magnitude of the 2011 Tohoku earthquake and present the results of this estimation. PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that the incorporation of borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation and improves the performance of earthquake and tsunami early warning systems. Moment magnitude versus predicted magnitude (Mpga and Mpgv).
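
    The Mpga/Mpgv idea is to invert a GMPE station by station and average the resulting single-station magnitudes. A sketch with a toy GMPE of the form log10(PGV) = a + b·M − c·log10(R); the coefficients are illustrative, not those derived in the study:

```python
import math
import statistics

# Toy GMPE: log10(PGV) = a + b*M - c*log10(R). Coefficients are
# placeholders for illustration, not the Japan-calibrated values.
a, b, c = -2.0, 0.7, 1.5

def station_magnitude(pgv, r_km):
    """Invert the toy GMPE for magnitude from one station's PGV."""
    return (math.log10(pgv) + c * math.log10(r_km) - a) / b

def event_magnitude(records):
    """Average single-station estimates over (pgv, distance_km) records."""
    return statistics.mean(station_magnitude(p, r) for p, r in records)

# Synthetic records generated from the same law for an M 7.0 event,
# so the inversion recovers the magnitude exactly.
true_m = 7.0
records = [(10 ** (a + b * true_m - c * math.log10(r)), r)
           for r in (50.0, 100.0, 200.0)]
print(round(event_magnitude(records), 2))
```

With real data each station deviates from the GMPE, and the scatter of the single-station estimates is exactly the standard deviation the abstract compares between Mpga and Mpgv.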

  20. A new strategy for earthquake focal mechanisms using waveform-correlation-derived relative polarities and cluster analysis: Application to the 2014 Long Valley Caldera earthquake swarm

    Science.gov (United States)

    Shelly, David R.; Hardebeck, Jeanne L.; Ellsworth, William L.; Hill, David P.

    2016-01-01

    In microseismicity analyses, reliable focal mechanisms can typically be obtained for only a small subset of located events. We address this limitation here, presenting a framework for determining robust focal mechanisms for entire populations of very small events. To achieve this, we resolve relative P and S wave polarities between pairs of waveforms by using their signed correlation coefficients—a by-product of previously performed precise earthquake relocation. We then use cluster analysis to group events with similar patterns of polarities across the network. Finally, we apply a standard mechanism inversion to the grouped data, using either catalog or correlation-derived P wave polarity data sets. This approach has great potential for enhancing analyses of spatially concentrated microseismicity such as earthquake swarms, mainshock-aftershock sequences, and industrial reservoir stimulation or injection-induced seismic sequences. To demonstrate its utility, we apply this technique to the 2014 Long Valley Caldera earthquake swarm. In our analysis, 85% of the events (7212 out of 8494 located by Shelly et al. [2016]) fall within five well-constrained mechanism clusters, more than 12 times the number with network-determined mechanisms. Of the earthquakes we characterize, 3023 (42%) have magnitudes smaller than 0.0. We find that mechanism variations are strongly associated with corresponding hypocentral structure, yet mechanism heterogeneity also occurs where it cannot be resolved by hypocentral patterns, often confined to small-magnitude events. Small (5–20°) rotations between mechanism orientations and earthquake location trends persist when we apply 3-D velocity models and might reflect a geometry of en echelon, interlinked shear, and dilational faulting.
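
    The grouping step can be pictured as clustering events by the agreement of their station polarity vectors. The greedy toy version below (with invented ±1 polarity patterns, 0 meaning unobserved) only illustrates the notion of grouping by polarity agreement; the paper's combination of correlation-derived relative polarities and cluster analysis is considerably more elaborate.

```python
def cluster_by_polarity(patterns, min_match=0.9):
    """Greedily group events whose station polarity patterns agree.

    patterns : per-event lists of +1/-1 polarities (0 = unobserved).
    Two events join a cluster if they agree at >= min_match of the
    stations observed by both.
    """
    clusters = []
    for pat in patterns:
        for cl in clusters:
            ref = cl[0]
            common = [(p, q) for p, q in zip(pat, ref) if p and q]
            if common and sum(p == q for p, q in common) / len(common) >= min_match:
                cl.append(pat)
                break
        else:
            clusters.append([pat])
    return clusters

# Two compatible patterns plus one fully flipped pattern -> two groups.
events = [[1, 1, -1, -1], [1, 1, -1, 0], [-1, -1, 1, 1]]
print(len(cluster_by_polarity(events)))   # → 2
```

Each resulting group pools many small events, which is what makes a stable mechanism inversion possible for magnitudes far below the single-event threshold.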

  1. CoSimulating Communication Networks and Electrical System for Performance Evaluation in Smart Grid

    Directory of Open Access Journals (Sweden)

    Hwantae Kim

    2018-01-01

    In the smart grid research domain, simulation is the first choice, since the analytic complexity is too high and constructing a testbed is very expensive. However, because the communication infrastructure and the power grid are tightly coupled in the smart grid, a well-defined combination of simulation tools for the two systems is required. In this paper, we therefore propose a cosimulation framework called OOCoSim, which consists of OPNET (a network simulation tool) and OpenDSS (a power-system simulation tool). With this framework, an organic and dynamic cosimulation can be realized, since both simulators operate on the same computing platform and provide external interfaces through which the simulation can be managed dynamically. We present the OOCoSim design principles, including its synchronization scheme, and a detailed description of its implementation. To demonstrate the effectiveness of OOCoSim, we define a smart grid application model and conduct a simulation study of the impact of the defined application and the underlying network system on the distribution system. The simulation results show that OOCoSim can successfully simulate integrated power and network scenarios and accurately capture the effects of networked control in the smart grid.
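A minimal sketch of the kind of lock-step synchronization scheme such a cosimulation needs: both simulators advance to the same synchronization point and exchange outputs before either proceeds. The simulator stubs and interface names here are illustrative; they are not the OPNET or OpenDSS APIs.

```python
class StubSimulator:
    """Stand-in for a wrapped simulation tool with an external interface."""

    def __init__(self, name):
        self.name = name
        self.time = 0.0
        self.log = []

    def advance_to(self, t, inputs):
        # A real wrapper would run the underlying tool up to time t here.
        self.log.append((t, inputs))
        self.time = t
        return {"from": self.name, "t": t}

def cosimulate(net, grid, step, t_end):
    """Advance both simulators in lock step, exchanging outputs at every
    synchronization point so neither tool runs ahead of the other."""
    t, net_out, grid_out = 0.0, {}, {}
    while t < t_end:
        t += step
        net_out = net.advance_to(t, grid_out)   # network sees latest grid state
        grid_out = grid.advance_to(t, net_out)  # grid sees latest network state
    return net, grid

net, grid = cosimulate(StubSimulator("net"), StubSimulator("grid"), 0.5, 2.0)
```

The fixed step trades accuracy for simplicity; event-driven schemes instead synchronize only when one side produces an output the other must see.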

  2. Analiza karakteristika MPLS simulatora / Characteristics analyse of the MPLS network simulator

    Directory of Open Access Journals (Sweden)

    Boban Pavlović

    2005-05-01

    This paper presents the architecture and analyzes the characteristics of the MPLS Network Simulator (MNS, Multiprotocol Label Switching Network Simulator), which is used in the design of packet networks based on the Internet Protocol (IP) that must support real-time and multimedia traffic. Using the simulator, procedures are presented and described for simulating traffic with different QoS requirements, as well as for simulating the preemption of resources by higher-priority traffic.

  3. NRIAG's Effort to Mitigate Earthquake Disasters in Egypt Using GPS and Seismic Data

    Science.gov (United States)

    Mahmoud, Salah

    It has been estimated that, in historical times, more than 50 million people have lost their lives in earthquakes, either during the ground shaking itself and its effects, such as soil amplification and/or liquefaction, landslides and tsunamis, or in immediate aftereffects such as fires. The distribution of population generally takes no account of earthquake risk, at least on a large scale. An earthquake may be large but not destructive; on the other hand, an earthquake may be destructive but not large. This absence of correlation is due to the great number of other factors entering into consideration: first of all, the location of the earthquake in relation to populated areas, but also soil conditions and building construction. Soil liquefaction has been identified as the underlying phenomenon for many ground failures, settlements and lateral spreads, which are a major cause of damage to soil structures and building foundations in many events. Egypt has suffered numerous destructive earthquakes, such as the Kalabsha earthquake (1981, Mag 5.4) near Aswan city and the High Dam, the Dahshour earthquake (1992, Mag 5.9) near Cairo, and the Aqaba earthquake (1995, Mag 7.2). As the category of earthquake damage includes all phenomena related to direct and indirect damage, the Egyptian authorities have made a great effort to mitigate earthquake disasters. The seismicity, especially in the zones of high activity, is investigated in detail in order to delineate the active source zones, not only with the Egyptian National Seismic Network (ENSN) but also with the local seismic networks at Aswan, Hurghada, Aqaba, Abu Dabbab and Dabbaa. On the other hand, studies of soil conditions, soil amplification, soil-structure interaction, liquefaction and seismic hazard are carried out, in particular for the urbanized areas and the regions near the source zones. All these parameters are integrated to produce the Egyptian building code, which governs the construction of buildings that resist damage and consequently helps mitigate earthquake disasters.

  4. The use of waveform shapes to automatically determine earthquake focal depth

    Science.gov (United States)

    Sipkin, S.A.

    2000-01-01

    Earthquake focal depth is an important parameter for rapidly determining probable damage caused by a large earthquake. In addition, it is significant both for discriminating between natural events and explosions and for discriminating between tsunamigenic and nontsunamigenic earthquakes. For the purpose of notifying emergency management and disaster relief organizations as well as issuing tsunami warnings, potential time delays in determining source parameters are particularly detrimental. We present a method for determining earthquake focal depth that is well suited for implementation in an automated system that utilizes the wealth of broadband teleseismic data that is now available in real time from the global seismograph networks. This method uses waveform shapes to determine focal depth and is demonstrated to be valid for events with magnitudes as low as approximately 5.5.

  5. Twitter Seismology: Earthquake Monitoring and Response in a Social World

    Science.gov (United States)

    Bowden, D. C.; Earle, P. S.; Guy, M.; Smoczyk, G.

    2011-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment USGS earthquake response products and the delivery of hazard information. The potential uses of Twitter for earthquake response include broadcasting earthquake alerts, rapidly detecting widely felt events, qualitatively assessing earthquake damage effects, communicating with the public, and participating in post-event collaboration. Several seismic networks and agencies are currently distributing Twitter earthquake alerts, including the European-Mediterranean Seismological Centre (@LastQuake), Natural Resources Canada (@CANADAquakes), and the Indonesian meteorological agency (@infogempabmg); the USGS will soon distribute alerts via the @USGSted and @USGSbigquakes Twitter accounts. Beyond broadcasting alerts, the USGS is investigating how to use tweets that originate near the epicenter to detect and characterize shaking events. This is possible because people begin tweeting immediately after feeling an earthquake, and their short narratives and exclamations are available for analysis within tens of seconds of the origin time. Using five months of tweets that contain the word "earthquake" and its equivalent in other languages, we generate a tweet-frequency time series. The time series clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a simple Short-Term-Average / Long-Term-Average algorithm similar to that commonly used to detect seismic phases. As with most auto-detection algorithms, the parameters can be tuned to catch more or fewer events at the cost of more or fewer false triggers. When tuned to a moderate sensitivity, the detector found 48 globally distributed, confirmed seismic events with only 2 false triggers. A space-shuttle landing and "The Great California ShakeOut" caused the false triggers. This number of
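The detection step can be sketched as a classic STA/LTA ratio applied to a tweet-count time series; the window lengths and trigger threshold below are illustrative tuning choices, not the USGS's actual parameters.

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Short-Term-Average / Long-Term-Average ratio, here applied to a
    tweet-frequency time series instead of a seismogram. The LTA window
    ends where the STA window begins, so a burst inflates only the STA."""
    x = np.asarray(x, dtype=float)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    ratio = np.zeros_like(x)
    for i in range(n_sta + n_lta - 1, len(x)):
        sta = (csum[i + 1] - csum[i + 1 - n_sta]) / n_sta
        lta = (csum[i + 1 - n_sta] - csum[i + 1 - n_sta - n_lta]) / n_lta
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

# Synthetic tweet counts per minute: quiet background, then a burst.
counts = np.array([5] * 60 + [60, 80, 70] + [5] * 10)
r = sta_lta(counts, n_sta=3, n_lta=30)
detections = np.flatnonzero(r > 4.0)   # the threshold is a tuning choice
```

Raising the threshold trades missed events for fewer false triggers, exactly the tuning trade-off described in the abstract.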

  6. Continuous micro-earthquake catalogue of the central Southern Alps, New Zealand

    Science.gov (United States)

    Michailos, Konstantinos; Townend, John; Savage, Martha; Chamberlain, Calum

    2017-04-01

    The Alpine Fault is one of the most prominent tectonic features of the South Island, New Zealand, and is inferred to be late in its seismic cycle of M 8 earthquakes based on paleoseismological evidence. Despite this, the Alpine Fault displays low levels of contemporary seismic activity, with little documented on-fault seismicity. This low-magnitude seismicity, often below the completeness level of the GeoNet national seismic catalogue, may inform us of changes in fault character along strike and might be used for rupture simulations and hazard planning. Thus, compiling a micro-earthquake catalogue for the Southern Alps prior to an expected major earthquake is of great interest. Areas of low seismic activity, like the central part of the Alpine Fault, require data recorded over a long duration to reveal temporal and spatial seismicity patterns and to provide a better understanding of the processes controlling seismogenesis. The continuity and density of the Southern Alps Microearthquake Borehole Array (SAMBA; deployed in late 2008) allow us to study seismicity in the Southern Alps over a more extended time period than has ever been done previously. Furthermore, by using data from other temporary networks (e.g. WIZARD, ALFA08, DFDP-10) we are able to extend the region covered. To generate a spatially and temporally continuous catalogue of seismicity in New Zealand's central Southern Alps, we used automatic detection together with an automatic phase-picking method for both P- and S-wave arrivals (kPick; Rawles and Thurber, 2015). Using almost 8 years of seismic data, we calculated about 9,000 preliminary earthquake locations. The seismicity is both clustered and scattered, and a previously observed seismic gap between the Wanganui and Whataroa rivers is also identified.

  7. Earthquake Risk Management of Underground Lifelines in the Urban Area of Catania

    International Nuclear Information System (INIS)

    Grasso, S.; Maugeri, M.

    2008-01-01

    Lifelines typically include the following five utility networks: potable water, sewage, natural gas, electric power, telecommunications and transportation systems. The response of lifeline systems such as gas and water networks during a strong earthquake can be conveniently evaluated with the estimated average number of ruptures per km of pipe. These ruptures may be caused by fault rupture crossings, by permanent deformations of the soil mass (landslides, liquefaction), or by transient soil deformations caused by seismic wave propagation. The possible consequences of damaging earthquakes on transportation systems include the reduction or interruption of traffic flow, as well as the impact on emergency response and recovery assistance. A critical element in emergency management is the closure of roads due to fallen obstacles and the debris of collapsed buildings. The earthquake-induced damage to buried pipes is expressed in terms of the repair rate (RR), defined as the number of repairs divided by the pipe length (km) exposed to a particular level of seismic demand; this number is a function of the pipe material (and joint type), the pipe diameter, and the ground shaking level, measured in terms of peak horizontal ground velocity (PGV) or permanent ground displacement (PGD). The development of damage algorithms for buried pipelines is primarily based on empirical evidence, tempered with engineering judgment and sometimes with analytical formulations. For the city of Catania, the present work uses the correlation between RR and peak horizontal ground velocity of the American Lifelines Alliance (ALA, 2001) for the verification of the main buried pipelines. The performance of the main buried distribution networks has been evaluated for the Level I earthquake scenario (January 11, 1693 event, I = XI, M 7.3) and for the Level II earthquake scenario (February 20, 1818 event, I = IX, M 6.2). Seismic damage scenario of main gas pipelines and
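As a sketch of how such a fragility relation is applied, the function below uses the linear RR-PGV wave-propagation form commonly attributed to ALA (2001), RR = K1 × 0.00187 × PGV, with RR in repairs per 1000 ft of pipe and PGV in in/s. The coefficient, its units, and the K1 pipe-material factor are quoted from memory here and should be verified against the ALA report before any real use.

```python
def repair_rate_wave(pgv_in_per_s, k1=1.0):
    """Approximate ALA (2001)-style wave-propagation fragility for buried
    pipe: repairs per 1000 ft of pipe as a linear function of PGV (in/s).
    k1 is the pipe material/joint factor (assumed values, to be checked)."""
    return k1 * 0.00187 * pgv_in_per_s

def expected_repairs(pgv_in_per_s, length_km, k1=1.0):
    """Expected number of repairs for a pipe segment of given length."""
    length_kft = length_km * 1000.0 / 304.8   # 1000 ft = 304.8 m
    return repair_rate_wave(pgv_in_per_s, k1) * length_kft
```

Given a scenario PGV map, applying `expected_repairs` segment by segment yields the network-wide damage scenario described in the abstract.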

  8. Mesoscopic Simulations of Crosslinked Polymer Networks

    Science.gov (United States)

    Megariotis, Grigorios; Vogiatzis, Georgios G.; Schneider, Ludwig; Müller, Marcus; Theodorou, Doros N.

    2016-08-01

    A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn segments. From the thermodynamic point of view, the system is described by a Helmholtz free energy containing contributions from entropic springs between successive beads along a chain, slip-springs representing entanglements between beads on different chains, and non-bonded interactions. The methodology is employed for the calculation of the stress relaxation function from equilibrium simulations of several microseconds, as well as for the prediction of stress-strain curves of crosslinked polymer networks under deformation.
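The Brownian Dynamics part of such a scheme can be illustrated with a minimal overdamped Langevin integrator for a single bead-spring chain. Only the entropic-spring term of the free energy is included; slip-springs and non-bonded interactions are omitted, and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def bd_step(x, dt, k_spring, friction, kT):
    """One Brownian Dynamics (overdamped Langevin) step for a linear
    bead-spring chain: dx = (F/friction) dt + sqrt(2 kT dt / friction) N(0,1).
    Only harmonic (entropic) springs between successive beads act here."""
    f = np.zeros_like(x)
    bond = x[1:] - x[:-1]          # vectors between neighbouring beads
    f[:-1] += k_spring * bond      # pull each bead toward its next neighbour
    f[1:] -= k_spring * bond       # equal and opposite reaction
    noise = rng.standard_normal(x.shape) * np.sqrt(2.0 * kT * dt / friction)
    return x + f / friction * dt + noise

x = np.zeros((10, 3))              # 10 beads, initially collapsed at origin
for _ in range(2000):
    x = bd_step(x, dt=1e-3, k_spring=3.0, friction=1.0, kT=1.0)
```

At equilibrium each bond equilibrates toward a mean-square length of 3 kT / k_spring, which gives a quick sanity check on the integrator.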

  9. Flash sourcing, or rapid detection and characterization of earthquake effects through website traffic analysis

    Directory of Open Access Journals (Sweden)

    Laurent Frobert

    2011-06-01


    This study presents the latest developments of an approach called ‘flash sourcing’, which provides information on the effects of an earthquake within minutes of its occurrence. The information is derived from an analysis of the traffic surges on the European-Mediterranean Seismological Centre website after felt earthquakes. These surges are caused by eyewitnesses to a felt earthquake, who are the first to be informed of, and hence the first concerned by, an earthquake's occurrence. Flash sourcing maps the felt area and, at least in some circumstances, the regions affected by severe damage or network disruption. We illustrate how the flash-sourced information improves and speeds up the delivery of public earthquake information and, beyond seismology, we consider what it can teach us about public responses to experiencing an earthquake. Future developments should improve the description of earthquake effects and potentially contribute to more efficient earthquake response by filling the information gap that follows an earthquake's occurrence.
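A toy version of the surge detection underlying flash sourcing: compare each minute's website hit count to a trailing baseline and flag the first minute that exceeds it by some factor. The window length and threshold factor are illustrative choices, not EMSC's actual parameters.

```python
from collections import deque

def detect_surge(hits_per_min, baseline_window=30, factor=3.0):
    """Return the index of the first minute whose hit count exceeds
    `factor` times the mean of the previous `baseline_window` minutes,
    or None if no surge is found."""
    recent = deque(maxlen=baseline_window)
    for i, h in enumerate(hits_per_min):
        if len(recent) == baseline_window:
            baseline = sum(recent) / baseline_window
            if baseline > 0 and h > factor * baseline:
                return i
        recent.append(h)
    return None

# Synthetic per-minute hit counts: steady traffic, then a felt earthquake.
traffic = [100] * 60 + [900, 1500, 1200] + [150] * 10
onset = detect_surge(traffic)
```

In the real system, the geographic origin of the surging visitors, not just its timing, is what delineates the felt area.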

  10. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  11. Accelerations from the September 5, 2012 (Mw=7.6) Nicoya, Costa Rica Earthquake

    Science.gov (United States)

    Simila, G. W.; Quintero, R.; Burgoa, B.; Mohammadebrahim, E.; Segura, J.

    2013-05-01

    Since 1984, the Seismic Network of the Volcanological and Seismological Observatory of Costa Rica, Universidad Nacional (OVSICORI-UNA) has been recording and registering the seismicity in Costa Rica. Before September 2012, the earthquakes registered by this seismic network in northwestern Costa Rica were moderate to small, except for the Cóbano earthquake of March 25, 1990, 13:23, Mw 7.3, lat. 9.648, long. 84.913, depth 20 km, a subduction earthquake at the entrance of the Gulf of Nicoya that generated peak intensities in the range of MM = VIII near the epicentral area and VI-VII in the Central Valley of Costa Rica. Six years before the installation of the seismic network, OVSICORI-UNA registered two subduction earthquakes in northwestern Costa Rica, specifically on August 23, 1978, at 00:38:32 and 00:50:29, with magnitudes Mw 7.0 (HRVD), Ms 7.0 (ISC) and depths of 58 and 69 km, respectively (EHB Bulletin). On September 5, 2012, at 14:42:02.8 UTC, the OVSICORI-UNA seismic network registered another large subduction earthquake in the Nicoya peninsula, northwestern Costa Rica, located 29 km south of Samara, with a depth of 21 km and magnitude Mw 7.6, lat. 9.6392, long. 85.6167. This earthquake was caused by the subduction of the Cocos plate under the Caribbean plate in northwestern Costa Rica. It was felt throughout the country and also in much of Nicaragua. The instrumental intensity map for the Nicoya earthquake indicates that the earthquake was felt with an intensity of VII-VIII in the Puntarenas and Nicoya Peninsulas, in an area between Liberia, Cañas, Puntarenas, Cabo Blanco, Carrillo, Garza, Sardinal, and Tamarindo in Guanacaste, with the city of Nicoya reporting the maximum intensity of VIII. An intensity of VIII indicates moderate to severe damage, and an intensity of VII indicates moderate damage. According to the National Emergency Commission of Costa Rica, 371 affected communities were reported; most

  12. 3-D Dynamic rupture simulation for the 2016 Kumamoto, Japan, earthquake sequence: Foreshocks and M6 dynamically triggered event

    Science.gov (United States)

    Ando, R.; Aoki, Y.; Uchide, T.; Imanishi, K.; Matsumoto, S.; Nishimura, T.

    2016-12-01

    Several interesting earthquake rupture phenomena were observed in the 2016 Kumamoto, Japan, earthquake sequence. The sequence includes the April 15, 2016, Mw 7.0 mainshock, which was preceded by multiple M6-class foreshocks. The mainshock mainly broke the Futagawa fault segment, striking in the NE-SW direction and extending over 50 km, and it further triggered an M6-class earthquake at a distance of more than 50 km to the northeast (Uchide et al., 2016, submitted), where an active volcano is situated. Compiling the data from seismic analysis and InSAR, we presume this dynamically triggered event occurred on an active fault known as the Yufuin fault (Ando et al., 2016, JPGU general assembly). It is also reported that the coseismic slip was significantly large on a shallow portion of the Futagawa Fault near Aso volcano. Since the seismogenic zone becomes significantly shallower in these two areas, we presume that a geothermal anomaly plays a role, in addition to the elasto-dynamic processes associated with the coseismic rupture. In this study, we conducted a set of fully dynamic simulations of the earthquake rupture process, assuming the inferred 3D fault geometry and the regional stress field obtained from stress tensor inversion. As a result, we show that the dynamic rupture process was mainly controlled by the irregularity of the fault geometry subjected to the gently varying regional stress field. The foreshock ruptures were arrested at the junctions of the branch faults. We also show that the dynamic triggering of the M6-class earthquake occurred along the Yufuin fault segment (located 50 km to the NE) because of a strong transient stress increase of up to a few hundred kPa due to the rupture directivity effect of the M7 event. It is also shown that the geothermal conditions may create a state susceptible to dynamic triggering, considering the plastic shear zone on the down-dip extension of the Yufuin segment, situated in the vicinity of an

  13. Simbrain 3.0: A flexible, visually-oriented neural network simulator.

    Science.gov (United States)

    Tosi, Zachary; Yoshimi, Jeffrey

    2016-11-01

    Simbrain 3.0 is a software package for neural network design and analysis, which emphasizes flexibility (arbitrarily complex networks can be built using a suite of basic components) and a visually rich, intuitive interface. These features support both students and professionals. Students can study all of the major classes of neural networks in a familiar graphical setting, and can easily modify simulations, experimenting with networks and immediately seeing the results of their interventions. With the 3.0 release, Simbrain supports models on the order of thousands of neurons and a million synapses. This allows the same features that support education to support research professionals, who can now use the tool to quickly design, run, and analyze the behavior of large, highly customizable simulations.

  14. Environmentally Friendly Solution to Ground Hazards in Design of Bridges in Earthquake Prone Areas Using Timber Piles

    Science.gov (United States)

    Sadeghi, H.

    2015-12-01

    Bridges are major elements of infrastructure in all societies. Their safety and continued serviceability guarantee transportation and emergency access in urban and rural areas. However, these important structures are subject to earthquake-induced damage to their structures and foundations. The basic approaches to the proper support of foundations are (a) distribution of imposed loads such that the foundation can resist them without excessive settlement or failure; (b) modification of the foundation ground by various available methods; and (c) a combination of (a) and (b). Engineers face the task of designing foundations that meet all safety and serviceability criteria, but when there are numerous environmental and financial constraints, the use of traditional methods sometimes becomes inevitable. This paper describes the application of timber piles to improve ground resistance to liquefaction and to secure the abutments of short- to medium-length bridges in an earthquake- and liquefaction-prone area on Bohol Island, Philippines. The limitations of the common ground improvement methods (i.e., injection, dynamic compaction), owing to either environmental or financial concerns, along with the abundance of timber in the area, led the engineers to use a network of timber piles behind the backwalls of the bridge abutments. The suggested timber pile network is simulated by numerical methods and its safety is examined. The results show that the compaction caused by driving the piles, together with the bearing capacity provided by the timbers, reduces the settlement and lateral movement due to service and earthquake-induced loads.

  15. Modelisation et simulation d'un PON (Passive Optical Network) base ...

    African Journals Online (AJOL)

    English Title: Modeling and simulation of a PON (Passive Optical Network) based on hybrid WDM/TDM technology. English Abstract: This work is part of a design effort for a model combining WDM and TDM multiplexing in a PON (Passive Optical Network)-type optical network, in order to satisfy the high bit ...

  16. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used to characterize earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra of many earthquakes, especially small earthquakes, at once and compare them with each other. The standard model for source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, with the two regimes separated by a corner frequency. The corner frequency has often been converted to a stress drop under the assumption of a circular crack model. However, recent studies have claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016], thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than that studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectrum we now focus on is the falloff rate at high frequencies, which affects the seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at short distances from the sources, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovery and confirmation of deviations from the standard omega-square model, updating the earthquake source spectrum model will help us systematically extract more information on earthquake source processes.
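The omega-square model and simple generalizations of it can be written compactly. The function below uses a common Brune-type form with an adjustable high-frequency falloff exponent n (n = 2 recovers the standard omega-square model); the exact parameterization used in the study above is not specified, so treat this as an illustrative sketch.

```python
import numpy as np

def source_spectrum(f, omega0, fc, n=2.0):
    """Single-corner source displacement spectrum: flat level omega0 below
    the corner frequency fc, falling off as f**-n well above it."""
    return omega0 / (1.0 + (np.asarray(f) / fc) ** n)

f = np.logspace(-1, 2, 300)                 # 0.1-100 Hz
s = source_spectrum(f, omega0=1.0, fc=2.0)  # standard omega-square case
```

Fitting observed spectra with n left free, or with a second corner frequency added, is one way to quantify the deviations from the omega-square model that the abstract describes.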

  17. Housing Damage Following Earthquake

    Science.gov (United States)

    1989-01-01

    An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or can even stick and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions, such as earthquakes, or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grained materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.

  18. The Quake-Catcher Network: Bringing Seismology to Homes and Schools

    Science.gov (United States)

    Lawrence, J. F.; Cochran, E. S.; Christensen, C. M.; Saltzman, J.; Taber, J.; Hubenthal, M.

    2011-12-01

    The Quake-Catcher Network (QCN) is a collaborative initiative for developing the world's largest, low-cost strong-motion seismic network by utilizing sensors in, and attached to, volunteer internet-connected computers. QCN is not only a research tool, but also an educational tool for teaching earthquake science in formal and informal environments. A central mission of the Quake-Catcher Network is to provide scientific educational software and hardware so that K-12 teachers, students, and the general public can better understand and participate in the science of earthquakes and earthquake hazards. With greater understanding, teachers, students, and interested individuals can share their new knowledge, resulting in continued participation in the project and better preparation for earthquakes in their homes, businesses, and communities. The primary educational outreach goals are 1) to present earthquake science and earthquake hazards in a modern and exciting way, and 2) to provide teachers and educators with seismic sensors, interactive software, and educational modules to assist in earthquake education. QCNLive (our interactive educational computer software) displays recent and historic earthquake locations and 3-axis real-time acceleration measurements. This tool is useful for demonstrations and active engagement for all ages, from K-12 to college. QCN provides subsidized sensors at $49 for the general public and $5 for K-12 teachers. With your help, the Quake-Catcher Network can provide a better understanding of earthquakes to a broader audience. Academics are taking QCN to classrooms across the United States and around the world. The next time you visit a K-12 classroom or teach a college class on interpreting seismograms, bring a QCN sensor and QCNLive software with you! To learn how, visit http://qcn.stanford.edu.

  19. Aftershock stress analysis of the April 2015 Mw 7.8 Gorkha earthquake from the NAMASTE project

    Science.gov (United States)

    Pant, M.; Velasco, A. A.; Karplus, M. S.; Patlan, E.; Ghosh, A.; Nabelek, J.; Kuna, V. M.; Sapkota, S. N.; Adhikari, L. B.; Klemperer, S. L.

    2016-12-01

    Continental collision between the Indian plate and the Eurasian plate, converging at 45 mm/yr, has uplifted the northern part of Nepal, forming the Himalaya. Because of this convergence, the region has experienced large, devastating earthquakes, including the 1934 Mw 8.4 Nepal-Bihar earthquake and two recent earthquakes on April 25, 2015 (Mw 7.8, the Gorkha earthquake) and May 12, 2015 (Mw 7.2). These quakes killed thousands of people and caused billions of dollars in property loss. Despite some recent geologic and geophysical studies of this area, many tectonic questions remain unanswered. Shortly after the Gorkha earthquake, we deployed a seismic network, NAMASTE (Nepal Array Measuring Aftershock Seismicity Trailing Earthquake), to study the aftershocks of these two large events. Our network included 45 seismic stations (16 short period, 25 broadband, and 4 strong motion sensors) that spanned the Gorkha rupture area. The deployment extended from south of the Main Frontal Thrust (MFT) to the Main Central Thrust (MCT) region, and it recorded aftershocks for more than ten months, from June 2015 to May 2016. We are leveraging high-precision earthquake locations, measuring and picking P-wave first-motion polarities, to develop a catalog of focal mechanisms for the larger aftershocks. We will use this catalog to relate the seismicity to the stress state of the India-Eurasia plate margin, with the aim of addressing questions regarding the complex fault geometries and future earthquake hazards at this plate margin.

  20. Site Effects Study In Athens (greece) Using The 7th September 1999 Earthquake Aftershock Sequence

    Science.gov (United States)

    Serpetsidaki, A.; Sokos, E.

    On 7 September 1999 at 11:56:50 GMT, an earthquake of Mw = 5.9 occurred near Athens, the capital of Greece. The epicenter was located in the northwest area of Parnitha Mountain, at an 18 km distance from the city centre. This earthquake was one of the most destructive in Greece in modern times. The intensity of the earthquake reached IX in the northwest territories of the city, and it caused the deaths of 143 people and serious structural damage to many buildings. On the 13th of September, the Seismological Laboratory of Patras University installed a seismic network of 30 stations in order to observe the evolution of the aftershock sequence. This temporary seismic network remained in the area of Attika for 50 days and recorded a significant part of the aftershock sequence. In this paper we use the high-quality recordings of this network to investigate the influence of the surface geology on the seismic motion at sites within the epicentral area that suffered the most during this earthquake. We applied the horizontal-to-vertical (H/V) spectral ratio method to both noise and earthquake records, and the results obtained exhibit very good agreement. Finally, we compare the results with the geological conditions of the study area and the damage distribution. Most of the obtained amplification levels were low, with the exception of the site of Ano Liosia, where a significant amount of damage was observed and the results indicate that the earthquake motion was amplified four times. Based on the above, we conclude that the damage in the city of Athens was due to source effects rather than site effects.
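The H/V spectral ratio applied above can be sketched as follows. This is a bare-bones version, one common convention (geometric mean of the two horizontal amplitude spectra over the vertical spectrum) with no smoothing, windowing, or averaging over multiple records, and the synthetic three-component data stand in for real recordings.

```python
import numpy as np

def hv_ratio(north, east, vertical, dt):
    """Horizontal-to-vertical spectral ratio from a three-component record:
    geometric mean of the two horizontal amplitude spectra divided by the
    vertical amplitude spectrum."""
    freqs = np.fft.rfftfreq(len(vertical), dt)
    n = np.abs(np.fft.rfft(north))
    e = np.abs(np.fft.rfft(east))
    v = np.abs(np.fft.rfft(vertical))
    eps = 1e-12                       # guard against division by zero
    return freqs, np.sqrt(n * e) / (v + eps)

# Synthetic 10 s record at 100 Hz: a 2 Hz resonance on the horizontals
# that is absent on the vertical, mimicking a site amplification peak.
t = np.arange(1000) * 0.01
freqs, ratio = hv_ratio(np.sin(2 * np.pi * 2.0 * t),
                        np.sin(2 * np.pi * 2.0 * t),
                        np.sin(2 * np.pi * 5.0 * t), dt=0.01)
```

The frequency of the H/V peak is then read off as the site's fundamental resonance frequency, and its height is taken as a rough proxy for the amplification level.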

  1. RICHTER: A Smartphone Application for Rapid Collection of Geo-Tagged Pictures of Earthquake Damage

    Science.gov (United States)

    Skinnemoen, H.; Bossu, R.; Furuheim, K.; Bjorgo, E.

    2010-12-01

    RICHTER (Rapid geo-Images for Collaborative Help Targeting Earthquake Response) is a smartphone version of a professional application developed to provide high quality geo-tagged image communication over challenging network links, such as satellites and poor mobile links. Developed for Android mobile phones, it allows eyewitnesses to share their pictures of earthquake damage easily and without cost with the Euro-Mediterranean Seismological Centre (EMSC). The goal is to engage citizens in the collection of the most up-to-date visual information on local damage for improved rapid impact assessment. RICHTER integrates the innovative and award winning ASIGN protocol initially developed for satellite communication between cameras / computers / satcom terminals and servers at HQ. ASIGN is a robust and optimal image and video communication management solution for bandwidth-limited communication networks which was developed for use particularly in emergency and disaster situations. Contrary to a simple Multimedia Messaging System (MMS), RICHTER allows access to high definition images with embedded location information. Location is automatically assigned from either the internal GPS, derived from the mobile network (triangulation), or the current Wi-Fi domain, in that order, as this corresponds to the expected positioning accuracy. Pictures are typically compressed to 20-30 KB of data for fast transfer and to avoid network overload. Full size images can be requested by the EMSC either fully automatically or on a case-by-case basis, depending on the user preferences. ASIGN was initially developed in coordination with INMARSAT and the European Space Agency. It was used by the Rapid Mapping Unit of the United Nations, notably for the damage assessment of the January 12, 2010 Haiti earthquake, where more than 700 photos were collected. RICHTER will be freely distributed on the EMSC website to eyewitnesses in the event of significantly damaging earthquakes. The EMSC is the second
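
The positioning fallback described above (GPS first, then mobile-network triangulation, then the Wi-Fi domain, in decreasing order of expected accuracy) can be sketched as a simple priority chain. All names below are illustrative; this is not the actual RICHTER/ASIGN code:

```python
# Hypothetical sketch of the positioning fallback described above: prefer
# GPS, then mobile-network triangulation, then the Wi-Fi domain, matching
# the expected order of positioning accuracy.

def choose_location(gps_fix=None, cell_fix=None, wifi_fix=None):
    """Return (source, (lat, lon)) from the most accurate available fix."""
    for source, fix in (("gps", gps_fix), ("cell", cell_fix), ("wifi", wifi_fix)):
        if fix is not None:
            return source, fix
    return "none", None

# Example: no GPS fix available, fall back to cell triangulation
print(choose_location(cell_fix=(48.85, 2.35)))  # -> ('cell', (48.85, 2.35))
```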

  2. Impacts of Social Network on Therapeutic Community Participation: A Follow-up Survey of Data Gathered after Ya’an Earthquake

    Science.gov (United States)

    LI, Zhichao; CHEN, Yao; SUO, Liming

    2015-01-01

    Abstract Background In recent years, natural disasters and the accompanying health risks have become more frequent, and rehabilitation work has become an important part of government performance. On one hand, social networks play an important role in participants’ therapeutic community participation and physical & mental recovery. On the other hand, therapeutic communities with widespread participation can also contribute to community recovery after disaster. Methods This paper described a field study in an earthquake-stricken area of Ya’an. A set of 3-stage follow-up data was obtained concerning the villagers’ participation in the therapeutic community, social network status, demographic background, and other factors. The hierarchical linear model (HLM) method was used to investigate the effects of social networks on therapeutic community participation. Results First, social networks had significant impacts on the annual changes in therapeutic community participation. Second, there were obvious differences in education between the groups mobilized by self-organization and by local government; however, both exerted their mobilization force through acquaintance networks. Third, villagers’ local cadre networks negatively influenced the activities of self-organized therapeutic communities, while positively influencing government-organized therapeutic activities. Conclusion This paper suggests that relevant government departments need to focus more on the reconstruction and cultivation of villagers’ social networks and social capital in the process of post-disaster recovery. These findings contribute to a better understanding of how social networks influence therapeutic community participation, and of what role local government can play in post-disaster recovery and public health improvement after natural disasters. PMID:26060778


  4. Identification of strong earthquake ground motion by using pattern recognition

    International Nuclear Information System (INIS)

    Suzuki, Kohei; Tozawa, Shoji; Temmyo, Yoshiharu.

    1983-01-01

    Methods for adequately capturing the technological features of the complex waveforms of earthquake ground motion and using them as input to structural systems have been proposed by many researchers, but a method for generating the artificial earthquake waves used in the aseismic design of nuclear facilities has not been established in a unified form. In this research, earthquake ground motion was treated as an irregular process with unsteady amplitude and frequency, and the running power spectral density was expressed as a dark-and-light image on a plane in an orthogonal coordinate system with time and frequency axes. A method of classifying this image into a number of technologically important categories by pattern recognition was proposed. This method is based on the concept called the compound similarity method in image technology, entirely different from voice diagnosis, and it has the feature that the result of identification can be quantitatively evaluated by analysis of the correlation of spatial images. Next, standard pattern models of the simulated running power spectral density corresponding to the representative classification categories were proposed. Finally, a method of generating unsteady simulated earthquake motion was shown. (Kako, I.)
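
The running power spectral density that the abstract treats as a time-frequency "image" is, at its simplest, a short-time Fourier analysis: slide a window along the record and take the squared spectral amplitude in each window. A minimal NumPy sketch of that construction (my illustration, not the authors' method):

```python
import numpy as np

def running_psd(x, dt, win_len, hop):
    """Short-time power spectral density: rows = time windows, cols = frequency.

    A minimal version of the time-frequency "image" described above: slide
    a tapered window over the record and take |FFT|^2 in each window.
    """
    win = np.hanning(win_len)
    n_win = 1 + (len(x) - win_len) // hop
    freqs = np.fft.rfftfreq(win_len, d=dt)
    psd = np.empty((n_win, len(freqs)))
    for i in range(n_win):
        seg = x[i * hop : i * hop + win_len] * win
        psd[i] = np.abs(np.fft.rfft(seg)) ** 2
    return freqs, psd

# Chirp-like test signal: low frequency early, higher frequency late, so
# the spectral peak should migrate upward across the windows
dt = 0.01
t = np.arange(0, 20, dt)
x = np.sin(2 * np.pi * (1.0 + 0.2 * t) * t)
f, P = running_psd(x, dt, win_len=256, hop=128)
```

Classification by pattern recognition, as in the paper, would then operate on the rows/columns of `P` rather than on the raw waveform.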

  5. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and for the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146% and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.
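
The parameter-estimation idea above, training a network to map observed signals to plant parameters, reduces to nonlinear regression. The following is a self-contained toy sketch (my illustration, not the authors' model or data): a one-hidden-layer network trained by plain gradient descent to fit a stand-in nonlinear response curve.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1, 1, 200).reshape(-1, 1)   # stand-in operating parameter
y = np.sin(np.pi * X)                        # stand-in plant response

H = 16                                       # hidden units
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred = forward(X)
loss0 = np.mean((pred - y) ** 2)             # error before training

for _ in range(5000):
    h, pred = forward(X)
    err = pred - y                           # dLoss/dpred (up to a factor)
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)              # error after training
```

The study's network would instead take transient signals as input and emit power, reactivity, or steam generator parameters, but the training loop has the same shape.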


  7. Waveforms through the subducted plate under the Tokyo region in Japan observed by an ultra-dense seismic network (MeSO-net) and seismic activity around mega-thrust earthquake areas

    Science.gov (United States)

    Sakai, S.; Kasahara, K.; Nanjo, K.; Nakagawa, S.; Tsuruoka, H.; Morita, Y.; Kato, A.; Iidaka, T.; Hirata, N.; Tanada, T.; Obara, K.; Sekine, S.; Kurashimo, E.

    2009-12-01

    In central Japan, the Philippine Sea plate (PSP) subducts beneath the Tokyo metropolitan area, the Kanto region, where it causes mega-thrust earthquakes such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9), which caused 105,000 fatalities. A M7 or greater earthquake in this region at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that the next great earthquake will cause 11,000 fatalities and 112 trillion yen (1 trillion US$) of economic loss. This great earthquake is evaluated by the Earthquake Research Committee of Japan to have a 70% probability of occurring within 30 years. We started the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan area (2007-2012). Under this project, construction of the Metropolitan Seismic Observation network (MeSO-net), which will consist of about 400 observation sites, was started [Kasahara et al., 2008; Nakagawa et al., 2008]. We now have 178 observation sites. Waveform correlation between stations is high because observation points are deployed at about 2 km intervals, so later phases can be identified easily even though artificial noise is very large. We also discuss the relation between deformation of the PSP and intra-plate M7+ earthquakes: the PSP is subducting beneath the Honshu arc and also colliding with the Pacific plate, and the subduction and collision both contribute to active seismicity in the Kanto region. We present a high-resolution tomographic image showing a low-velocity zone that suggests a possible internal failure of the plate, a source region of M7+ intra-plate earthquakes. Our study will contribute to a new assessment of the seismic hazard in the metropolitan area of Japan. Acknowledgement: This study was supported by the Earthquake Research Institute cooperative research program.
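
The waveform similarity exploited by a 2-km-spaced network is usually quantified with a lag-resolved, normalized cross-correlation between station pairs. A minimal sketch of that measure (my illustration, not the MeSO-net processing), including recovery of a known inter-station delay:

```python
import numpy as np

def normalized_xcorr(a, b):
    """Normalized cross-correlation of two equal-length waveforms.

    Returns (lags, cc), where cc peaks near 1 at the lag that best aligns
    the two records (NumPy's correlate convention: c[k] = sum a[n+k]*b[n]).
    """
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    cc = np.correlate(a, b, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    return lags, cc

# Two "stations" record the same wavelet, the second delayed by 5 samples
t = np.linspace(0, 1, 200)
w = np.exp(-((t - 0.5) ** 2) / 0.005) * np.sin(2 * np.pi * 10 * t)
sta1 = w
sta2 = np.roll(w, 5)
lags, cc = normalized_xcorr(sta1, sta2)
delay = -lags[np.argmax(cc)]   # samples by which sta2 lags sta1
```

Such correlation peaks and relative delays are the raw material for phase identification and relative relocation on dense arrays.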

  8. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah; Ross, Robert; Carns, Philip

    2016-05-15

    As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity, flit-level Slim Fly model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Slim Fly model results of Kathareios et al. at moderately sized network scales. We further scale the model up to an unprecedented 1 million compute nodes; through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.
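
At the core of frameworks like ROSS is a discrete-event loop: a priority queue of timestamped events processed in order, updating model state. A tiny sequential sketch of that loop (my illustration, far simpler than a flit-level model), here modeling packets traversing two queued links with a fixed service time:

```python
import heapq

def simulate(packet_times, service=1.0, hops=2):
    """Return completion times for packets injected at packet_times."""
    # Event = (time, packet id, hop index); heapq pops the earliest first
    events = [(t, i, 0) for i, t in enumerate(packet_times)]
    heapq.heapify(events)
    free_at = [0.0] * hops              # earliest time each link is free
    done = {}
    while events:
        t, pkt, hop = heapq.heappop(events)
        start = max(t, free_at[hop])    # wait if the link is busy
        finish = start + service
        free_at[hop] = finish
        if hop + 1 < hops:
            heapq.heappush(events, (finish, pkt, hop + 1))
        else:
            done[pkt] = finish
    return [done[i] for i in range(len(packet_times))]

print(simulate([0.0, 0.0, 0.5]))  # -> [2.0, 3.0, 4.0]
```

Parallel discrete-event simulators such as ROSS distribute this queue across MPI ranks and add optimistic rollback, which is what makes million-node models tractable.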

  9. Earthquake Early Warning: A Prospective User's Perspective (Invited)

    Science.gov (United States)

    Nishenko, S. P.; Savage, W. U.; Johnson, T.

    2009-12-01

    With more than 25 million people at risk from high hazard faults in California alone, Earthquake Early Warning (EEW) presents a promising public safety and emergency response tool. EEW represents the real-time end of an earthquake information spectrum which also includes near real-time notifications of earthquake location, magnitude, and shaking levels; as well as geographic information system (GIS)-based products for compiling and visually displaying processed earthquake data such as ShakeMap and ShakeCast. Improvements to and increased multi-national implementation of EEW have stimulated interest in how such information products could be used in the future. Lifeline organizations, consisting of utilities and transportation systems, can use both onsite and regional EEW information as part of their risk management and public safety programs. Regional EEW information can provide improved situational awareness to system operators before automatic system protection devices activate, and allow trained personnel to take precautionary measures. On-site EEW is used for earthquake-actuated automatic gas shutoff valves, triggered garage door openers at fire stations, system controls, etc. While there is no public policy framework for preemptive, precautionary electricity or gas service shutdowns by utilities in the United States, gas shut-off devices are being required at the building owner level by some local governments. In the transportation sector, high-speed rail systems have already demonstrated the ‘proof of concept’ for EEW in several countries, and more EEW systems are being installed. Recently the Bay Area Rapid Transit District (BART) began collaborating with the California Integrated Seismic Network (CISN) and others to assess the potential benefits of EEW technology to mass transit operations and emergency response in the San Francisco Bay region. A key issue in this assessment is that significant earthquakes are likely to occur close to or within the BART

  10. Earthquake ground-motion in presence of source and medium heterogeneities

    KAUST Repository

    Vyas, Jagdish Chandra

    2017-01-01

    Ground-motion variability associated with unilateral ruptures, based on ground-motion simulations of the MW 7.3 1992 Landers earthquake, eight simplified source models, and a MW 7.8 rupture simulation (ShakeOut) for the San Andreas fault. Our numerical modeling reveals

  11. Simulation of nonlinear random vibrations using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Paez, T.L.; Tucker, S.; O'Gorman, C.

    1997-02-01

    The simulation of mechanical system random vibrations is important in structural dynamics, but it is particularly difficult when the system under consideration is nonlinear. Artificial neural networks provide a useful tool for the modeling of nonlinear systems; however, such modeling may be inefficient or insufficiently accurate when the system under consideration is complex. This paper shows that there are several transformations that can be used to uncouple and simplify the components of motion of a complex nonlinear system, thereby making its modeling and random vibration simulation, via component modeling with artificial neural networks, a much simpler problem. A numerical example is presented.
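
For the linear part of a structure, the classic uncoupling transformation is modal: a generalized eigen-decomposition turns the coupled equations of motion into independent single-degree-of-freedom oscillators, each of which can then be modeled (or emulated by a separate network) on its own. A sketch for an illustrative 2-DOF mass-spring chain (values are mine, not from the paper):

```python
import numpy as np

M = np.diag([2.0, 1.0])                 # masses
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])         # stiffness of a two-spring chain

# Symmetrize with M^(-1/2) so a standard symmetric eigensolver applies:
# (M^-1/2 K M^-1/2) v = w^2 v
Mi = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, V = np.linalg.eigh(Mi @ K @ Mi)     # w2 = squared natural frequencies
Phi = Mi @ V                            # mode shapes

# In modal coordinates the equations of motion decouple:
#   Phi.T @ M @ Phi = I   and   Phi.T @ K @ Phi = diag(w2)
```

For this example the squared natural frequencies come out to 50 and 200 (rad/s)^2, and the orthogonality relations above can be checked numerically.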

  12. Examination of earthquake Ground Motion in the deep underground environment of Japan

    International Nuclear Information System (INIS)

    Goto, J.; Tsuchi, H.; Mashimo, M.

    2009-01-01

    Among the possible impacts of earthquakes on the geological disposal system, ground motion is not included in the criteria for selecting a candidate repository site because, in general, ground motion deep underground is considered to be smaller than at the surface. Also, after backfilling/closure, the repository moves together with the surrounding rock. We have carried out a detailed examination of earthquake ground motion deep underground, using extensive data from recent observation networks, to support the above assumption. As a result, it has been reconfirmed that earthquake ground motion deep underground is generally smaller than at the surface. Through detailed analysis of the data, we have identified the following important parameters for evaluating earthquake ground motion deep underground: the depth and velocity distribution of the rock formations of interest, the intensity of the short-period component of earthquakes, and the incidence angle of seismic waves at the rock formations. (authors)

  13. Mass-spring model used to simulate the sloshing of fluid in the container under the earthquake

    International Nuclear Information System (INIS)

    Wen Jing; Luan Lin; Gao Xiaoan; Wang Wei; Lu Daogang; Zhang Shuangwang

    2005-01-01

    A lumped-mass spring model for simulating the sloshing of liquid in a container under earthquake loading is given in ASCE 4-86. In this paper, a new mass-spring model is developed within a 3D finite element model instead of the beam model. The stresses corresponding to the sloshing mass can be obtained directly, which avoids constructing a beam model. This paper presents the 3-D mass-spring model for the total overturning moment, together with an example of the model. Moreover, mass-spring models for the overturning moments on the sides and on the bottom of the container are constructed respectively. (authors)
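
Once the sloshing ("convective") liquid is lumped into a mass on a spring, its response to a ground acceleration record is just a base-excited single-degree-of-freedom oscillator. A hedged sketch of that step (illustrative parameter values, not taken from ASCE 4-86), time-stepped with semi-implicit Euler:

```python
import numpy as np

def sloshing_response(ag, dt, m, k, zeta=0.005):
    """Relative displacement of the sloshing mass under base excitation.

    Solves m x'' + c x' + k x = -m ag(t) with semi-implicit Euler; sloshing
    modes are very lightly damped, hence the small default damping ratio.
    """
    c = 2.0 * zeta * np.sqrt(k * m)
    x, v = 0.0, 0.0
    out = np.empty(len(ag))
    for i, a in enumerate(ag):
        acc = (-m * a - c * v - k * x) / m
        v += acc * dt               # update velocity first ...
        x += v * dt                 # ... then displacement (semi-implicit)
        out[i] = x
    return out

# Base motion at the sloshing frequency makes the displacement grow,
# the resonance that makes the convective mode matter in design
m, k = 1000.0, 250.0                # kg, N/m  ->  wn = 0.5 rad/s
wn = np.sqrt(k / m)
t = np.arange(0, 60, 0.01)
ag = 0.1 * np.sin(wn * t)           # m/s^2 ground acceleration
x = sloshing_response(ag, 0.01, m, k)
```

The spring force `k * x` then gives the wall pressure resultant attributed to the convective mass in the overturning-moment models discussed above.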

  14. Earthquake Monitoring with the MyShake Global Smartphone Seismic Network

    Science.gov (United States)

    Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.

    2017-12-01

    Smartphone arrays have the potential to significantly improve seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M=3 event with a single phone [...]; events located with 20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km2 are required to accurately locate M=2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
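
One simple way to reason about how many low-quality sensors compensate for a weak per-phone detection probability is a binomial tail. This is my illustrative model, not MyShake's actual detection algorithm, and it assumes independent phones:

```python
from math import comb

def p_network_detect(n, p, k):
    """P(at least k of n phones detect), under an independent binomial model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Even a modest per-phone probability yields high network-level confidence
# once many phones are in range:
print(round(p_network_detect(100, 0.2, 5), 3))  # -> 1.0
```

Real phone triggers are spatially correlated (shared noise sources, clustered users), so this independence assumption is optimistic; it only illustrates why density matters.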

  15. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    Science.gov (United States)

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different representative engineering problems, atomic dry friction, moving-front problems, and elastic solid mechanics, are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For the different parameter values that influence the solution, each problem is numerically solved by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no assumptions are made as regards linearization of the variables. The design of the models, which run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the models.
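
The network method's core idea is the electrical analogy: a transport problem is mapped onto a resistor-capacitor network and solved by circuit-simulation machinery. A minimal hand-rolled sketch of that analogy (my illustration, not the authors' models): 1-D transient conduction as an RC ladder, with thermal resistance R between nodes and capacitance C at each node, time-stepped explicitly.

```python
import numpy as np

def rc_ladder(v0, v_left, R, C, dt, steps):
    """Explicit time stepping of RC-ladder node voltages (= temperatures).

    Left boundary held at v_left (Dirichlet); right end insulated (Neumann).
    """
    v = np.array(v0, dtype=float)
    for _ in range(steps):
        # Net current into each node from its two neighbours
        left = np.concatenate(([v_left], v[:-1]))
        right = np.concatenate((v[1:], [v[-1]]))    # mirror = insulated end
        i_in = (left - v) / R + (right - v) / R
        v += dt * i_in / C                          # C dv/dt = i_in
    return v

# Long run: the whole ladder equilibrates to the boundary value
v = rc_ladder([0.0, 0.0, 0.0], v_left=1.0, R=1.0, C=1.0, dt=0.1, steps=2000)
```

A SPICE-like simulator does the same thing with adaptive implicit integration, which is why the network method can lean on standard circuit software instead of a custom PDE solver.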

  16. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
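
The kind of model Brian lets users state as a one-line differential equation can be written out by hand to see what the simulator automates. Below is a leaky integrate-and-fire neuron, dv/dt = (v_rest - v + R·I)/tau with a threshold/reset rule, in plain NumPy; this is an illustration of the model class, not Brian's actual API:

```python
import numpy as np

def lif(I, dt=1e-4, tau=0.02, v_rest=-0.07, v_th=-0.05, v_reset=-0.07, R=1e8):
    """Simulate a leaky integrate-and-fire neuron; return spike times (s).

    I : array of input currents (A), one value per time step of length dt.
    """
    v = v_rest
    spikes = []
    for i, current in enumerate(I):
        dv = (v_rest - v + R * current) / tau   # membrane equation
        v += dv * dt                            # forward Euler step
        if v >= v_th:                           # threshold crossed:
            spikes.append(i * dt)               # record a spike ...
            v = v_reset                         # ... and reset
    return spikes

# A constant 0.3 nA input drives regular, periodic spiking
I = np.full(10000, 0.3e-9)      # 1 s of input at dt = 0.1 ms
spikes = lif(I)
```

In Brian the same model is a `NeuronGroup` defined by the equation string, with the integration loop, thresholding, and vectorisation over neurons handled by the library.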


  18. Tsunami simulations of mega-thrust earthquakes in the Nankai–Tonankai Trough (Japan) based on stochastic rupture scenarios

    KAUST Repository

    Goda, Katsuichiro; Yasuda, Tomohiro; Mai, Paul Martin; Maruyama, Takuma; Mori, Nobuhito

    2017-01-01

    In this study, earthquake rupture models for future mega-thrust earthquakes in the Nankai–Tonankai subduction zone are developed by incorporating the main characteristics of inverted source models of the 2011 Tohoku earthquake. These scenario

  19. Deeper penetration of large earthquakes on seismically quiescent faults.

    Science.gov (United States)

    Jiang, Junle; Lapusta, Nadia

    2016-06-10

    Why many major strike-slip faults known to have had large earthquakes are silent in the interseismic period is a long-standing enigma. One would expect small earthquakes to occur at least at the bottom of the seismogenic zone, where deeper aseismic deformation concentrates loading. We suggest that the absence of such concentrated microseismicity indicates deep rupture past the seismogenic zone in previous large earthquakes. We support this conclusion with numerical simulations of fault behavior and observations of recent major events. Our modeling implies that the 1857 Fort Tejon earthquake on the San Andreas Fault in Southern California penetrated below the seismogenic zone by at least 3 to 5 kilometers. Our findings suggest that such deeper ruptures may occur on other major fault segments, potentially increasing the associated seismic hazard. Copyright © 2016, American Association for the Advancement of Science.

  20. A Crowdsourcing-based Taiwan Scientific Earthquake Reporting System

    Science.gov (United States)

    Liang, W. T.; Lee, J. C.; Lee, C. F.

    2017-12-01

    To immediately collect field observations of earthquake-induced ground damage, such as surface fault rupture, landslides, rock falls, liquefaction, and landslide-triggered dams or lakes, we are developing an earthquake damage reporting system that relies particularly on school teachers as volunteers who have taken a series of training courses organized by this project. This Taiwan Scientific Earthquake Reporting (TSER) system is based on the Ushahidi mapping platform, which has been widely used for crowdsourcing for different purposes. Participants may add an app-like icon for mobile devices from this website at https://ies-tser.iis.sinica.edu.tw. Right after a potentially damaging earthquake occurs in the Taiwan area, trained volunteers will be notified/dispatched to the source area to carry out field surveys and describe the ground damage through this system. If the internet is available, they may also upload relevant images from the field right away. The collected information will be shared with the public after quick screening by the on-duty scientists. To prepare for the next strong earthquake, we set up a specific project on TSER for sharing spectacular/remarkable geologic features wherever possible. This is to help volunteers get used to the system and share any teachable material on this platform. This experimental, science-oriented crowdsourcing system was launched early this year. Together with a DYFI-like intensity reporting system, the Taiwan Quake-Catcher Network, and some online games and teaching materials, citizen seismology has improved substantially in Taiwan over the last decade. All these products are now operated or promoted by the Taiwan Earthquake Research Center (TEC). With these newly developed platforms and materials, we aim not only to raise earthquake awareness and preparedness but also to encourage public participation in earthquake science in Taiwan.