WorldWideScience

Sample records for earthquake simulation network

  1. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures that generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms, together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response, have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids, as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to vary in time or to remain constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra.
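The superposition model described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the actual SIMQKE implementation: the frequency band, unit amplitudes, and the trapezoidal intensity envelope are all arbitrary choices for demonstration.

```python
import numpy as np

def synthesize_motion(freqs, amps, duration=20.0, dt=0.01, seed=0):
    """Superpose sinusoids with random phase angles, then shape the
    stationary sum with a deterministic intensity envelope."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    motion = np.zeros_like(t)
    for f, a, p in zip(freqs, amps, phases):
        motion += a * np.sin(2.0 * np.pi * f * t + p)
    # illustrative envelope: rise over the first 15% of the record,
    # decay over the last 30%
    envelope = np.minimum(1.0, t / (0.15 * duration))
    envelope *= np.minimum(1.0, (duration - t) / (0.30 * duration))
    return t, envelope * motion

t, acc = synthesize_motion(np.linspace(0.5, 25.0, 50), np.ones(50))
```

In a spectrum-compatible scheme the amplitudes would then be iteratively rescaled until the response spectrum of the synthetic motion matches the target design spectrum.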

  2. Centrality in earthquake multiplex networks

    Science.gov (United States)

    Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.

    2018-06-01

Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes, and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
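The single-layer building block of such an analysis can be sketched as follows: epicenters are mapped to square cells, the cells of consecutive events are linked to form one layer, and nodes are scored by eigenvector centrality. The event sequence, cell labels, and power-iteration details below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def layer_adjacency(cells, nodes):
    """One temporal layer: link the cells of consecutive events."""
    idx = {n: i for i, n in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for a, b in zip(cells, cells[1:]):
        A[idx[a], idx[b]] += 1.0
        A[idx[b], idx[a]] += 1.0
    return A

def eigenvector_centrality(A, iters=200):
    """Power iteration on A + I (the identity shift guarantees
    convergence even on bipartite layers)."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x + x
        x /= np.linalg.norm(x)
    return x

# epicenters already mapped to square cells (illustrative sequence)
cells = [(0, 0), (0, 1), (0, 0), (1, 1), (0, 0)]
nodes = sorted(set(cells))
c = eigenvector_centrality(layer_adjacency(cells, nodes))
```

In the multiplex setting, one such layer is built per time window, and the centrality of a node is tracked across layers to follow the evolution of seismic activity.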

  3. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  4. Spatial Evaluation and Verification of Earthquake Simulators

    Science.gov (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against current observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a decaying rate with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
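The power-law smoothing idea can be illustrated with a toy rate map. The kernel form and parameter values below are assumptions for illustration, not the paper's calibrated ETAS parameters.

```python
import numpy as np

def powerlaw_rate_map(event_xy, grid_xy, d0=5.0, q=1.5):
    """Distribute each simulated event's seismicity over all grid cells,
    decaying with epicentral distance r as (1 + r/d0)**(-q)."""
    rates = np.zeros(len(grid_xy))
    for ex, ey in event_xy:
        r = np.hypot(grid_xy[:, 0] - ex, grid_xy[:, 1] - ey)
        rates += (1.0 + r / d0) ** (-q)
    return rates / rates.sum()        # normalized forecast rate map

# toy 10 km grid and two simulated epicenters (kilometre coordinates)
xs, ys = np.meshgrid(np.arange(0, 100, 10.0), np.arange(0, 100, 10.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])
rates = powerlaw_rate_map([(20.0, 20.0), (70.0, 60.0)], grid)
```

A receiver operating characteristic curve is then obtained by thresholding the rate map at successive levels and comparing the flagged cells against the cells containing observed epicenters.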

  5. Toward real-time regional earthquake simulation of Taiwan earthquakes

    Science.gov (United States)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  6. Earthquake evaluation of a substation network

    International Nuclear Information System (INIS)

    Matsuda, E.N.; Savage, W.U.; Williams, K.K.; Laguens, G.C.

    1991-01-01

The impact of the occurrence of a large, damaging earthquake on a regional electric power system is a function of the geographical distribution of strong shaking, the vulnerability of various types of electric equipment located within the affected region, and operational resources available to maintain or restore electric system functionality. Experience from numerous worldwide earthquake occurrences has shown that seismic damage to high-voltage substation equipment is typically the reason for post-earthquake loss of electric service. In this paper, the authors develop and apply a methodology to analyze earthquake impacts on Pacific Gas and Electric Company's (PG and E's) high-voltage electric substation network in central and northern California. The authors' objectives are to identify and prioritize ways to reduce the potential impact of future earthquakes on our electric system, refine PG and E's earthquake preparedness and response plans to be more realistic, and optimize seismic criteria for future equipment purchases for the electric system.

  7. Metrics for comparing dynamic earthquake rupture simulations

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2014-01-01

Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools that is available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.

  8. Preferential attachment in evolutionary earthquake networks

    Science.gov (United States)

    Rezaei, Soghra; Moghaddasi, Hanieh; Darooneh, Amir Hossein

    2018-04-01

Earthquakes as spatio-temporal complex systems have been recently studied using complex network theory. Seismic networks are dynamical networks due to the addition of new seismic events over time, leading to the establishment of new nodes and links in the network. Here we have constructed Iran and Italy seismic networks based on the Hybrid Model and tested the preferential-attachment hypothesis for the connection of new nodes, which states that it is more probable for newly added nodes to join the highly connected nodes compared to the less connected ones. We showed that preferential attachment is present in the case of earthquake networks and that the attachment rate has a linear relationship with node degree. We have also found the seismic passive points, the points most likely to be influenced by other seismic places, using their preferential attachment values.
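The attachment-rate test can be sketched empirically: each time a node receives a new link, record its degree at that moment and normalize by how many nodes held that degree. The construction below is a generic kernel estimate over a time-ordered node sequence, not the paper's Hybrid Model.

```python
from collections import Counter

def attachment_kernel(sequence):
    """Estimate the attachment kernel pi(k): each time a node receives a
    new link, record its current degree k, and normalize by how many
    existing nodes held degree k at that moment."""
    degree = Counter()
    hits, avail = Counter(), Counter()
    for a, b in zip(sequence, sequence[1:]):
        for k, n in Counter(degree.values()).items():
            avail[k] += n
        hits[degree[b]] += 1   # degree of the target just before linking
        degree[a] += 1
        degree[b] += 1
    return {k: hits[k] / avail[k] for k in hits if avail[k] > 0}

# hub-dominated toy sequence: node A keeps receiving links
kernel = attachment_kernel(list("ABACADAE"))
```

Under linear preferential attachment, a plot of the estimated pi(k) against k should be approximately a straight line, which is the relationship the paper reports.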

  9. Earthquake correlations and networks: A comparative study

    Science.gov (United States)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-04-01

We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
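A metric of the Baiesi–Paczuski kind can be sketched as follows. The parameter values (Gutenberg–Richter b, fractal dimension d_f, rate constant c) are illustrative placeholders, and a small value of the metric flags a strongly correlated pair.

```python
import numpy as np

def bp_metric(dt_days, r_km, m_i, b=1.0, d_f=1.6, c=1.0):
    """Space-time-magnitude metric in the spirit of Baiesi and Paczuski:
    the expected number of events of magnitude >= m_i within the
    spatio-temporal window separating events i and j. A small value
    means the pair is unlikely to occur by chance, i.e. strongly
    correlated."""
    if dt_days <= 0:
        return np.inf              # consider only causally ordered pairs
    return c * dt_days * r_km ** d_f * 10.0 ** (-b * m_i)

# an aftershock close in space and time to a large shock is far more
# correlated (smaller metric) than a distant, delayed pair
n_close = bp_metric(dt_days=0.5, r_km=5.0, m_i=6.0)
n_far = bp_metric(dt_days=100.0, r_km=200.0, m_i=6.0)
```

Thresholding the inverse of this metric over all causally ordered pairs yields the links of the earthquake network analyzed in the paper.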

  10. Earthquake correlations and networks: A comparative study

    International Nuclear Information System (INIS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-01-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.

  11. Network similarity and statistical analysis of earthquake seismic data

    OpenAIRE

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...

  12. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    Science.gov (United States)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 min after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  13. Packet Tracer network simulator

    CERN Document Server

    Jesin, A

    2014-01-01

    A practical, fast-paced guide that gives you all the information you need to successfully create networks and simulate them using Packet Tracer.Packet Tracer Network Simulator is aimed at students, instructors, and network administrators who wish to use this simulator to learn how to perform networking instead of investing in expensive, specialized hardware. This book assumes that you have a good amount of Cisco networking knowledge, and it will focus more on Packet Tracer rather than networking.

  14. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modeling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  15. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (USA). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  16. Interactive visualization to advance earthquake simulation

    Science.gov (United States)

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhaueser 2008.

  17. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions leading to safer, more efficient, and economical structures in earthquake-prone regions.

  18. CAISSON: Interconnect Network Simulator

    Science.gov (United States)

    Springer, Paul L.

    2006-01-01

    Cray response to HPCS initiative. Model future petaflop computer interconnect. Parallel discrete event simulation techniques for large scale network simulation. Built on WarpIV engine. Run on laptop and Altix 3000. Can be sized up to 1000 simulated nodes per host node. Good parallel scaling characteristics. Flexible: multiple injectors, arbitration strategies, queue iterators, network topologies.

  19. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    Science.gov (United States)

    Pasten, D.

    2017-12-01

Earthquake complex networks have shown that they are able to find specific features in seismic data sets. In space, these networks have shown a scale-free behavior for the probability distribution of connectivity in directed networks, and a small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large earthquake Mw 8.2 in the north of Chile (near Iquique) in April, 2014. An earthquake complex network is made by dividing the three-dimensional space into cubic cells; if one of these cells contains a hypocenter, we name this cell a node. The connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. Now, we have two different networks: a directed and an undirected network. The directed network takes into consideration the time-direction of the connections, which is very important for the connectivity of the network: we consider the connectivity ki of the i-th node as the number of connections going out of node i plus the self-connections (if two seismic events occurred successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network. For undirected networks, we consider only whether two nodes are connected or not. We have built a directed complex network and an undirected complex network, before and after the large earthquake in Iquique. We have used magnitudes greater than Mw = 1.0 and Mw = 3.0. We found that this method can recognize the influence of these small seismic events in the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake in this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
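The cubic-cell construction described in this abstract can be sketched directly; the cell size and the event coordinates below are illustrative.

```python
import numpy as np
from collections import defaultdict

def build_networks(hypocenters, cell=10.0):
    """Divide 3-D space into cubic cells, link the cells of successive
    events (directed, weighted, self-connections kept), then derive the
    undirected network by dropping directions and self-connections."""
    def node(p):
        return tuple(int(np.floor(c / cell)) for c in p)
    directed = defaultdict(int)
    for p, q in zip(hypocenters, hypocenters[1:]):
        directed[(node(p), node(q))] += 1
    k = defaultdict(int)            # out-degree plus self-connections
    for (a, b), w in directed.items():
        k[a] += w
    undirected = {frozenset(e) for e in directed if e[0] != e[1]}
    return dict(directed), undirected, dict(k)

# toy catalog: (x, y, z) hypocenters in kilometres, time-ordered
events = [(1, 2, 3), (4, 5, 6), (41, 2, 3), (42, 3, 4), (1, 1, 1)]
d, u, k = build_networks(events)
```

Varying `cell` changes both networks, which mirrors the paper's observation that cell size is an important factor in what the analysis can resolve.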

  20. Evaluation and optimization of seismic networks and algorithms for earthquake early warning – the case of Istanbul (Turkey)

    OpenAIRE

    Oth, Adrien; Böse, Maren; Wenzel, Friedemann; Köhler, Nina; Erdik, Mustafa

    2010-01-01

    Earthquake early warning (EEW) systems should provide reliable warnings as quickly as possible with a minimum number of false and missed alarms. Using the example of the megacity Istanbul and based on a set of simulated scenario earthquakes, we present a novel approach for evaluating and optimizing seismic networks for EEW, in particular in regions with a scarce number of instrumentally recorded earthquakes. We show that, while the current station locations of the existing Istanbul EEW system...

  1. Message network simulation

    OpenAIRE

    Shih, Kuo-Tung

    1990-01-01

    Approved for public release, distribution is unlimited This thesis presents a computer simulation of a multinode data communication network using a virtual network model to determine the effects of various system parameters on overall network performance. Lieutenant Commander, Republic of China (Taiwan) Navy

  2. Social Media as Seismic Networks for the Earthquake Damage Assessment

    Science.gov (United States)

    Meletti, C.; Cresci, S.; La Polla, M. N.; Marchetti, A.; Tesconi, M.

    2014-12-01

The growing popularity of online platforms, based on user-generated content, is gradually creating a digital world that mirrors the physical world. In the paradigm of crowdsensing, the crowd becomes a distributed network of sensors that allows us to understand real life events at a quasi-real-time rate. The SoS-Social Sensing project [http://socialsensing.it/] exploits opportunistic crowdsensing, involving users in the sensing process in a minimal way, for social media emergency management purposes in order to obtain a very fast, but still reliable, detection of the dimension of the emergency to be faced. First of all we designed and implemented a decision support system for the detection and the damage assessment of earthquakes. Our system exploits the messages shared in real-time on Twitter. In the detection phase, data mining and natural language processing techniques are firstly adopted to select meaningful and comprehensive sets of tweets. Then we applied a burst detection algorithm in order to promptly identify outbreaking seismic events. Using georeferenced tweets and reported locality names, a rough epicentral determination is also possible. The results, compared to Italian INGV official reports, show that the system is able to detect, within seconds, events of a magnitude in the region of 3.5 with a precision of 75% and a recall of 81.82%. We then focused our attention on the damage assessment phase. We investigated the possibility to exploit social media data to estimate earthquake intensity. We designed a set of predictive linear models and evaluated their ability to map the intensity of worldwide earthquakes. The models build on a dataset of almost 5 million tweets exploited to compute our earthquake features, and more than 7,000 globally distributed earthquakes data, acquired in a semi-automatic way from USGS, serving as ground truth. We extracted 45 distinct features falling into four categories: profile, tweet, time and linguistic. We run diagnostic tests and
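A minimal burst detector of the kind described can be sketched with a sliding time window over message timestamps; the fixed-rate baseline and the thresholds below are illustrative assumptions, not the project's actual algorithm.

```python
def detect_bursts(timestamps, window=60.0, baseline_rate=0.1, factor=5.0):
    """Flag times where the count of messages in the trailing window
    exceeds `factor` times the expected baseline count."""
    expected = baseline_rate * window      # messages expected per window
    timestamps = sorted(timestamps)
    bursts, start = [], 0
    for i, t in enumerate(timestamps):
        while timestamps[start] < t - window:
            start += 1                     # drop messages outside window
        if i - start + 1 > factor * expected:
            bursts.append(t)
    return bursts
```

On a quiet stream the trailing count stays near the baseline; a seismic event produces a sudden cluster of messages that trips the threshold within seconds of the cluster forming.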

  3. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu; Duan, Benchun; Taylor, Valerie

    2011-01-01

    , such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over past several decades. In particular

  4. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.
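The principal-component trigger idea can be sketched as follows; the thresholding scheme, the quiet-period baseline, and the synthetic data are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def pc_trigger(displacements, n_quiet, threshold=5.0):
    """Project network displacement snapshots (epochs x stations) onto
    their first principal component; flag epochs whose score exceeds
    `threshold` standard deviations of the quiet-period scores."""
    X = displacements - displacements[:n_quiet].mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    score = X @ vt[0]                       # leading PC score per epoch
    sigma = score[:n_quiet].std() + 1e-12
    return np.abs(score) > threshold * sigma

# synthetic series: 50 quiet epochs of mm-level noise at 3 stations,
# then a 5 cm coseismic step
rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 0.001, size=(50, 3))
coseismic = rng.normal(0.0, 0.001, size=(50, 3)) + 0.05
alarm = pc_trigger(np.vstack([quiet, coseismic]), n_quiet=50)
```

In the real-time setting described above, such a trigger would initiate the coseismic modeling stage once the network-wide displacement field departs significantly from its interseismic baseline.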

  5. US earthquake observatories: recommendations for a new national network

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    This report is the first attempt by the seismological community to rationalize and optimize the distribution of earthquake observatories across the United States. The main aim is to increase significantly our knowledge of earthquakes and the earth's dynamics by providing access to scientifically more valuable data. Other objectives are to provide a more efficient and cost-effective system of recording and distributing earthquake data and to make as uniform as possible the recording of earthquakes in all states. The central recommendation of the Panel is that the guiding concept be established of a rationalized and integrated seismograph system consisting of regional seismograph networks run for crucial regional research and monitoring purposes in tandem with a carefully designed, but sparser, nationwide network of technologically advanced observatories. Such a national system must be thought of not only in terms of instrumentation but equally in terms of data storage, computer processing, and record availability.

  6. Discrimination between earthquakes and chemical explosions using artificial neural networks

    International Nuclear Information System (INIS)

    Kundu, Ajit; Bhadauria, Y.S.; Roy, Falguni

    2012-05-01

    An Artificial Neural Network (ANN) for discriminating between earthquakes and chemical explosions located at epicentral distances Δ <5 deg from Gauribidanur Array (GBA) has been developed using the short period digital seismograms recorded at GBA. For training the ANN, spectral amplitude ratios between P and Lg phases, computed at 13 frequencies in the 2-8 Hz range for 20 earthquakes and 23 chemical explosions, were used along with other parameters such as magnitude, epicentral distance, and the amplitude ratios Rg/P and Rg/Lg. After training and development, the ANN correctly identified a set of 21 test events, comprising 6 earthquakes and 15 chemical explosions. (author)
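
    A toy version of such a discriminant can be sketched with scikit-learn's MLPClassifier. The 17 features mimic the record's inputs (13 spectral ratios plus magnitude, distance, and two extra ratios), but the data here are synthetic and the architecture is an assumption, not the paper's network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the feature set: explosions tend to show relatively
# depleted Lg energy, i.e. higher P/Lg spectral ratios, so the two classes
# are drawn from shifted distributions.
n = 200
quakes = rng.normal(loc=0.8, scale=0.3, size=(n, 17))
blasts = rng.normal(loc=1.4, scale=0.3, size=(n, 17))
X = np.vstack([quakes, blasts])
y = np.array([0] * n + [1] * n)  # 0 = earthquake, 1 = chemical explosion

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the well-separated toy data
```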

  7. Airport Network Flow Simulator

    Science.gov (United States)

    1978-10-01

    The Airport Network Flow Simulator is a FORTRAN IV simulation of the flow of air traffic among the nation's 600 commercial airports. It calculates for any group of selected airports: (a) the landing and take-off (Type A) delays; and (b) the gate departu...

  8. Testing the structure of earthquake networks from multivariate time series of successive main shocks in Greece

    Science.gov (United States)

    Chorozoglou, D.; Kugiumtzis, D.; Papadimitriou, E.

    2018-06-01

    Seismic hazard assessment in the area of Greece is attempted by studying the earthquake network structure, e.g., whether it is small-world or random. In this network, a node represents a seismic zone in the study area and a connection between two nodes is given by the correlation of the seismic activity of the two zones. To investigate the network structure, and particularly the small-world property, the earthquake correlation network is compared with randomized ones. Simulations on multivariate time series of different lengths and numbers of variables show that, for the construction of randomized networks, the method that randomizes the time series performs better than methods that directly randomize the original network connections. Based on the appropriate randomization method, the network approach is applied to time series of earthquakes that occurred between main shocks in the territory of Greece spanning the period 1999-2015. The characterization of networks on sliding time windows revealed that small-world structure emerges in the last time interval, shortly before the main shock.
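
    The comparison between an observed correlation network and series-randomized surrogates can be sketched as follows. The data are toy series sharing a common driver, and the threshold and circular-shift surrogate scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Toy stand-in for zone-wise seismicity series: 10 zones, 500 samples,
# sharing a common driving component so the zones are genuinely correlated.
common = rng.normal(size=500)
series = common + rng.normal(size=(10, 500))

def correlation_network(ts, threshold=0.3):
    """Connect two zones when their activity correlation exceeds a threshold."""
    corr = np.corrcoef(ts)
    g = nx.Graph()
    g.add_nodes_from(range(ts.shape[0]))
    for i in range(ts.shape[0]):
        for j in range(i + 1, ts.shape[0]):
            if abs(corr[i, j]) > threshold:
                g.add_edge(i, j)
    return g

observed = correlation_network(series)

# Randomizing the *time series*: circular shifts preserve each zone's own
# statistics but destroy cross-correlations between zones.
shifted = np.array([np.roll(s, rng.integers(1, 500)) for s in series])
randomized = correlation_network(shifted)

print(observed.number_of_edges(), randomized.number_of_edges())
```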

  9. Earthquake Complex Network applied along the Chilean Subduction Zone.

    Science.gov (United States)

    Martin, F.; Pasten, D.; Comte, D.

    2017-12-01

    In recent years earthquake complex networks have been used as a useful tool to describe and characterize the behavior of seismicity. The earthquake complex network is built in space by dividing the three-dimensional volume into cubic cells; a cell that contains a hypocenter becomes a node. Connections between nodes follow the time sequence of the seismic events, giving a spatio-temporal representation of a specific region based on its seismicity. In this work we apply complex networks to characterize the subduction zone along the coast of Chile using two networks: a directed and an undirected one. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: the connectivity k_i of the i-th node is the number of connections going out of node i, plus its self-connections (if two seismic events occur successively in the same cubic cell, we have a self-connection). The undirected network results from removing the direction of the connections and the self-connections from the directed network. Both networks were built from seismic events recorded by the CSN (Chilean Seismological Center), including the largest recent earthquakes in Iquique (April 2014) and Illapel (September 2015). The directed network shows a change in the value of the critical exponent along the Chilean coast, while the undirected network shows small-world behavior without important changes in the topology of the network. The complex network analysis thus offers a simple new way to characterize the Chilean subduction zone, one that can be compared with other methods to obtain more detail about the behavior of seismicity in this region.
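
    The cell-binning and linking scheme described can be sketched in a few lines. The catalog is synthetic and the 20 km cell size is an arbitrary choice.

```python
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(2)

# Toy catalog: hypocenters (x, y, z, in km) already in time order. The volume
# is divided into cubic cells; a cell holding a hypocenter is a node, and
# successive events link their cells (self-links when the cell repeats).
hypocenters = rng.uniform(0, 100, size=(300, 3))
cell_km = 20.0

def cell_of(h):
    """Integer cell index of a hypocenter."""
    return tuple((h // cell_km).astype(int))

k_out = defaultdict(int)  # directed connectivity: outgoing links per node
for a, b in zip(hypocenters[:-1], hypocenters[1:]):
    k_out[cell_of(a)] += 1  # link cell(a) -> cell(b), self-links included

print(sum(k_out.values()))  # one link per successive pair of events
```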

  10. Tsunamigenic earthquake simulations using experimentally derived friction laws

    Science.gov (United States)

    Murphy, S.; Di Toro, G.; Romano, F.; Scala, A.; Lorito, S.; Spagnuolo, E.; Aretusini, S.; Festa, G.; Piatanesi, A.; Nielsen, S.

    2018-03-01

    Seismological, tsunami and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth as a result of different pre-stress and frictional conditions. A wealth of earthquakes of different sizes and different source features (e.g. rupture duration) can be generated in subduction zones, including tsunami earthquakes, some of which can produce extreme tsunamigenic events. Here, we offer a geological perspective principally accounting for depth-dependent frictional conditions, while adopting a simplified distribution of on-fault tectonic pre-stress. We combine a lithology-controlled, depth-dependent experimental friction law with 2D elastodynamic rupture simulations for a Tohoku-like subduction zone cross-section. Subduction zone fault rocks are dominantly incohesive and clay-rich near the surface, transitioning to cohesive and more crystalline at depth. By randomly shifting along fault dip the location of the high shear stress regions ("asperities"), moderate to great thrust earthquakes and tsunami earthquakes are produced that are quite consistent with seismological, geodetic, and tsunami observations. As an effect of depth-dependent friction in our model, slip is confined to the high stress asperity at depth; near the surface rupture is impeded by the rock-clay transition constraining slip to the clay-rich layer. However, when the high stress asperity is located in the clay-to-crystalline rock transition, great thrust earthquakes can be generated similar to the Mw 9 Tohoku (2011) earthquake.

  11. Urban MEMS based seismic network for post-earthquakes rapid disaster assessment

    Science.gov (United States)

    D'Alessandro, Antonino; Luzio, Dario; D'Anna, Giuseppe

    2014-05-01

    worship. The recorded waveforms could be promptly used to determine ground-shaking parameters, such as peak ground acceleration/velocity/displacement and the Arias and Housner intensities, which could all be used to create, a few seconds after a strong earthquake, shaking maps at urban scale. These shaking maps would allow rapid identification of the areas of the town center that experienced the strongest shaking. When a strong seismic event occurs, the beginning of the ground motion observed at a site could be used to predict the ensuing ground motion at the same site, thereby realizing a short-term earthquake early warning system. The data acquired after a moderate-magnitude earthquake would also provide valuable information for detailed seismic microzonation of the area based on direct earthquake shaking observations rather than on model-based or indirect methods. In this work, we evaluate the feasibility and effectiveness of such a seismic network, taking into account technological, scientific, and economic issues. For this purpose, we have simulated the creation of a MEMS-based urban seismic network in a medium-size city. For the selected town, taking into account the instrumental specifications, the array geometry, and the environmental noise, we investigated the ability of the planned network to detect and measure earthquakes of different magnitudes generated by realistic nearby seismogenic sources.
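
    The ground-shaking parameters mentioned (peak ground acceleration, peak ground velocity, Arias intensity) can be computed from an acceleration record roughly as follows. The waveform here is synthetic and the rectangle-rule integration is a simplification.

```python
import numpy as np

# Synthetic acceleration record: damped 2 Hz sinusoid, m/s^2, 100 Hz sampling.
dt = 0.01
t = np.arange(0, 20, dt)
acc = 2.0 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 2.0 * t)

pga = np.max(np.abs(acc))              # peak ground acceleration (m/s^2)
vel = np.cumsum(acc) * dt              # crude integration to velocity
pgv = np.max(np.abs(vel))              # peak ground velocity (m/s)
g = 9.81
arias = np.pi / (2 * g) * np.sum(acc**2) * dt   # Arias intensity (m/s)

print(round(pga, 3), round(pgv, 3), round(arias, 3))
```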

  12. Artificial earthquake record generation using cascade neural network

    Directory of Open Access Journals (Sweden)

    Bani-Hani Khaldoon A.

    2017-01-01

    Full Text Available This paper presents the results of using artificial neural networks (ANN in an inverse mapping problem for earthquake accelerogram generation. The study comprises two parts. The first is a 1-D site response analysis performed for the Dubai Emirate, UAE, in which eight earthquake records are selected and spectral matching is performed to match the Dubai response spectrum using the SeismoMatch software. Site classification of Dubai soil is considered for two classes, C and D, based on the shear wave velocity of the soil profiles. Amplification factors are estimated to quantify the Dubai soil effect, and Dubai’s design response spectra are developed for site classes C and D according to the International Building Code (IBC-2012. In the second part, an ANN is employed to solve the inverse mapping problem of generating a time history earthquake record. Thirty earthquake records and their design response spectra with 5% damping are used to train two cascade forward backward neural networks (ANN1, ANN2. ANN1 is trained to map the design response spectrum to a time history and ANN2 is trained to map time history records to the design response spectrum. Generalized time history earthquake records are generated using ANN1 for Dubai’s site classes C and D, and ANN2 is used to evaluate the performance of ANN1.
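
    A minimal sketch of the inverse-mapping idea, using scikit-learn's MLPRegressor as a stand-in for the cascade-forward networks. The data, the spectrum proxy, and the layer size are all illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Synthetic training pairs: 60 "time histories" of 50 samples each, with a
# crude 20-point spectral proxy standing in for the design response spectrum.
n = 60
histories = rng.normal(size=(n, 50))
spectra = np.abs(np.fft.rfft(histories, axis=1))[:, :20]

# ANN1-like inverse map: spectrum -> time history.
ann1 = MLPRegressor(hidden_layer_sizes=(40,), max_iter=5000, random_state=0)
ann1.fit(spectra, histories)

generated = ann1.predict(spectra[:1])   # generate one record from a spectrum
print(generated.shape)
```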

  13. Scale-free networks of earthquakes and aftershocks

    International Nuclear Information System (INIS)

    Baiesi, Marco; Paczuski, Maya

    2004-01-01

    We propose a metric to quantify correlations between earthquakes. The metric consists of a product involving the time interval and spatial distance between two events, as well as the magnitude of the first one. According to this metric, events typically are strongly correlated to only one or a few preceding ones. Thus a classification of events as foreshocks, main shocks, or aftershocks emerges automatically without imposing predetermined space-time windows. In the simplest network construction, each earthquake receives an incoming link from its most correlated predecessor. The number of aftershocks for any event, identified by its outgoing links, is found to be scale free with exponent γ=2.0(1). The original Omori law with p=1 emerges as a robust feature of seismicity, holding up to years even for aftershock sequences initiated by intermediate magnitude events. The broad distribution of distances between earthquakes and their linked aftershocks suggests that aftershock collection with fixed space windows is not appropriate
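
    The metric and the most-correlated-predecessor linking described above can be sketched as follows. The b-value and fractal dimension d_f are assumed round numbers, and the 2-D geometry is a simplification.

```python
import numpy as np

# Correlation metric between an event j and an earlier event i:
# n_ij ∝ t_ij * (r_ij)**d_f * 10**(-b * m_i); the *smallest* n_ij marks the
# most correlated predecessor, which receives j's incoming link.
b, d_f = 1.0, 1.6   # assumed values for illustration

def best_predecessor(events, j):
    """events: list of (t, x, y, m) in time order. Index of j's predecessor."""
    tj, xj, yj, _ = events[j]
    scores = []
    for i in range(j):
        ti, xi, yi, mi = events[i]
        dt = tj - ti
        r = np.hypot(xj - xi, yj - yi) + 1e-9   # avoid r = 0
        scores.append(dt * r**d_f * 10**(-b * mi))
    return int(np.argmin(scores))

# A large early event should capture a nearby later event as its aftershock,
# even though a small event occurred in between at some distance:
events = [(0.0, 0.0, 0.0, 6.0), (1.0, 50.0, 0.0, 3.0), (2.0, 1.0, 0.0, 2.0)]
print(best_predecessor(events, 2))  # links to event 0
```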

  14. Bayesian probabilistic network approach for managing earthquake risks of cities

    DEFF Research Database (Denmark)

    Bayraktarli, Yahya; Faber, Michael

    2011-01-01

    This paper considers the application of Bayesian probabilistic networks (BPNs) to large-scale risk based decision making in regard to earthquake risks. A recently developed risk management framework is outlined which utilises Bayesian probabilistic modelling, generic indicator based risk models...... and a fourth module on the consequences of an earthquake. Each of these modules is integrated into a BPN. Special attention is given to aggregated risk, i.e. the risk contribution from assets at multiple locations in a city subjected to the same earthquake. The application of the methodology is illustrated...... on an example considering a portfolio of reinforced concrete structures in a city located close to the western part of the North Anatolian Fault in Turkey....

  15. Simulation analysis of earthquake response of nuclear power plant to the 2003 Miyagi-Oki earthquake

    International Nuclear Information System (INIS)

    Yoshihiro Ogata; Kiyoshi Hirotani; Masayuki Higuchi; Shingo Nakayama

    2005-01-01

    On May 26, 2003 an earthquake of magnitude 7.1 (Japan Meteorological Agency scale) occurred just offshore of Miyagi Prefecture. This was the largest earthquake experienced by the nuclear power plant of Tohoku Electric Power Co. in Onagawa (hereafter the Onagawa Nuclear Power Plant) in the 19 years since it started operation in 1984. In this report, we review the vibration characteristics of the reactor building of Onagawa Nuclear Power Plant Unit 1 based on acceleration records observed at the building, and give an account of a simulation analysis of the earthquake response carried out to ascertain the appropriateness of the design procedure and the seismic safety of the building. (authors)

  16. A decision support system for pre-earthquake planning of lifeline networks

    Energy Technology Data Exchange (ETDEWEB)

    Liang, J.W. [Tianjin Univ. (China). Dept. of Civil Engineering

    1996-12-01

    This paper describes the framework of a decision support system for pre-earthquake planning of gas and water networks. The system is mainly based on earthquake experience and lessons from the 1976 Tangshan earthquake. The objective of the system is to offer countermeasures and help make decisions for seismic strengthening, retrofitting, and upgrading of gas and water networks.

  17. Simulation of earthquakes with cellular automata

    Directory of Open Access Journals (Sweden)

    P. G. Akishin

    1998-01-01

    Full Text Available The relation between cellular automata (CA models of earthquakes and the Burridge–Knopoff (BK model is studied. It is shown that the CA proposed by P. Bak and C. Tang, although they have rather realistic power spectra, do not correspond to the BK model. We present a modification of the CA which establishes the correspondence with the BK model. An analytical method of studying the evolution of the BK-like CA is proposed. By this method a functional quadratic in stress release, which can be regarded as an analog of the event energy, is constructed. The distribution of seismic events with respect to this “energy” shows rather realistic behavior, even in two dimensions. Special attention is paid to two-dimensional automata; the physical restrictions on compression and shear stiffnesses are imposed.
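
    A minimal stress-transfer cellular automaton in the spirit of the models discussed. This is the generic OFC-style sandpile rule, not the paper's BK-equivalent modification, and the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Cells hold stress; a cell reaching threshold "ruptures", resets, and passes
# a fraction alpha of its stress to each of its four neighbours. With
# 4 * alpha < 1 the dynamics are dissipative and avalanches stay finite.
L_size, alpha, threshold = 20, 0.2, 1.0
stress = rng.uniform(0, threshold, size=(L_size, L_size))

def drive_and_relax(stress):
    """Uniformly drive the lattice to failure, relax, return avalanche size."""
    stress += threshold - stress.max()   # bring the most loaded cell to failure
    size = 0
    over = np.argwhere(stress >= threshold)
    while over.size:
        for i, j in over:
            s = stress[i, j]
            stress[i, j] = 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L_size and 0 <= nj < L_size:
                    stress[ni, nj] += alpha * s
        over = np.argwhere(stress >= threshold)   # rescan for re-triggered cells
    return size

sizes = [drive_and_relax(stress) for _ in range(200)]
print(min(sizes), max(sizes))   # event-size distribution is broad
```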

  18. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  19. The wireless networking system of Earthquake precursor mobile field observation

    Science.gov (United States)

    Wang, C.; Teng, Y.; Wang, X.; Fan, X.; Wang, X.

    2012-12-01

    The mobile field observation network can record and transmit large amounts of data reliably and in real time, strengthening physical signal observations in specific regions and periods, and thus improving monitoring capacity and anomaly tracking capability. Because the current earthquake precursor observation points are numerous and widely scattered, the networking technology is based on the McWILL broadband wireless access system. The communication system for earthquake precursor mobile field observation transmits large amounts of data reliably and in real time from the measuring points to the monitoring center, through the connected equipment, the wireless access system, the broadband wireless access system, and the precursor mobile observation management center system, thereby implementing remote instrument monitoring and data transmission. At present, this networking technology has been applied to fluxgate magnetometer array geomagnetic observations at Tianzhu, Xichang, and Xinjiang, enabling real-time monitoring of the working status of instruments deployed over a large area during the last two to three years of large-scale field operation. It can therefore provide geomagnetic field data for locally refined regions and high-quality observational data for impending-earthquake tracking and forecasting. Although wireless networking is well suited to mobile field observation, being simple and flexible to deploy, packet loss can occur when transmitting large amounts of observational data because of the relatively weak wireless signal and narrow bandwidth.
For high-sampling-rate instruments, this project uses data compression and effectively solves the problem of data transmission packet loss; control commands, status data, and observational data are transmitted with different priorities and means, which keep the packet loss rate within

  20. Dense Ocean Floor Network for Earthquakes and Tsunamis; DONET/ DONET2, Part2 -Development and data application for the mega thrust earthquakes around the Nankai trough-

    Science.gov (United States)

    Kaneda, Y.; Kawaguchi, K.; Araki, E.; Matsumoto, H.; Nakamura, T.; Nakano, M.; Kamiya, S.; Ariyoshi, K.; Baba, T.; Ohori, M.; Hori, T.; Takahashi, N.; Kaneko, S.; Donet Research; Development Group

    2010-12-01

    DONET (Dense Ocean Floor Network for Earthquakes and Tsunamis) is the real-time monitoring system of the Tonankai seismogenic zones around the Nankai trough, southwestern Japan. We started developing DONET to perform real-time monitoring of crustal activities there and to build an advanced early warning system. DONET will provide important and useful data for understanding the Nankai trough mega thrust earthquake seismogenic zones and for improving the accuracy of earthquake recurrence cycle simulations. Details of the DONET concept are as follows. 1) Redundancy, extendability, and an advanced maintenance system using the looped cable system, junction boxes, and the ROV/AUV: DONET has 20 observatories and is incorporated in a double land station concept; we have also developed an ROV for 10 km cable extensions and heavy-weight operations. 2) Multiple kinds of sensors to observe broadband phenomena such as long-period tremors, very low frequency earthquakes, and the strong motions of mega thrust earthquakes over M8: each DONET observatory is therefore equipped with a broadband seismometer, an accelerometer, a hydrophone, a precise pressure gauge, a differential pressure gauge, and a thermometer. 3) Speedy detection, evaluation, and notification of earthquakes and tsunamis: the DONET system will be deployed around the Tonankai seismogenic zone. 4) Provision of ocean floor crustal deformation data derived from pressure sensors: simultaneously, the development of data

  1. Earthquake simulation, actual earthquake monitoring and analytical methods for soil-structure interaction investigation

    Energy Technology Data Exchange (ETDEWEB)

    Tang, H T [Seismic Center, Electric Power Research Institute, Palo Alto, CA (United States)

    1988-07-01

    Approaches for conducting in-situ soil-structure interaction experiments are discussed. High explosives detonated under the ground can generate strong ground motion to induce soil-structure interaction (SSI). The explosive-induced data are useful in studying the dynamic characteristics of the soil-structure system associated with the inertial aspect of the SSI problem. The plane waves generated by the explosives cannot adequately address the kinematic interaction associated with actual earthquakes because of the difference in wave fields and their effects. Earthquake monitoring is ideal for obtaining SSI data that can address all aspects of the SSI problem. The only limitation is the level of excitation that can be obtained. Neither the simulated earthquake experiments nor the earthquake monitoring experiments can have exact similitude if reduced-scale test structures are used. If gravity effects are small, reasonable correlations between the scaled model and the prototype can be obtained provided that the input motion can be scaled appropriately. The key product of the in-situ experiments is the database that can be used to qualify analytical methods for prototypical applications. (author)

  2. Dynamic fracture network around faults: implications for earthquake ruptures, ground motion and energy budget

    Science.gov (United States)

    Okubo, K.; Bhat, H. S.; Rougier, E.; Lei, Z.; Knight, E. E.; Klinger, Y.

    2017-12-01

    Numerous studies have suggested that spontaneous earthquake ruptures can dynamically induce failure in a secondary fracture network, regarded as the damage zone around faults. The feedback from such a fracture network plays a crucial role in earthquake rupture, its radiated wave field and the total energy budget. A novel numerical modeling tool based on the combined finite-discrete element method (FDEM), which accounts for the main rupture propagation and the nucleation/propagation of secondary cracks, was used to quantify the evolution of the fracture network and evaluate its effects on the main rupture and its associated radiation. The simulations were performed with the FDEM-based software tool Hybrid Optimization Software Suite (HOSSedu), developed by Los Alamos National Laboratory. We first modeled an earthquake rupture on a planar strike-slip fault surrounded by a brittle medium where secondary cracks can be nucleated/activated by the earthquake rupture. We show that the secondary cracks are dynamically generated dominantly on the extensional side of the fault, mainly behind the rupture front, forming an intricate network of fractures in the damage zone. The rupture velocity thereby decreases significantly, by 10 to 20 percent, while the supershear transition length increases in comparison with a purely elastic medium. We also observe that the high-frequency component (10 to 100 Hz) of the near-field ground acceleration is enhanced by the dynamically activated fracture network, consistent with field observations. We then conducted case studies with various sets of initial stress states and friction properties to investigate the evolution of the damage zone. We show that the width of the damage zone decreases with depth, forming a "flower-like" structure, when the characteristic slip distance in the linear slip-weakening law, or the fracture energy on the fault, is kept constant with depth. Finally, we compared the fracture energy on the fault to the energy

  3. A geographical and multi-criteria vulnerability assessment of transportation networks against extreme earthquakes

    International Nuclear Information System (INIS)

    Kermanshah, A.; Derrible, S.

    2016-01-01

    The purpose of this study is to provide a geographical and multi-criteria vulnerability assessment method to quantify the impacts of extreme earthquakes on road networks. The method is applied to two US cities, Los Angeles and San Francisco, both of which are susceptible to severe seismic activity. Aided by the recent proliferation of data and the wide adoption of Geographic Information Systems (GIS), we use a data-driven approach based on USGS ShakeMaps to determine vulnerable locations in road networks. To simulate an extreme earthquake, we remove road sections within the “very strong” intensity zones provided by USGS. Subsequently, we measure vulnerability as a percentage drop in four families of metrics: overall properties (length of the remaining system); topological indicators (betweenness centrality); accessibility; and travel demand using Longitudinal Employment Household Dynamics (LEHD) data. The various metrics are then plotted on a Vulnerability Surface (VS), whose area can be assimilated to an overall vulnerability indicator. This VS approach offers a simple and pertinent method to capture the impacts of an extreme earthquake. It can also help planners assess the robustness of alternative scenarios in their plans, to ensure that cities located in seismic areas are better prepared to face severe earthquakes. - Highlights: • Developed a geographical and multi-criteria vulnerability assessment method. • Quantify the impacts of extreme earthquakes on transportation networks. • Data-driven approach using USGS ShakeMaps to determine vulnerable locations. • Measure vulnerability as a percentage drop in four families of metrics: ○Overall properties. ○Topological indicators. ○Accessibility. ○Travel demand using Longitudinal Employment Household Dynamics (LEHD) data. • Developed the Vulnerability Surface (VS), a new pragmatic vulnerability indicator.
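
    The percentage-drop measure can be sketched on a toy grid network. The shaking footprint here is hypothetical, and only the simplest of the four metric families (remaining length, counted as edges) is shown.

```python
import networkx as nx

# Toy road network: a 10x10 grid graph, one edge per road segment.
g = nx.grid_2d_graph(10, 10)
total_length = g.number_of_edges()

# Hypothetical "very strong" shaking footprint: remove every intersection
# with x-coordinate < 3, taking its incident road segments with it.
damaged = g.copy()
damaged.remove_nodes_from([n for n in g if n[0] < 3])

remaining = damaged.number_of_edges()
drop_pct = 100 * (1 - remaining / total_length)   # vulnerability as % drop
print(round(drop_pct, 1))
```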

  4. CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking

    Science.gov (United States)

    Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.

    2017-12-01

    We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA and template matching algorithms to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equivalent in size to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm shows significantly improved performance in terms of detection rate and precision in comparison with the STA/LTA and template matching algorithms.
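
    For comparison, the STA/LTA baseline mentioned can be sketched in a few lines, using causal windows on a synthetic record. The window lengths and trigger threshold are conventional but arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic single-component record at 100 Hz: Gaussian noise with a
# higher-amplitude burst standing in for a P arrival at sample 1500.
signal = rng.normal(0, 1, 3000)
signal[1500:1600] += 8 * rng.normal(0, 1, 100)

def sta_lta(x, n_sta=50, n_lta=500):
    """Causal short-term / long-term average ratio of |x|."""
    e = np.abs(x)
    c = np.concatenate(([0.0], np.cumsum(e)))
    ratio = np.zeros(len(e))
    for k in range(n_lta - 1, len(e)):
        sta = (c[k + 1] - c[k + 1 - n_sta]) / n_sta
        lta = (c[k + 1] - c[k + 1 - n_lta]) / n_lta
        ratio[k] = sta / lta
    return ratio

ratio = sta_lta(signal)
trigger = int(np.argmax(ratio > 3.0))   # first sample exceeding the threshold
print(trigger)                          # near the onset of the burst
```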

  5. Insights into earthquake hazard map performance from shaking history simulations

    Science.gov (United States)

    Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.

    2017-12-01

    Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher
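
    The verification exercise can be caricatured with a toy hazard model: build the "map" value from the model's own distribution of 50-year maxima, then count exceedances in simulated histories. The lognormal annual-maximum model is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def max_shaking_50yr(n_hist):
    """50-year maximum shaking for n_hist simulated histories (toy model)."""
    annual = rng.lognormal(mean=0.0, sigma=1.0, size=(n_hist, 50))
    return annual.max(axis=1)

# "Map" value: the shaking with a 10% chance of exceedance in 50 years,
# estimated from a large reference simulation of the same model.
mapped = np.quantile(max_shaking_50yr(100_000), 0.9)

# "Observed" histories drawn from the identical model: individual histories
# scatter, but the exceedance fraction should hover around 10%.
observed = max_shaking_50yr(1_000)
exceed_frac = float(np.mean(observed > mapped))
print(round(exceed_frac, 2))
```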

  6. On the reliability of Quake-Catcher Network earthquake detections

    Science.gov (United States)

    Yildirim, Battalgazi; Cochran, Elizabeth S.; Chung, Angela I.; Christensen, Carl M.; Lawrence, Jesse F.

    2015-01-01

    Over the past two decades, there have been several initiatives to create volunteer‐based seismic networks. The Personal Seismic Network, proposed around 1990, used a short‐period seismograph to record earthquake waveforms using existing phone lines (Cranswick and Banfill, 1990; Cranswick et al., 1993). NetQuakes (Luetgert et al., 2010) deploys triaxial Micro‐Electromechanical Systems (MEMS) sensors in private homes, businesses, and public buildings where there is an Internet connection. Other seismic networks using a dense array of low‐cost MEMS sensors are the Community Seismic Network (Clayton et al., 2012; Kohler et al., 2013) and the Home Seismometer Network (Horiuchi et al., 2009). One main advantage of combining low‐cost MEMS sensors and existing Internet connections in public and private buildings over traditional networks is the reduction in installation and maintenance costs (Koide et al., 2006). In doing so, it is possible to create a dense seismic network for a fraction of the cost of traditional seismic networks (D’Alessandro and D’Anna, 2013; D’Alessandro, 2014; D’Alessandro et al., 2014).

  7. Assessing earthquake early warning using sparse networks in developing countries: Case study of the Kyrgyz Republic

    Science.gov (United States)

    Parolai, Stefano; Boxberger, Tobias; Pilz, Marco; Fleming, Kevin; Haas, Michael; Pittore, Massimiliano; Petrovic, Bojana; Moldobekov, Bolot; Zubovich, Alexander; Lauterjung, Joern

    2017-09-01

    The first real-time digital strong-motion network in Central Asia has been installed in the Kyrgyz Republic since 2014. Although this network consists of only 19 strong-motion stations, they are located in near-optimal locations for earthquake early warning and rapid response purposes. In fact, it is expected that this network, which utilizes the GFZ-Sentry software allowing decentralized event assessment calculations, not only will provide strong motion data useful for improving future seismic hazard and risk assessment, but will also serve as the backbone for regional and on-site earthquake early warning operations. Based on the location of these stations, and travel-time estimates for P- and S-waves, we have determined potential lead times for several major urban areas in Kyrgyzstan (i.e., Bishkek, Osh, and Karakol) and Kazakhstan (Almaty), where we find the implementation of an efficient earthquake early warning system would provide lead times outside the blind zone ranging from several seconds up to several tens of seconds. This was confirmed by simulating the shaking (and intensity) that would arise from a series of scenarios based on historical and expected events, and how they affect the major urban centres. Such lead times would allow the instigation of automatic mitigation procedures, while the system as a whole would support prompt and efficient actions to be undertaken over large areas.
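
    Lead-time estimates of the kind reported follow from simple travel-time arithmetic: the warning window at a target city is the S-wave travel time to the city minus the P-wave travel time to the nearest station minus a processing delay. The velocities and the 2 s delay below are assumed round numbers, not the network's actual parameters.

```python
# Generic crustal P- and S-wave velocities (km/s), assumed for illustration.
VP, VS = 6.0, 3.5

def lead_time(dist_city_km, dist_station_km, processing_s=2.0):
    """Warning time at a city before damaging S waves arrive."""
    t_s = dist_city_km / VS                       # S-wave arrival at the city
    t_detect = dist_station_km / VP + processing_s  # detection + processing
    return max(t_s - t_detect, 0.0)               # 0 inside the blind zone

# Epicentre 120 km from the city, 20 km from the closest station:
print(round(lead_time(120.0, 20.0), 1))  # a few tens of seconds
```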

  8. Simulation of scenario earthquake influenced field by using GIS

    Science.gov (United States)

    Zuo, Hui-Qiang; Xie, Li-Li; Borcherdt, R. D.

    1999-07-01

    The method for estimating site effects on ground motion specified by Borcherdt (1994a, 1994b) is briefly introduced in this paper. This method, together with detailed geological and site-classification data for the San Francisco Bay area of California, United States, is applied to simulate the influenced field of a scenario earthquake using GIS technology, and the corresponding simulation software has been developed. The paper is a partial result of a cooperative research project between the China Seismological Bureau and the US Geological Survey.

  9. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  10. Simulating synchronization in neuronal networks

    Science.gov (United States)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
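
    A compact way to see the small-world effect this record describes is to build Watts-Strogatz connectivity and watch the average shortest-path length drop as a few edges are rewired; shorter paths are one structural driver of faster collective spiking. The sketch below is pure Python, and its network sizes and probabilities are illustrative, not taken from the paper.

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=1):
    """Ring lattice of n nodes (k neighbours on each side), then rewire
    each lattice edge to a random target with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k + 1):
            if rng.random() < p:
                old, new = (i + j) % n, rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(old)
                    adj[old].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

lattice = avg_path_length(watts_strogatz(100, 3, 0.0))
rewired = avg_path_length(watts_strogatz(100, 3, 0.1))
print(lattice, rewired)  # a few shortcuts sharply shorten paths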

  11. Bitcoin network simulator data explotation

    OpenAIRE

    Berini Sarrias, Martí

    2015-01-01

    This project starts with a brief introduction to the concepts of Bitcoin and blockchain, followed by the description of the di erent known attacks to the Bitcoin network. Once reached this point, the basic structure of the Bitcoin network simulator is presented. The main objective of this project is to help in the security assessment of the Bitcoin network. To accomplish that, we try to identify useful metrics, explain them and implement them in the corresponding simulator modules, aiming to ...

  12. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro; Mai, Paul Martin; Yasuda, Tomohiro; Mori, Nobuhito

    2014-01-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.
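
    The idea of a stochastic random-field slip model can be illustrated with a 1-D spectral-synthesis toy: cosine components with random phases and a decaying, von Karman-like amplitude spectrum, shifted and scaled to a target mean slip. This is a hedged sketch, not the modified Mai-Beroza method itself; every parameter below is invented.

```python
import math, random

def random_slip_profile(nx=128, dx=2.0, corr_len=40.0, mean_slip=10.0, seed=7):
    """1-D spectral synthesis of heterogeneous slip: cosine components with
    random phases and a von Karman-like amplitude decay ~ (1+(k*a)^2)^-1,
    shifted to be non-negative and scaled to the target mean slip."""
    rng = random.Random(seed)
    x = [i * dx for i in range(nx)]
    slip = [0.0] * nx
    for m in range(1, nx // 2):
        k = 2 * math.pi * m / (nx * dx)          # wavenumber of mode m
        amp = 1.0 / (1.0 + (k * corr_len) ** 2)  # spectral decay
        phase = rng.uniform(0.0, 2 * math.pi)
        for i in range(nx):
            slip[i] += amp * math.cos(k * x[i] + phase)
    lo = min(slip)
    slip = [s - lo for s in slip]                # slip must be non-negative
    scale = mean_slip / (sum(slip) / nx)         # honour the target mean slip
    return [s * scale for s in slip]

profile = random_slip_profile()
print(round(sum(profile) / len(profile), 2))  # → 10.0
```

    Different seeds give different heterogeneity patterns with the same mean slip, which is the mechanism behind exploring slip uncertainty with an ensemble of source models.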

  14. Natural gas network resiliency to a "shakeout scenario" earthquake.

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, James F.; Corbet, Thomas Frank,; Brooks, Robert E.

    2013-06-01

    A natural gas network model was used to assess the likely impact of a scenario San Andreas Fault earthquake on the natural gas network. Two disruption scenarios were examined. The more extensive damage scenario assumes the disruption of all three major corridors bringing gas into southern California. If withdrawals from the Aliso Canyon storage facility are limited to keep the amount of stored gas within historical levels, the disruption reduces Los Angeles Basin gas supplies by 50%. If Aliso Canyon withdrawals are only constrained by the physical capacity of the storage system to withdraw gas, the shortfall is reduced to 25%. This result suggests that it is important for stakeholders to put agreements in place facilitating the withdrawal of Aliso Canyon gas in the event of an emergency.

  15. MyShake: A smartphone seismic network for earthquake early warning and beyond.

    Science.gov (United States)

    Kong, Qingkai; Allen, Richard M; Schreier, Louis; Kwon, Young-Woo

    2016-02-01

    Large magnitude earthquakes in urban environments continue to kill and injure tens to hundreds of thousands of people, inflicting lasting societal and economic disasters. Earthquake early warning (EEW) provides seconds to minutes of warning, allowing people to move to safe zones and automated slowdown and shutdown of transit and other machinery. The handful of EEW systems operating around the world use traditional seismic and geodetic networks that exist only in a few nations. Smartphones are much more prevalent than traditional networks and contain accelerometers that can also be used to detect earthquakes. We report on the development of a new type of seismic system, MyShake, that harnesses personal/private smartphone sensors to collect data and analyze earthquakes. We show that smartphones can record magnitude 5 earthquakes at distances of 10 km or less and develop an on-phone detection capability to separate earthquakes from other everyday shakes. Our proof-of-concept system then collects earthquake data at a central site where a network detection algorithm confirms that an earthquake is under way and estimates the location and magnitude in real time. This information can then be used to issue an alert of forthcoming ground shaking. MyShake could be used to enhance EEW in regions with traditional networks and could provide the only EEW capability in regions without. In addition, the seismic waveforms recorded could be used to deliver rapid microseism maps, study impacts on buildings, and possibly image shallow earth structure and earthquake rupture kinematics.
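
    MyShake's on-phone detector is a trained classifier; as a simpler stand-in, the classical STA/LTA ratio illustrates how sudden strong shaking can be separated from steady background noise on a single accelerometer trace. The synthetic trace, window lengths, and threshold below are invented for illustration.

```python
import random

def sta_lta_triggers(signal, sta_n=5, lta_n=50, threshold=4.0):
    """Indices where the short-term average of |amplitude| exceeds
    threshold times the long-term average: a classical event detector."""
    picks = []
    for i in range(lta_n, len(signal)):
        sta = sum(abs(s) for s in signal[i - sta_n:i]) / sta_n
        lta = sum(abs(s) for s in signal[i - lta_n:i]) / lta_n
        if lta > 0 and sta / lta >= threshold:
            picks.append(i)
    return picks

rng = random.Random(0)
noise = [rng.gauss(0.0, 0.01) for _ in range(200)]   # phone at rest
quake = [rng.gauss(0.0, 0.5) for _ in range(100)]    # sudden strong shaking
picks = sta_lta_triggers(noise + quake)
print(picks[0])  # first pick lands just after sample 200
```

    The everyday shakes the abstract mentions (walking, pocket motion) defeat this simple ratio test, which is why MyShake uses a learned classifier instead; the sketch only shows the detection principle.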

  16. Broadband Ground Motion Observation and Simulation for the 2016 Kumamoto Earthquake

    Science.gov (United States)

    Miyake, H.; Chimoto, K.; Yamanaka, H.; Tsuno, S.; Korenaga, M.; Yamada, N.; Matsushima, T.; Miyakawa, K.

    2016-12-01

    During the 2016 Kumamoto earthquake, strong-motion data were widely recorded by the permanent dense triggered strong-motion network of K-NET/KiK-net and by seismic intensity meters installed by local governments and JMA. Seismic intensities close to MMI 9-10 were recorded twice at Mashiki town, and once at Nishihara village and KiK-net Mashiki (KMMH16, ground surface). Near-fault records indicate extreme ground motion exceeding 400 cm/s in 5%-damped pSv at a period of 1 s for Mashiki town and 3-4 s for Nishihara village. Fault-parallel velocity components are larger between Mashiki town and Nishihara village, whereas fault-normal velocity components are larger inside the caldera of the Aso volcano. The former indicates that the rupture passed through along-strike stations, while the latter stations are located in the forward rupture direction (e.g., Miyatake, 1999). In addition to the permanent observation, temporary continuous strong-motion stations were installed just after the earthquake in Kumamoto city, Mashiki town, Nishihara village, Minami-Aso village, and Aso town (e.g., Chimoto et al., 2016; Tsuno et al., 2016; Yamanaka et al., 2016). This study estimates strong-motion generation areas for the 2016 Kumamoto earthquake sequence using the empirical Green's function method, and then simulates broadband ground motions for both the permanent and temporary strong-motion stations. Currently the target period range is between 0.1 s and 5-10 s due to the signal-to-noise ratio of the element earthquakes used as empirical Green's functions. We also keep the fault dimension parameter N between 4 and 10 to avoid spectral sags and artificial periodicity. The simulated seismic intensities as well as the fault-normal and fault-parallel velocity components will be discussed.

  17. Wireless network simulation - Your window on future network performance

    NARCIS (Netherlands)

    Fledderus, E.

    2005-01-01

    The paper describes three relevant perspectives on current wireless simulation practices. In order to obtain the key challenges for future network simulations, the characteristics of "beyond 3G" networks are described, including their impact on simulation.

  18. Earthquake simulations with time-dependent nucleation and long-range interactions

    Directory of Open Access Journals (Sweden)

    J. H. Dieterich

    1995-01-01

    Full Text Available A model for rapid simulation of earthquake sequences is introduced which incorporates long-range elastic interactions among fault elements and time-dependent earthquake nucleation inferred from experimentally derived rate- and state-dependent fault constitutive properties. The model consists of a planar two-dimensional fault surface which is periodic in both the x- and y-directions. Elastic interactions among fault elements are represented by an array of elastic dislocations. Approximate solutions for earthquake nucleation and dynamics of earthquake slip are introduced which permit computations to proceed in steps that are determined by the transitions from one sliding state to the next. The transition-driven time stepping and avoidance of systems of simultaneous equations permit rapid simulation of large sequences of earthquake events on computers of modest capacity, while preserving characteristics of the nucleation and rupture propagation processes evident in more detailed models. Earthquakes simulated with this model reproduce many of the observed spatial and temporal characteristics of clustering phenomena including foreshock and aftershock sequences. Clustering arises because the time dependence of the nucleation process is highly sensitive to stress perturbations caused by nearby earthquakes. Rate of earthquake activity following a prior earthquake decays according to Omori's aftershock decay law and falls off with distance.
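
    The Omori decay this record reports can be reproduced by drawing aftershock times from a modified Omori rate n(t) = k/(c + t) (the p = 1 case) by inverting its cumulative count. A hedged toy with invented k, c, and time window, unrelated to the paper's simulated catalogues:

```python
import math, random

def omori_aftershock_times(k=100.0, c=0.1, t_max=100.0, seed=3):
    """Draw aftershock times whose rate follows the modified Omori law
    n(t) = k/(c+t) (p = 1): generate a unit-rate Poisson process in
    'count space' and map it back through the inverse of the cumulative
    count N(t) = k*ln(1 + t/c)."""
    rng = random.Random(seed)
    n_total = k * math.log(1.0 + t_max / c)
    times, n = [], 0.0
    while True:
        n += rng.expovariate(1.0)                   # next unit-rate event
        if n > n_total:
            return times
        times.append(c * (math.exp(n / k) - 1.0))   # back to real time

times = omori_aftershock_times()
early = sum(1 for t in times if t < 1.0)            # first time unit
late = sum(1 for t in times if 10.0 <= t < 11.0)    # same-length window later
print(early, late)  # activity decays roughly as 1/t
```

    Comparing equal-length windows early and late in the sequence shows the 1/t fall-off that the simulated clustering in the abstract reproduces.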

  19. The ShakeOut earthquake source and ground motion simulations

    Science.gov (United States)

    Graves, R.W.; Houston, Douglas B.; Hudnut, K.W.

    2011-01-01

    The ShakeOut Scenario is premised upon the detailed description of a hypothetical Mw 7.8 earthquake on the southern San Andreas Fault and the associated simulated ground motions. The main features of the scenario, such as its endpoints, magnitude, and gross slip distribution, were defined through expert opinion and incorporated information from many previous studies. Slip at smaller length scales, rupture speed, and rise time were constrained using empirical relationships and experience gained from previous strong-motion modeling. Using this rupture description and a 3-D model of the crust, broadband ground motions were computed over a large region of Southern California. The largest simulated peak ground acceleration (PGA) and peak ground velocity (PGV) generally range from 0.5 to 1.0 g and 100 to 250 cm/s, respectively, with the waveforms exhibiting strong directivity and basin effects. Use of a slip-predictable model results in a high static stress drop event and produces ground motions somewhat higher than median level predictions from NGA ground motion prediction equations (GMPEs).

  20. Hybrid Simulations of the Broadband Ground Motions for the 2008 MS8.0 Wenchuan, China, Earthquake

    Science.gov (United States)

    Yu, X.; Zhang, W.

    2012-12-01

    The Ms8.0 Wenchuan earthquake occurred on 12 May 2008 at 14:28 Beijing Time. It is the largest event to have occurred in mainland China since the 1976 Mw7.6 Tangshan earthquake. Because it occurred in a mountainous area, this great earthquake and the thousands of following aftershocks also caused many other geological disasters, such as landslides, mud-rock flows and "quake lakes" formed by landslide-induced damming of rivers. These resulted in tremendous losses of life and property: casualties numbered more than 80,000 people, and there were major economic losses. However, this earthquake is the first Ms 8 intraplate earthquake with good close-fault strong-motion coverage. Over four hundred strong-motion stations of the National Strong Motion Observation Network System (NSMONS) recorded the mainshock. Twelve of them are located within 20 km of the fault traces and another 33 within 100 km. These observations, together with hundreds of GPS vectors and multiple ALOS InSAR images, provide an unprecedented opportunity to study the rupture process of such a great intraplate earthquake. In this study, we calculate broadband near-field synthetic ground-motion waveforms of this great earthquake using a hybrid broadband ground-motion simulation methodology, which combines a deterministic approach at low frequencies (f < 1.0 Hz) with a theoretical Green's function calculation at high frequencies (up to ~ 10.0 Hz). The fault rupture is represented kinematically and incorporates spatial heterogeneity in slip, rupture speed, and rise time obtained from an inverted kinematic source model. Based on the aftershock data, we also analyze site effects for the near-field stations. Frequency-dependent site-amplification values for each station are calculated using genetic algorithms. For the calculation of the synthetic waveforms, we first carry out simulations using the hybrid methodology for frequencies up to 10.0 Hz.
Then, we consider for

  1. Connection with seismic networks and construction of real time earthquake monitoring system

    International Nuclear Information System (INIS)

    Chi, Heon Cheol; Lee, H. I.; Shin, I. C.; Lim, I. S.; Park, J. H.; Lee, B. K.; Whee, K. H.; Cho, C. S.

    2000-12-01

    It is natural to use the nuclear power plant seismic network, which has been operated by KEPRI (Korea Electric Power Research Institute), and the local seismic network operated by KIGAM (Korea Institute of Geology, Mining and Material). The real-time earthquake monitoring system is composed of a monitoring module and a database module. The database module stores and classifies seismic data, while the monitoring module displays the acceleration status in the nuclear power plant area. This research targeted, first, networking the KIN's seismic monitoring system with the KIGAM and KEPRI seismic networks and, second, constructing the KIN's independent earthquake monitoring system.

  2. Damage Level Prediction of Reinforced Concrete Building Based on Earthquake Time History Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Suryanita Reni

    2017-01-01

    Full Text Available A strong-motion earthquake can damage a building if earthquake loads were not considered in the building's design. This study aims to predict the damage level of a building due to an earthquake using the Artificial Neural Network method. The building model is a reinforced concrete building with ten floors and a storey height of 3.6 m. The model building received earthquake loads based on nine earthquake time-history records, each scaled to 0.5g, 0.75g, and 1.0g. The Artificial Neural Networks were designed in four architectural models using the MATLAB program. Model 1 used displacement, velocity, and acceleration as input; Model 2 used displacement only; Model 3 used velocity only; and Model 4 used acceleration only. The output of the neural networks is the damage level of the building, categorized as Safe (1), Immediate Occupancy (2), Life Safety (3), or Collapse Prevention (4). According to the results, the neural network models predict the damage level at a rate between 85% and 95%. Therefore, one solution for analyzing structural responses and the damage level promptly and efficiently when an earthquake occurs is to use an Artificial Neural Network.
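
    The input/output mapping described in this record can be miniaturized as follows: a one-hidden-layer network trained by plain gradient descent to map a single peak-response feature to a damage level 1-4. This is a toy stand-in for the paper's MATLAB models; the network size, the training pairs (drift ratio to damage level), and the hyperparameters are all invented.

```python
import math, random

def train_damage_net(data, hidden=6, epochs=3000, lr=0.05, seed=0):
    """One-input, one-hidden-layer network (tanh units, linear output)
    trained by stochastic gradient descent on (feature, damage level)
    pairs; prediction rounds and clamps the output to levels 1..4."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            out = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = out - y                      # gradient of MSE/2 w.r.t. out
            b2 -= lr * err
            for j in range(hidden):
                back = err * w2[j] * (1.0 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * back * x
                b1[j] -= lr * back
    def predict(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        out = sum(w2[j] * h[j] for j in range(hidden)) + b2
        return min(4, max(1, round(out)))
    return predict

# Invented pairs: peak inter-storey drift ratio (%) -> damage level 1..4
samples = [(0.1, 1), (0.2, 1), (0.5, 2), (0.7, 2),
           (1.0, 3), (1.4, 3), (2.0, 4), (2.5, 4)]
predict = train_damage_net(samples)
print([predict(x) for x, _ in samples])
```

    The paper's models take whole response time histories as input; collapsing that to one scalar feature here only keeps the sketch readable.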

  3. A study on generation of simulated earthquake ground motion for seismic design of nuclear power plant

    International Nuclear Information System (INIS)

    Ichiki, Tadaharu; Matsumoto, Takuji; Kitada, Yoshio; Osaki, Yorihiko; Kanda, Jun; Masao, Toru.

    1985-01-01

    The aseismatic design of nuclear power generation facilities carried out in Japan at present must conform to the ''Guideline for aseismatic design examination regarding power reactor facilities'' decided by the Atomic Energy Commission in 1978. In this guideline, the earthquake motion used for the analysis of dynamic earthquake response is to be specified by a magnitude, determined from investigation of historical earthquakes and active faults around the construction site, and by response spectra corresponding to the distance from the epicenter. Accordingly, when the analysis of dynamic earthquake response is actually carried out, simulated earthquake motion generated in conformity with these specified response spectra is used as the design input motion. This research was carried out to establish techniques for generating simulated earthquake motion that is more appropriate and rational from an engineering viewpoint, and the results are summarized in this paper. The techniques for generating simulated earthquake motion, the response of buildings, and the floor response spectra are described. (Kako, I.)
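
    The spectrum-compatible generation procedure, superposing sinusoids with random phases and iteratively rescaling each amplitude by the ratio of target to computed spectral ordinate, can be sketched as follows. This is a hedged miniature of the SIMQKE-style approach, not the paper's implementation; the periods, flat target spectrum, duration, damping, and envelope are all illustrative.

```python
import math, random

def sdof_peak(acc, dt, period, zeta=0.05):
    """Peak pseudo-acceleration of a damped SDOF oscillator driven by the
    ground acceleration acc, integrated by central differences."""
    w = 2.0 * math.pi / period
    u_prev = u = peak = 0.0
    c1 = 1.0 / dt**2 + zeta * w / dt
    for a in acc:
        u_next = (-a + (2.0 / dt**2 - w * w) * u
                  - (1.0 / dt**2 - zeta * w / dt) * u_prev) / c1
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return w * w * peak

def spectrum_matched_motion(periods, target, dur=10.0, dt=0.005,
                            iters=5, seed=2):
    """Sinusoids with random phases under a trapezoidal envelope; each
    pass rescales every component amplitude by target/computed Sa."""
    rng = random.Random(seed)
    n = int(dur / dt)
    t = [i * dt for i in range(n)]
    env = [min(ti / 2.0, 1.0, (dur - ti) / 2.0) for ti in t]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in periods]
    amps = [1.0] * len(periods)
    for _ in range(iters):
        acc = [env[i] * sum(a * math.sin(2.0 * math.pi / T * t[i] + ph)
                            for a, T, ph in zip(amps, periods, phases))
               for i in range(n)]
        sa = [sdof_peak(acc, dt, T) for T in periods]
        amps = [a * tg / s for a, tg, s in zip(amps, target, sa)]
    err = max(abs(s - tg) / tg for s, tg in zip(sa, target))
    return acc, sa, err

periods = [0.2, 0.3, 0.5, 0.7, 1.0]
acc, sa, err = spectrum_matched_motion(periods, [1.0] * len(periods))
print(round(err, 3))  # residual spectral mismatch after a few passes
```

    Each rescaling pass shrinks the mismatch because each oscillator responds mainly to the component at its own period; production codes refine this with many more components and target points.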

  4. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    Science.gov (United States)

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  6. Simulating Earthquakes for Science and Society: Earthquake Visualizations Ideal for use in Science Communication and Education

    Science.gov (United States)

    de Groot, R.

    2008-12-01

    The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). For example, The Great Southern California ShakeOut was based on a potential magnitude 7.8 earthquake on the southern San Andreas fault. The visualization created for the ShakeOut was a key scientific and communication tool for the earthquake drill. This presentation will also feature SCEC Virtual Display of Objects visualization software developed by SCEC Undergraduate Studies in Earthquake Information Technology interns. According to Gordin and Pea (1995), theoretically visualization should make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

  7. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been no earthquake (EQ) cycle simulations based on rate- and state-dependent friction (RSF) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a greater effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method used in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying first-order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys the RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value: the smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to the smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half
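
    The memory-variable idea can be shown on a standard linear solid: one extra state variable h, obeying a first-order ODE, reproduces exactly what the hereditary integral over the whole strain history computes, at O(1) cost per step instead of O(n). A hedged sketch with invented moduli and relaxation time, not the presentation's fault model:

```python
import math

G0, GINF, TAU = 30.0, 10.0, 5.0    # invented moduli and relaxation time

def stress_memory(strain, dt):
    """Stress via one memory variable h with
    dh/dt = -h/TAU + (G0-GINF)*d(strain)/dt: O(1) state, no history."""
    out, h, prev = [], 0.0, 0.0
    decay = math.exp(-dt / TAU)
    for eps in strain:
        h = decay * h + (G0 - GINF) * (eps - prev)
        prev = eps
        out.append(GINF * eps + h)
    return out

def stress_hereditary(strain, dt):
    """Reference: direct convolution with the relaxation modulus
    G(t) = GINF + (G0-GINF)*exp(-t/TAU); O(n) history work per step."""
    inc = [strain[0]] + [strain[i] - strain[i - 1]
                         for i in range(1, len(strain))]
    out = []
    for n in range(len(strain)):
        s = 0.0
        for m in range(n + 1):
            g = GINF + (G0 - GINF) * math.exp(-(n - m) * dt / TAU)
            s += g * inc[m]
        out.append(s)
    return out

dt = 0.01
strain = [1e-4 * min(i * dt, 1.0) for i in range(600)]  # ramp, then hold
a = stress_memory(strain, dt)
b = stress_hereditary(strain, dt)
print(max(abs(x - y) for x, y in zip(a, b)))  # the two agree to round-off
```

    During the hold phase the stress relaxes toward GINF times the strain, which is the quicker stress recovery (for smaller relaxation time) that shortens the recurrence interval in the abstract's block-SLS model.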

  8. A numerical simulation strategy on occupant evacuation behaviors and casualty prediction in a building during earthquakes

    Science.gov (United States)

    Li, Shuang; Yu, Xiaohui; Zhang, Yanjuan; Zhai, Changhai

    2018-01-01

    Casualty prediction for a building during earthquakes supports economic loss estimation in the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties during earthquakes, few current studies consider occupant movements in the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The simulation of occupant evacuation is verified against an evacuation process recorded in a school classroom during the real-life 2013 Ya'an earthquake in China. The occupant casualties in a building under earthquakes are evaluated by coupling the building collapse simulation by the finite element method, the occupant evacuation simulation, and the casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake demonstrates the effect of occupant movements on casualty prediction.
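
    A minimal cellular-automaton evacuation can be sketched as a grid of cells, a BFS distance-to-exit field, and one occupant per cell moving downhill on that field each step. This toy omits the paper's refinements (behavioral variety, collapse coupling, casualty criteria); the room geometry and occupant positions are invented.

```python
from collections import deque

def evacuate(width, height, exits, occupants, max_steps=500):
    """One occupant per cell; each step every occupant moves to the
    neighbouring cell with the lowest distance-to-exit (staying put if
    blocked) and leaves on reaching a cell adjacent to an exit."""
    # Breadth-first distance field from the exit cells.
    dist = {e: 0 for e in exits}
    q = deque(exits)
    while q:
        x, y = q.popleft()
        for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= c[0] < width and 0 <= c[1] < height and c not in dist:
                dist[c] = dist[(x, y)] + 1
                q.append(c)
    people = set(occupants)
    for step in range(1, max_steps + 1):
        moved = set()
        for x, y in sorted(people):
            cands = ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), (x, y))
            best = min(cands, key=lambda c: dist.get(c, 10**9))
            if best in exits:
                continue                       # occupant leaves the room
            if best in moved or best in people:
                best = (x, y)                  # conflict: stay put this step
            moved.add(best)
        people = moved
        if not people:
            return step
    return None

# Invented 10x10 room, one exit on the left wall, three occupants far away.
steps = evacuate(10, 10, exits={(0, 5)}, occupants={(9, 9), (9, 8), (8, 9)})
print(steps)
```

    Conflicts over the same target cell are what make evacuation time exceed the plain walking distance, and that congestion effect is what a refined model couples to the collapse time line.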

  9. Recognition of underground nuclear explosion and natural earthquake based on neural network

    International Nuclear Information System (INIS)

    Yang Hong; Jia Weimin

    2000-01-01

    Many features are extracted to improve the identification rate and reliability for discriminating underground nuclear explosions from natural earthquakes. How to combine these features is the key problem in pattern recognition. Based on the improved Delta algorithm, features of underground nuclear explosions and natural earthquakes are input into a BP neural network, and membership functions are constructed to classify the output values. The identification rate reaches 92.0%, which shows that the approach is feasible.

  10. A simulation of earthquake induced undrained pore pressure ...

    Indian Academy of Sciences (India)


  11. Data Files for Ground-Motion Simulations of the 1906 San Francisco Earthquake and Scenario Earthquakes on the Northern San Andreas Fault

    Science.gov (United States)

    Aagaard, Brad T.; Barall, Michael; Brocher, Thomas M.; Dolenc, David; Dreger, Douglas; Graves, Robert W.; Harmsen, Stephen; Hartzell, Stephen; Larsen, Shawn; McCandless, Kathleen; Nilsson, Stefan; Petersson, N. Anders; Rodgers, Arthur; Sjogreen, Bjorn; Zoback, Mary Lou

    2009-01-01

    This data set contains results from ground-motion simulations of the 1906 San Francisco earthquake, seven hypothetical earthquakes on the northern San Andreas Fault, and the 1989 Loma Prieta earthquake. The bulk of the data consists of synthetic velocity time-histories. Peak ground velocity on a 1/60th degree grid and geodetic displacements from the simulations are also included. Details of the ground-motion simulations and analysis of the results are discussed in Aagaard and others (2008a,b).

  12. Earthquake disaster simulation of civil infrastructures from tall buildings to urban areas

    CERN Document Server

    Lu, Xinzheng

    2017-01-01

    Based on more than 12 years of systematic investigation of earthquake disaster simulation of civil infrastructure, this book covers the major research outcomes, including a number of novel computational models, high-performance computing methods and realistic visualization techniques for tall buildings and urban areas, with particular emphasis on collapse prevention and mitigation in extreme earthquakes, earthquake loss evaluation and seismic resilience. Typical engineering applications to several of the world's tallest buildings (e.g., the 632 m tall Shanghai Tower and the 528 m tall Z15 Tower) and selected large cities in China (the Beijing Central Business District, Xi'an City, Taiyuan City and Tangshan City) are also introduced to demonstrate the advantages of the proposed computational models and techniques. The high-fidelity computational model developed in this book has proven to be the only feasible option to date for earthquake-induced collapse simulation of supertall buildings that are higher than 50...

  13. Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.

    2017-12-01

    SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high-resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved mega-thrust and a system of splay faults, as well as the seismic wave field and seafloor displacement, with frequency content up to 2.2 Hz. We validate the scenario against geodetic, seismological and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.

  14. Earthquake and nuclear explosion location using the global seismic network

    International Nuclear Information System (INIS)

    Lopez, L.M.

    1983-01-01

    The relocation of nuclear explosions, aftershock sequences and regional seismicity is addressed by using joint hypocenter determination, Lomnitz' distance domain location, and origin time and earthquake depth determination with local observations. Distance domain and joint hypocenter location are used for a stepwise relocation of nuclear explosions in the USSR. The resulting origin times are 2.5 seconds earlier than those obtained by the ISC. Local travel times from the relocated explosions are compared to the Jeffreys-Bullen tables. P times are found to be faster at 9-30° distances, the largest deviation being around 10 seconds at 13-18°. At these distances S travel times are also faster, by approximately 20 seconds. The 1977 Sumba earthquake sequence is relocated by iterative joint hypocenter determination of the events with the most station reports. Simultaneously determined station corrections are utilized for the relocation of smaller aftershocks. The relocated hypocenters indicate that the aftershocks were initially concentrated along the deep trench. Origin times and depths are recalculated for intermediate-depth and deep earthquakes using local observations in and around the Japanese Islands. It is found that origin time and depth differ systematically from ISC values for intermediate-depth events. Origin times obtained for events below the crust down to 100 km depth are earlier, whereas no general bias seems to exist for origin times of events in the 100-400 km depth range. The recalculated depths for earthquakes shallower than 100 km are shallower than the ISC depths. The depth estimates for earthquakes deeper than 100 km were increased by the recalculations.

  15. Earthquake and nuclear explosion location using the global seismic network

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, L.M.

    1983-01-01

    The relocation of nuclear explosions, aftershock sequences and regional seismicity is addressed by using joint hypocenter determination, Lomnitz' distance domain location, and origin time and earthquake depth determination with local observations. Distance domain and joint hypocenter location are used for a stepwise relocation of nuclear explosions in the USSR. The resulting origin times are 2.5 seconds earlier than those obtained by the ISC. Local travel times from the relocated explosions are compared to the Jeffreys-Bullen tables. P times are found to be faster at 9-30° distances, the largest deviation being around 10 seconds at 13-18°. At these distances S travel times are also faster, by approximately 20 seconds. The 1977 Sumba earthquake sequence is relocated by iterative joint hypocenter determination of the events with the most station reports. Simultaneously determined station corrections are utilized for the relocation of smaller aftershocks. The relocated hypocenters indicate that the aftershocks were initially concentrated along the deep trench. Origin times and depths are recalculated for intermediate-depth and deep earthquakes using local observations in and around the Japanese Islands. It is found that origin time and depth differ systematically from ISC values for intermediate-depth events. Origin times obtained for events below the crust down to 100 km depth are earlier, whereas no general bias seems to exist for origin times of events in the 100-400 km depth range. The recalculated depths for earthquakes shallower than 100 km are shallower than the ISC depths. The depth estimates for earthquakes deeper than 100 km were increased by the recalculations.

  16. Finite element simulation of earthquake cycle dynamics for continental listric fault system

    Science.gov (United States)

    Wei, T.; Shen, Z. K.

    2017-12-01

    We simulate stress/strain evolution through earthquake cycles for a continental listric fault system using the finite element method. A 2-D lithosphere model is developed, with the upper crust composed of plasto-elastic materials and the lower crust/upper mantle composed of visco-elastic materials. The medium is cut by a listric fault, which is rooted in the visco-elastic lower crust at its downdip end. The system is driven laterally by constant tectonic loading. Slip on the fault is controlled by rate-state friction. We start with a simple static/dynamic friction law and drive the system through multiple earthquake cycles. Our preliminary results show that: (a) the periodicity of the earthquake cycles is strongly modulated by the static/dynamic friction, with longer periods correlated with higher static friction and lower dynamic friction; (b) the periodicity of earthquakes is a function of fault depth, with less frequent events of greater magnitude occurring at shallower depth; and (c) rupture on the fault cannot release all the tectonic stress in the system; residual stress accumulates in the hanging-wall block at shallow depth close to the fault and has to be released either by conjugate faulting or inelastic folding. We are in the process of exploring different rheologic structures and friction laws and examining their effects on earthquake behavior and deformation pattern. The results will be applied to specific earthquakes and fault zones such as the 2008 great Wenchuan earthquake on the Longmen Shan fault system.
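
    Finding (a) above, that cycle period scales with the gap between static and dynamic friction, already shows up in the simplest zero-dimensional analogue: a spring loaded at constant rate that slips whenever the spring force reaches the static threshold and drops to the dynamic level. The sketch below is a hypothetical toy model with illustrative parameters, not the paper's finite element formulation.

```python
def stick_slip_events(mu_s, mu_d, k=1.0, load_rate=1.0, normal=1.0,
                      t_max=100.0, dt=0.01):
    """Spring-slider with static/dynamic friction.

    The block sticks while the spring force stays below the static
    strength mu_s * normal; when it is reached, the block slips
    instantly and the force drops to the dynamic level mu_d * normal.
    Returns the event times.
    """
    force, t, events = 0.0, 0.0, []
    while t < t_max:
        force += k * load_rate * dt        # steady tectonic loading
        if force >= mu_s * normal:         # static strength exceeded: event
            events.append(t)
            force = mu_d * normal          # stress drop to dynamic friction
        t += dt
    return events

# Larger (mu_s - mu_d) -> larger stress drop -> longer recurrence interval:
frequent = stick_slip_events(mu_s=0.6, mu_d=0.5)
rare = stick_slip_events(mu_s=0.8, mu_d=0.3)
```

The recurrence interval is simply (mu_s - mu_d) * normal / (k * load_rate), so the second run produces events five times less often.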

  17. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the modeling of fault interaction through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) the short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears upon introducing a small degree of stochasticity in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears upon introducing a small degree of stochasticity in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
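
    The fault-interaction ingredient, the Coulomb Failure Function, has a compact standard form: the Coulomb stress change on a receiver fault is the shear stress change plus an effective friction coefficient times the normal stress change (unclamping taken positive). The sketch below is a generic illustration of that formula; the effective friction value and the example numbers are assumptions, not values from the paper.

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault (MPa):

        dCFS = d_tau + mu_eff * d_sigma_n

    with d_sigma_n positive for unclamping (reduced compression).
    Positive dCFS moves the fault toward failure; mu_eff = 0.4 is a
    commonly assumed effective friction, not a value from the paper.
    """
    return d_shear_mpa + mu_eff * d_normal_mpa

# 0.5 MPa of encouraging shear change, partially offset by 0.2 MPa of
# added clamping: the net change still promotes failure.
dcfs = coulomb_stress_change(0.5, -0.2)
```

In a simulator of the kind described above, a positive dCFS from one event effectively advances the loading clock of neighboring faults, which is what produces the short-term clustering in result (1).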

  18. Slip reactivation model for the 2011 Mw9 Tohoku earthquake: Dynamic rupture, sea floor displacements and tsunami simulations.

    Science.gov (United States)

    Galvez, P.; Dalguer, L. A.; Rahnema, K.; Bader, M.

    2014-12-01

    The 2011 Mw9 Tohoku earthquake was recorded by a vast GPS and seismic network, giving seismologists an unprecedented chance to unveil complex rupture processes in a mega-thrust event. In fact, more than one thousand near-field strong-motion stations across Japan (K-Net and Kik-Net) revealed complex ground motion patterns attributed to source effects, capturing detailed information on the rupture process. The seismic stations surrounding the Miyagi region (MYGH013) show two clearly distinct waveforms separated by 40 seconds. This observation is consistent with the kinematic source model obtained from the inversion of strong motion data performed by Lee et al. (2011). In this model, two rupture fronts separated by 40 seconds emanate close to the hypocenter and propagate towards the trench. This feature is clearly observed by stacking the slip-rate snapshots on fault points aligned in the EW direction passing through the hypocenter (Gabriel et al., 2012), suggesting slip reactivation during the main event. Repeated slip in large earthquakes may occur due to frictional melting and thermal fluid pressurization effects. Kanamori & Heaton (2002) argued that during faulting of large earthquakes the temperature rises high enough to create melting and a further reduction of the friction coefficient. We created a 3D dynamic rupture model to reproduce this slip reactivation pattern using SPECFEM3D (Galvez et al., 2014), based on a slip-weakening friction law with two sudden sequential stress drops. Our model starts like an M7-8 earthquake that does not fully break the trench; then, after 40 seconds, a second rupture emerges close to the trench, producing additional slip capable of fully breaking the trench and transforming the earthquake into a megathrust event. The resulting sea floor displacements are in agreement with 1 Hz GPS displacements (GEONET). The seismograms agree roughly with seismic records along the coast of Japan. The simulated sea floor displacement reaches 8-10 meters of

  19. Connection with seismic networks and construction of real time earthquake monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Heon Cheol; Lee, H. I.; Shin, I. C.; Lim, I. S.; Park, J. H.; Lee, B. K.; Whee, K. H.; Cho, C. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2000-12-15

    It is natural to use the nuclear power plant seismic network, which has been operated by KEPRI (Korea Electric Power Research Institute), and the local seismic network operated by KIGAM (Korea Institute of Geology, Mining and Materials). The real-time earthquake monitoring system is composed of a monitoring module and a database module. The database module stores and classifies the seismic data, while the monitoring module displays the status of acceleration in the nuclear power plant area. This research had two targets: first, to connect KIN's seismic monitoring system with the KIGAM and KEPRI seismic networks, and second, to construct KIN's independent earthquake monitoring system.

  20. Aftershocks of the India Republic Day Earthquake: the MAEC/ISTAR Temporary Seismograph Network

    Science.gov (United States)

    Bodin, P.; Horton, S.; Johnston, A.; Patterson, G.; Bollwerk, J.; Rydelek, P.; Steiner, G.; McGoldrick, C.; Budhbhatti, K. P.; Shah, R.; Macwan, N.

    2001-05-01

    The MW=7.7 Republic Day (26 January 2001) earthquake in the Kachchh region of western India initiated a strong sequence of small aftershocks. Seventeen days after the mainshock, we deployed a network of portable digital event recorders as a cooperative project of the Mid America Earthquake Center in the US and the Institute for Scientific and Technological Advanced Research. Our network consisted of 8 event-triggered Kinemetrics K2 seismographs with 6 data channels (3 accelerometer, 3 Mark L-28/3d seismometer) sampled at 200 Hz, and one continuously recording Guralp CMG40TD broad-band seismometer sampled at 220 Hz. This network was in place for 18 days. Underlying our network deployment was the notion that, because of its tectonic and geologic setting, the Republic Day earthquake and its aftershocks might have source and/or propagation characteristics common to earthquakes in stable continental plate interiors rather than those on plate boundaries or within continental mobile belts. Thus, our goals were to provide data that could be used to compare the Republic Day earthquake with other earthquakes. In particular, the objectives of our network deployment were: (1) to characterize the spatial distribution and occurrence rates of aftershocks, (2) to examine source characteristics of the aftershocks (stress drops, focal mechanisms), (3) to study the effect of deep unconsolidated sediment on wave propagation, and (4) to determine whether other faults (notably the Allah Bundh) were simultaneously active. Most of our sites were on Jurassic bedrock, and all were either free-field or on the floor of light structures built on rock or with a thin soil cover. However, one of our stations was on a section of unconsolidated sediments hundreds of meters thick, adjacent to a site that was subjected to shaking-induced sediment liquefaction during the mainshock. The largest aftershock reported by global networks was an MW=5.9 event on January 28, prior to our deployment. The largest

  1. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    Science.gov (United States)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, simulation methods of earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost associated with obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response function as in the previous approach. In stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results in a normative three-dimensional problem, where a circular-shaped velocity-weakening area is set in a square-shaped fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. 
Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number
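
    The rate- and state-dependent friction law that the simulation couples to the crustal deformation solver can be written down in a few lines. The sketch below uses the standard Dieterich (aging-law) form with illustrative parameter values; a, b, Dc and the velocities are assumptions for demonstration, not the paper's settings.

```python
import math

def rsf_mu(v, theta, mu0=0.6, a=0.010, b=0.015, v0=1e-6, dc=0.02):
    """Rate-and-state friction coefficient:

        mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)

    with slip rate V (m/s), state variable theta (s), reference rate V0,
    and characteristic slip distance Dc (m). Values are illustrative."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def evolve_theta(theta, v, dt, dc=0.02):
    """Aging-law state evolution d(theta)/dt = 1 - V*theta/Dc (explicit Euler)."""
    return theta + dt * (1.0 - v * theta / dc)

# At steady state theta = Dc/V. With a < b the fault is velocity-weakening:
# steady-state friction drops as slip speeds up, enabling stick-slip cycles.
mu_slow = rsf_mu(1e-6, 0.02 / 1e-6)   # at the reference speed
mu_fast = rsf_mu(1e-5, 0.02 / 1e-5)   # 10x faster, lower friction
```

The a < b (velocity-weakening) choice is what makes the circular patch in the normative problem described above capable of seismic, rather than stable, slip.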

  2. Application of geostatistical simulation to compile seismotectonic provinces based on earthquake databases (case study: Iran)

    Science.gov (United States)

    Jalali, Mohammad; Ramazi, Hamidreza

    2018-04-01

    This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran chosen as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning bands simulation (TBSIM), was applied to generate synthetic data to improve the incomplete earthquake catalogues. The synthetic data were then added to the traditional information to study seismicity homogeneity and classify the areas according to tectonic and seismic properties, in order to update the seismotectonic provinces. In this paper, (i) the different magnitude types in the studied catalogues were homogenized to moment magnitude (Mw), and earthquake declustering was then carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression from 0.4 to 1.4 decimal degrees, with the maximum range identified at an azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied, yielding 68,800 synthetic events according to the spatial regression found in several directions; (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages, and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes: very high, high, moderate, and low
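
    The variography in step (iii) rests on the experimental semivariogram: for a given lag distance, average the squared differences of the variable over all station pairs separated by roughly that lag. A minimal single-lag implementation might look like the following; the coordinates and values are hypothetical, not the Iranian catalogue.

```python
import math

def semivariogram(points, values, lag, tol):
    """Experimental semivariogram at one lag:

        gamma(h) = sum (z_i - z_j)**2 / (2 * N(h))

    over all pairs whose separation is within `tol` of `lag`. Repeating
    this over a range of lags yields the curve whose range parameter is
    what the variography step above estimates."""
    total, count = 0.0, 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if abs(math.dist(points[i], points[j]) - lag) <= tol:
                total += (values[i] - values[j]) ** 2
                count += 1
    return total / (2 * count) if count else float("nan")

# Three collinear events one unit apart, with magnitudes 1, 2, 4:
gamma = semivariogram([(0, 0), (1, 0), (2, 0)], [1.0, 2.0, 4.0],
                      lag=1.0, tol=0.1)
```

The fitted variogram model (range, sill, azimuthal anisotropy) is then the input that conditions the turning-bands simulation.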

  3. Building Capacity for Earthquake Monitoring: Linking Regional Networks with the Global Community

    Science.gov (United States)

    Willemann, R. J.; Lerner-Lam, A.

    2006-12-01

    Installing or upgrading a seismic monitoring network is often among the mitigation efforts after earthquake disasters, and this is happening in response to the events both in Sumatra during December 2004 and in Pakistan during October 2005. These networks can yield improved hazard assessment, more resilient buildings where they are most needed, and emergency relief directed more quickly to the worst hit areas after the next large earthquake. Several commercial organizations are well prepared for the fleeting opportunity to provide the instruments that comprise a seismic network, including sensors, data loggers, telemetry stations, and the computers and software required for the network center. But seismic monitoring requires more than hardware and software, no matter how advanced. A well-trained staff is required to select appropriate and mutually compatible components, install and maintain telemetered stations, manage and archive data, and perform the analyses that actually yield the intended benefits. Monitoring is more effective when network operators cooperate with a larger community through free and open exchange of data, sharing information about working practices, and international collaboration in research. As an academic consortium, a facility operator and a founding member of the International Federation of Digital Seismographic Networks, IRIS has access to a broad range of expertise with the skills that are required to help design, install, and operate a seismic network and earthquake analysis center, and stimulate the core training for the professional teams required to establish and maintain these facilities. 
But delivering expertise quickly when and where it is unexpectedly in demand requires advance planning and coordination in order to respond to the needs of organizations that are building a seismic network, either with tight time constraints imposed by the budget cycles of aid agencies following a disastrous earthquake, or as part of more informed

  4. Three dimensional viscoelastic simulation on dynamic evolution of stress field in North China induced by the 1966 Xingtai earthquake

    Science.gov (United States)

    Chen, Lian-Wang; Lu, Yuan-Zhong; Liu, Jie; Guo, Ruo-Mei

    2001-09-01

    Using a three-dimensional (3D) viscoelastic finite element method (FEM), we study the dynamic evolution pattern of the coseismic change in Coulomb failure stress, and of the postseismic change due to rheological effects on a time scale of hundreds of years, induced by the MS=7.2 Xingtai earthquake of March 22, 1966. We then simulate the coseismic disturbance of the stress field in North China and its dynamic rate of change on a one-year scale caused by the Xingtai and Tangshan earthquakes during the 15 years from 1966 to 1980. Finally, we discuss how one strong earthquake may trigger another future strong earthquake.

  5. Numerical Simulation of Stress evolution and earthquake sequence of the Tibetan Plateau

    Science.gov (United States)

    Dong, Peiyu; Hu, Caibo; Shi, Yaolin

    2015-04-01

    The India-Eurasia collision produces N-S compression and results in large thrust faults at the southern edge of the Tibetan Plateau. Differential eastward flow of the lower crust of the plateau leads to large strike-slip faults and normal faults within the plateau. From 1904 to 2014, more than 30 earthquakes of Mw > 6.5 occurred sequentially in this distinctive tectonic environment. How did the stresses evolve during the last 110 years, and how did the earthquakes interact with each other? Can this knowledge help us forecast future seismic hazards? In this study, we simulate the evolution of the stress field and the earthquake sequence in the Tibetan Plateau over the last 110 years with a 2-D finite element model. Given an initial state of stress, the boundary condition was constrained by present-day GPS observations, assumed to apply at a constant rate during the 110 years. We calculated the stress evolution year by year; in the model, an earthquake occurs when the stress exceeds the crustal strength. The stress changes due to each large earthquake in the sequence were calculated and contributed to the stress evolution. A key issue is the choice of the initial stress state of the model, which is actually unknown. Usually, in studies of earthquake triggering, the initial stress is assumed to be zero, and only the stress changes caused by large earthquakes - the Coulomb failure stress changes (ΔCFS) - are calculated. To some extent, this simplified method is a powerful tool because it can reveal which fault, or which part of a fault, becomes relatively more risky or safer. Nonetheless, it does not utilize all the information available to us. The earthquake sequence reveals, though far from completely, some information about the stress state in the region. If the entire region is close to a self-organized critical or subcritical state, earthquake stress drops provide an estimate of the lower limit of the initial state. 
For locations where no earthquakes occurred during the period, the initial stress has to be

  6. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2012-01-01

    "Introduction to Network Simulator NS2" is a primer providing materials for NS2 beginners, whether students, professors, or researchers for understanding the architecture of Network Simulator 2 (NS2) and for incorporating simulation modules into NS2. The authors discuss the simulation architecture and the key components of NS2 including simulation-related objects, network objects, packet-related objects, and helper objects. The NS2 modules included within are nodes, links, SimpleLink objects, packets, agents, and applications. Further, the book covers three helper modules: timers, ra

  7. The ordered network structure and prediction summary for M ≥ 7 earthquakes in Xinjiang region of China

    International Nuclear Information System (INIS)

    Men, Ke-Pei; Zhao, Kai

    2014-01-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China and its adjacent region since 1800. The main orderly values are 30 a x k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Under the guidance of the information forecasting theory of Wen-Bo Weng, and based on previous research results, we combine ordered network structure analysis with complex network technology, focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, and add new information to further optimize the network, hence constructing the 2D- and 3D-ordered network structures of M ≥ 7 earthquakes. In this paper, the network structure fully reveals the regularity of the seismic activity of M ≥ 7 earthquakes in the study region during the past 210 years. On this basis, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction is presented: the next two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.

  8. A comparison among observations and earthquake simulator results for the allcal2 California fault model

    Science.gov (United States)

    Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak

    2012-01-01

    In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately, the ~100-year instrumental, several-hundred-year historical, and few-thousand-year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics-based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators' realism. The simulators appear to pass this necessary test. In addition, the physics-based simulators show similar behavior even though there are large differences in methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault-system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards-Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all-California fault

  9. Tsunami Numerical Simulation for Hypothetical Giant or Great Earthquakes along the Izu-Bonin Trench

    Science.gov (United States)

    Harada, T.; Ishibashi, K.; Satake, K.

    2013-12-01

    We performed tsunami numerical simulations with various giant/great fault models along the Izu-Bonin trench in order to examine the behavior of tsunamis originating in this region and to test the recurrence pattern of great interplate earthquakes along the Nankai trough off southwest Japan. As a result, large tsunami heights are expected in the Ryukyu Islands and on the Pacific coasts of Kyushu, Shikoku and western Honshu. The computed large tsunami heights support the hypothesis that the 1605 Keicho Nankai earthquake was not a tsunami earthquake along the Nankai trough but a giant or great earthquake along the Izu-Bonin trench (Ishibashi and Harada, 2013, SSJ Fall Meeting abstract). The Izu-Bonin subduction zone has been regarded as a so-called 'Mariana-type' subduction zone where M>7 interplate earthquakes do not inherently occur. However, since several M>7 outer-rise earthquakes have occurred in this region, and since the largest slip of the 2011 Tohoku earthquake (M9.0) took place on the shallow plate interface where strain accumulation had been considered small, the possibility of M>8.5 earthquakes in this region may not be negligible. The latest M7.4 outer-rise earthquake off the Bonin Islands on Dec. 22, 2010 produced small tsunamis on the Pacific coast of Japan, except for the Tohoku and Hokkaido districts, and a zone of abnormal seismic intensity in the Kanto and Tohoku districts. Ishibashi and Harada (2013) proposed a working hypothesis that the 1605 Keicho earthquake, which is considered a great tsunami earthquake along the Nankai trough, was a giant/great earthquake along the Izu-Bonin trench, based on the similarity of the distributions of ground shaking and tsunami of this event and the 2010 Bonin earthquake. In this study, in order to examine the behavior of tsunamis from giant/great earthquakes along the Izu-Bonin trench and to check Ishibashi and Harada's hypothesis, we performed tsunami numerical simulations from fault models along the Izu-Bonin trench
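
    A quick plausibility check on arrival times in such simulations follows from linear long-wave theory: tsunami celerity is c = sqrt(g*h), so travel time is the depth-weighted integral of 1/c along the path. The sketch below integrates this over a piecewise-constant depth profile; the profile is a hypothetical uniform ocean, not actual Izu-Bonin bathymetry.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_travel_time(depths_m, dx_m):
    """Long-wave travel time (s) across a depth profile.

    Each segment of width dx_m has celerity c = sqrt(g*h); the total
    time is the sum of dx/c over segments. Valid when wavelength >> depth,
    which holds for trans-oceanic tsunamis."""
    return sum(dx_m / math.sqrt(G * h) for h in depths_m)

# 1000 km of 4000 m deep ocean: c ~ 198 m/s, so roughly 84 minutes.
t = tsunami_travel_time([4000.0] * 100, 10_000.0)
```

Shoaling toward the coast slows the wave sharply (c drops with sqrt(h)), which is why near-coast bathymetry dominates the last part of the travel time and the run-up heights.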

  10. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.
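
    The adaptation dynamics described above can be illustrated on the smallest possible network: alternate a Kirchhoff solve for flows with a conductivity update of the form dC/dt = |Q|^(2*gamma) - c*C, so heavily used edges strengthen and unused edges atrophy. The discrete scheme, the three-node graph, and the parameter choices (gamma = 1/2, decay c = 1) below are illustrative assumptions, not the paper's finite-element discretization.

```python
def adapt_triangle(steps=400, dt=0.05, decay=1.0):
    """Cai-Hu-style adaptation on a triangle network.

    Nodes: 0 (source, unit inflow), 1 (junction), 2 (sink, grounded).
    Edges: direct 0-2 (Ca) versus the detour 0-1 (Cb) and 1-2 (Cc).
    Each step solves Kirchhoff's laws for the node potentials, computes
    edge flows Q, then updates dC/dt = |Q| - decay*C (gamma = 1/2).
    """
    ca = cb = cc = 1.0
    for _ in range(steps):
        # 2x2 Kirchhoff system for p0, p1 with p2 = 0:
        #   (ca+cb)*p0 - cb*p1 = 1   (node 0, unit injected flow)
        #   -cb*p0 + (cb+cc)*p1 = 0  (node 1, conservation)
        det = (ca + cb) * (cb + cc) - cb * cb
        p0 = (cb + cc) / det
        p1 = cb / det
        qa, qb, qc = ca * p0, cb * (p0 - p1), cc * p1
        # Adaptation: conductance grows with flow, decays otherwise.
        ca += dt * (abs(qa) - decay * ca)
        cb += dt * (abs(qb) - decay * cb)
        cc += dt * (abs(qc) - decay * cc)
    return ca, cb, cc

# The loop is unstable here: the direct edge wins and the detour atrophies.
ca, cb, cc = adapt_triangle()
```

This tiny example reproduces the qualitative dependence on the model parameters discussed in the paper: the activation exponent (here 2*gamma = 1) controls whether redundant loops survive or the network collapses to a tree.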

  11. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
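
    The generalized delta rule implemented by NNETS is ordinary backpropagation: propagate an error term backwards through sigmoid units, scaling by each activation's derivative. The sketch below is a hypothetical minimal 2-2-1 network trained on XOR, unrelated to the actual NNETS/Transputer code; all sizes and hyperparameters are illustrative.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=4000, lr=0.5, seed=1):
    """Train a 2-2-1 sigmoid network on XOR with the generalized delta
    rule (per-sample backpropagation). Returns (initial_mse, final_mse)."""
    rng = random.Random(seed)
    wh = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden: 2 inputs + bias
    wo = [rng.uniform(-1, 1) for _ in range(3)]                      # output: 2 hidden + bias
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
        return h, sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])

    def mse():
        return sum((t - forward(x)[1]) ** 2 for x, t in data) / len(data)

    initial = mse()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            # Delta rule: error times the sigmoid derivative y*(1-y),
            # propagated back through the output weights.
            delta_o = (t - y) * y * (1 - y)
            delta_h = [delta_o * wo[i] * h[i] * (1 - h[i]) for i in range(2)]
            wo = [wo[0] + lr * delta_o * h[0],
                  wo[1] + lr * delta_o * h[1],
                  wo[2] + lr * delta_o]
            for i in range(2):
                wh[i][0] += lr * delta_h[i] * x[0]
                wh[i][1] += lr * delta_h[i] * x[1]
                wh[i][2] += lr * delta_h[i]
    return initial, mse()
```

XOR is the classic test for this rule because it is not linearly separable: without the hidden layer (i.e., with the plain delta rule), no weight setting can fit it.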

  12. Simulation and monitoring tools to protect disaster management facilities against earthquakes

    Science.gov (United States)

    Saito, Taiki

    2017-10-01

    The earthquakes that hit Kumamoto Prefecture in Japan on April 14 and 16, 2016 severely damaged over 180,000 houses, including over 8,000 that were completely destroyed and others that were partially damaged, according to the Cabinet Office's report as of November 14, 2016 [1]. Following these earthquakes, other parts of the world have been struck by earthquakes, including Italy and New Zealand as well as the central part of Tottori Prefecture in October, where the earthquake-induced collapse of buildings has led to severe damage and casualties. The earthquakes in Kumamoto Prefecture, in fact, damaged various disaster management facilities, including Uto City Hall, which significantly hindered the city's evacuation and recovery operations. One of the most crucial issues in times of disaster is securing the functions of disaster management facilities such as city halls, hospitals and fire stations. To address this issue, seismic simulations were conducted on the East and West buildings of Toyohashi City Hall using the analysis tool developed by the author, STERA_3D, with the ground motion waveform prediction data for the Nankai Trough earthquake provided by the Ministry of Land, Infrastructure, Transport and Tourism. As a result, it was found that the buildings have sufficient earthquake resistance. It turned out, however, that the west building is at risk of wall cracks or ceiling panel collapse, while in the east building people would not be able to remain standing through strong shaking of intensity 7 on the Japanese seismic intensity scale, and cabinets not secured to the floors or walls would fall over. Additionally, three IT strong-motion seismometers were installed in the city hall to continuously monitor vibrations. Every five minutes, the vibration data obtained by the seismometers are sent to computers at Toyohashi University of Technology via the Internet for the analysis tools to run simulations in the cloud. 
If an earthquake strikes, it is able to use the results

  13. Data quality of seismic records from the Tohoku, Japan earthquake as recorded across the Albuquerque Seismological Laboratory networks

    Science.gov (United States)

    Ringler, A.T.; Gee, L.S.; Marshall, B.; Hutt, C.R.; Storm, T.

    2012-01-01

    Great earthquakes recorded across modern digital seismographic networks, such as the recent Tohoku, Japan, earthquake on 11 March 2011 (Mw = 9.0), provide unique datasets that ultimately lead to a better understanding of the Earth's structure (e.g., Pesicek et al. 2008) and earthquake sources (e.g., Ammon et al. 2011). For network operators, such events provide the opportunity to look at the performance across their entire network using a single event, as the ground motion records from the event will be well above every station's noise floor.

  14. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever-larger and more complex problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP, and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
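The TS-AWP itself is a massively parallel 3-D anelastic code and is not reproduced here; as a toy illustration of the finite-difference wave-propagation idea it builds on, consider a second-order leapfrog scheme for the 1-D scalar wave equation (grid sizes, wave speed, and the Gaussian pulse are invented for this sketch):

```python
# Illustrative 1-D finite-difference wave propagation (NOT the TS-AWP code):
# leapfrog scheme for u_tt = c^2 u_xx with a Gaussian initial pulse.
import math

def fd_wave_1d(nx=200, nt=150, c=1.0, dx=1.0, dt=0.5):
    # Courant number c*dt/dx = 0.5 < 1 keeps this explicit scheme stable.
    r2 = (c * dt / dx) ** 2
    u_prev = [math.exp(-0.01 * (i - nx // 2) ** 2) for i in range(nx)]
    u_curr = u_prev[:]                      # zero initial velocity
    for _ in range(nt):
        u_next = [0.0] * nx                 # fixed (reflecting) boundaries
        for i in range(1, nx - 1):
            u_next[i] = (2 * u_curr[i] - u_prev[i]
                         + r2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
        u_prev, u_curr = u_curr, u_next
    return u_curr

# The pulse splits into two half-amplitude pulses traveling in both directions.
wave = fd_wave_1d()
```

With zero initial velocity the Gaussian splits (d'Alembert) into two pulses that travel c·nt·dt = 75 grid points away from the center, illustrating the kind of explicit time stepping that production anelastic codes parallelize across thousands of processors.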

  15. ConvNetQuake: Convolutional Neural Network for Earthquake Detection and Location

    Science.gov (United States)

    Denolle, M.; Perol, T.; Gharbi, M.

    2017-12-01

    Over the last decades, the volume of seismic data has increased exponentially, creating a need for efficient algorithms to reliably detect and locate earthquakes. Today's most elaborate methods scan through the plethora of continuous seismic records, searching for repeating seismic signals. In this work, we leverage recent advances in artificial intelligence and present ConvNetQuake, a highly scalable convolutional neural network for probabilistic earthquake detection and location from single stations. We apply our technique to study two years of induced seismicity in Oklahoma (USA). We detect 20 times more earthquakes than previously cataloged by the Oklahoma Geological Survey, and our algorithm is at least one order of magnitude faster than other established detection methods.
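The actual ConvNetQuake architecture (deep strided convolutions over three-component waveforms, trained end-to-end on cataloged events) is not reproduced here; this toy sketch only illustrates the convolution → ReLU → pooling → threshold pipeline that underlies CNN-style window classification (the kernel, threshold, and waveforms are invented):

```python
# Conceptual sketch of CNN-style windowed event detection (not ConvNetQuake):
# one 1-D convolution filter + ReLU + global max-pooling, thresholded to label
# a fixed-length window as "event" vs "noise".
def conv1d(x, kernel):
    n, k = len(x), len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) for i in range(n - k + 1)]

def detect(window, kernel, threshold):
    feat = [max(0.0, v) for v in conv1d(window, kernel)]   # ReLU activation
    score = max(feat) if feat else 0.0                     # global max-pooling
    return score > threshold

# Toy data: a quiet "noise" window and a window with an impulsive arrival.
noise = [0.05, -0.03, 0.02, 0.01, -0.04, 0.03, -0.02, 0.02]
event = [0.05, -0.03, 1.0, -0.8, 0.6, -0.4, 0.02, 0.01]
kernel = [1.0, -1.0, 1.0]   # crude high-frequency (impulse-like) filter
```

In a trained network the kernels are learned from labeled waveforms rather than hand-picked, and a final softmax layer outputs detection probability and a coarse location class.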

  16. Simulations of biopolymer networks under shear

    NARCIS (Netherlands)

    Huisman, Elisabeth Margaretha

    2011-01-01

    In this thesis we present a new method to simulate realistic three-dimensional networks of biopolymers under shear. These biopolymer networks are important for the structural functions of cells and tissues. We use the method to analyze these networks under shear, and consider the elastic modulus,

  17. Signal Processing and Neural Network Simulator

    Science.gov (United States)

    Tebbe, Dennis L.; Billhartz, Thomas J.; Doner, John R.; Kraft, Timothy T.

    1995-04-01

    The signal processing and neural network simulator (SPANNS) is a digital signal processing simulator with the capability to invoke neural networks into signal processing chains. This is a generic tool which will greatly facilitate the design and simulation of systems with embedded neural networks. The SPANNS is based on the Signal Processing WorkSystemTM (SPWTM), a commercial-off-the-shelf signal processing simulator. SPW provides a block diagram approach to constructing signal processing simulations. Neural network paradigms implemented in the SPANNS include Backpropagation, Kohonen Feature Map, Outstar, Fully Recurrent, Adaptive Resonance Theory 1, 2, & 3, and Brain State in a Box. The SPANNS was developed by integrating SAIC's Industrial Strength Neural Networks (ISNN) Software into SPW.

  18. DEVELOPMENT OF USER-FRIENDLY SIMULATION SYSTEM OF EARTHQUAKE INDUCED URBAN SPREADING FIRE

    Science.gov (United States)

    Tsujihara, Osamu; Gawa, Hidemi; Hayashi, Hirofumi

    Simulating earthquake-induced urban spreading fire requires producing an analytical model of the target area, as well as analyzing the fire spread and presenting the results. To promote the use of such simulations, it is important that the simulation system be easy to operate and that the analysis results be demonstrated through realistic presentation. In this study, a simulation system is developed based on the Petri-net algorithm, in which the target area can be modeled through easy operations and the analytical results are presented as realistic 3-D animation.

  19. Stochastic strong ground motion simulations for the intermediate-depth earthquakes of the south Aegean subduction zone

    Science.gov (United States)

    Kkallas, Harris; Papazachos, Konstantinos; Boore, David; Margaris, Vasilis

    2015-04-01

    We have employed the stochastic finite-fault modelling approach of Motazedian and Atkinson (2005), as described by Boore (2009), for the simulation of Fourier spectra of the intermediate-depth earthquakes of the south Aegean subduction zone. The stochastic finite-fault method is a practical tool for simulating ground motions of future earthquakes, and it requires region-specific source, path and site characterizations as input model parameters. For this reason we have used data from both acceleration-sensor and broadband velocity-sensor instruments from intermediate-depth earthquakes with magnitudes of M 4.5-6.7 that occurred in the south Aegean subduction zone. Source mechanisms for intermediate-depth events of the south Aegean subduction zone are either collected from published information or are constrained using the main faulting types from Kkallas et al. (2013). The attenuation parameters for the simulations were adopted from Skarladoudis et al. (2013) and are based on regression analysis of a response spectra database. The site amplification functions for each soil class were adopted from Klimis et al. (1999), while the kappa values were constrained from the analysis of the EGELADOS network data by Ventouzi et al. (2013). The investigation of stress-drop values was based on simulations performed with the EXSIM code for several ranges of stress-drop values, comparing the results with the available Fourier spectra of intermediate-depth earthquakes. Significant differences regarding the strong-motion duration, which is determined from Husid plots (Husid, 1969), have been identified between the fore-arc and along-arc stations due to the effect of the low-velocity/low-Q mantle wedge on the seismic wave propagation. In order to estimate appropriate values for the duration of P-waves, we have automatically picked P-S durations on the available seismograms.
For the S-wave durations we have used the part of the seismograms starting from the S-arrivals and ending at the
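Stochastic simulation codes such as EXSIM build on the omega-squared point-source spectrum; a minimal sketch of the corner frequency and acceleration source spectrum conveys the idea (constants follow the common cgs convention with β in km/s, stress drop in bars, and moment in dyn·cm; this is textbook background, not the authors' calibrated regional model):

```python
# Omega-squared (Brune-type) source spectrum underlying stochastic simulation.
import math

def corner_frequency(m0_dyn_cm, stress_drop_bars, beta_km_s=3.5):
    # fc = 4.9e6 * beta * (stress_drop / M0)^(1/3), cgs-style units
    return 4.9e6 * beta_km_s * (stress_drop_bars / m0_dyn_cm) ** (1.0 / 3.0)

def source_accel_spectrum(f, m0_dyn_cm, fc):
    # omega-squared model: acceleration spectrum is flat above fc
    return (2 * math.pi * f) ** 2 * m0_dyn_cm / (1.0 + (f / fc) ** 2)

# Example: an Mw 5 event (M0 ~ 3.5e23 dyn·cm) with a 100-bar stress drop
# yields a corner frequency of roughly 1 Hz.
fc = corner_frequency(3.5e23, 100.0)
```

In the full stochastic method this source spectrum is multiplied by path attenuation, the kappa high-frequency filter, and the site amplification functions mentioned above before being combined with windowed Gaussian noise.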

  20. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    Energy Technology Data Exchange (ETDEWEB)

    Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com [Master Program of Geophysical Engineering, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia); Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id [Global Geophysical Research Group, Faculty of Mining and Petroleum Engineering, Institut Teknologi Bandung, Jalan Ganesha No.10, Bandung 40132 (Indonesia)

    2015-04-24

    Observation of earthquakes is routinely used in monitoring tectonic activity, and also at local scale in volcano-tectonic and geothermal activity observation. Determining a precise hypocenter involves finding the location that minimizes the error between the observed and the calculated travel times. When solving this nonlinear inverse problem, the simulated annealing inversion method can be applied as a global optimization technique whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano-tectonic, and geothermal field observations. The travel times were calculated using the ray-tracing shooting method. We then compared our results with those of Geiger's method to analyze reliability. Our results show hypocenter locations with smaller RMS errors than Geiger's method, which can be statistically associated with better solutions. The earthquake hypocenters also correlate well with geological structure in the study area. We recommend using adaptive simulated annealing inversion to relocate hypocenters in order to obtain precise and accurate earthquake locations.
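The paper's adaptive simulated annealing code (in Matlab) and its ray tracer are not reproduced here; the following plain-Python sketch shows the essence of simulated-annealing hypocenter search with straight-ray travel times (the velocity, station geometry, and cooling schedule are all assumed for illustration):

```python
# Minimal simulated-annealing hypocenter search (illustrative only).
# Travel times are straight-ray: t = distance / v, a common simplification.
import math, random

random.seed(42)
V = 6.0                                            # km/s, assumed velocity
stations = [(0, 0, 0), (40, 0, 0), (0, 40, 0), (40, 40, 0), (20, 60, 0)]
true_hypo = (22.0, 18.0, 10.0)                     # x, y, depth in km

def travel_times(h):
    return [math.dist(h, s) / V for s in stations]

def rms(h, obs):
    return math.sqrt(sum((c - o) ** 2 for c, o in zip(travel_times(h), obs))
                     / len(obs))

obs = travel_times(true_hypo)                      # noise-free synthetic picks
hypo = (10.0, 10.0, 5.0)                           # initial guess
misfit, T = rms(hypo, obs), 1.0
for _ in range(20000):
    T *= 0.9995                                    # geometric cooling schedule
    cand = tuple(c + random.gauss(0, 5 * T) for c in hypo)
    m = rms(cand, obs)
    # Metropolis rule: always accept improvements, sometimes accept worse moves
    if m < misfit or random.random() < math.exp((misfit - m) / max(T, 1e-9)):
        hypo, misfit = cand, m
```

Because acceptance of worse moves decays with temperature, the search can escape local minima early on and fine-tunes near the optimum as T shrinks, which is why convergence does not depend on the initial guess.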

  1. Hybrid Broadband Ground-Motion Simulation Using Scenario Earthquakes for the Istanbul Area

    KAUST Repository

    Reshi, Owais A.

    2016-04-13

    Seismic design, analysis and retrofitting of structures demand an intensive assessment of potential ground motions in seismically active regions. Peak ground motions and the frequency content of seismic excitations effectively influence the behavior of structures. In regions of sparse ground motion records, ground-motion simulations provide synthetic seismic records, which not only provide insight into the mechanisms of earthquakes but also help in improving some aspects of earthquake engineering. Broadband ground-motion simulation methods typically utilize physics-based modeling of source and path effects at low frequencies coupled with high-frequency semi-stochastic methods. I apply the hybrid simulation method of Mai et al. (2010) to model several scenario earthquakes in the Marmara Sea, an area of high seismic hazard. Simulated ground motions were generated at 75 stations using systematically calibrated model parameters. The region-specific source, path and site model parameters were calibrated by simulating an Mw 4.1 Marmara Sea earthquake that occurred on November 16, 2015 on the fault segment in the vicinity of Istanbul. The calibrated parameters were then used to simulate scenario earthquakes with magnitudes Mw 6.0, 6.25, 6.5 and 6.75 on the Marmara Sea fault. The effects of fault geometry, hypocenter location, slip distribution and rupture propagation were thoroughly studied to understand variability in ground motions. A rigorous analysis of the waveforms reveals that these parameters are critical for determining the behavior of ground motions, especially in the near-field. Comparison of simulated ground-motion intensities with ground-motion prediction equations indicates the need to develop a region-specific ground-motion prediction equation for the Istanbul area. Peak ground motion maps are presented to illustrate the shaking in the Istanbul area due to the scenario earthquakes. The southern part of Istanbul, including the Princes' Islands, shows high amplitudes

  2. Splitting Strategy for Simulating Genetic Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Xiong You

    2014-01-01

    Full Text Available The splitting approach is developed for the numerical simulation of genetic regulatory networks with a stable steady-state structure. The numerical results of the simulation of a one-gene network, a two-gene network, and a p53-mdm2 network show that the new splitting methods constructed in this paper are remarkably more effective and more suitable for long-term computation with large steps than the traditional general-purpose Runge-Kutta methods. The new methods have no restriction on the choice of stepsize due to their infinitely large stability regions.
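The splitting idea can be illustrated on the simplest one-gene model dx/dt = s − ax (production s, degradation a): solving the stiff degradation term exactly and the production term separately gives a scheme that stays bounded at step sizes where forward Euler diverges (a toy sketch of Lie splitting, not the paper's higher-order methods):

```python
# Lie-splitting integrator for the one-gene model dx/dt = s - a*x,
# illustrating why splitting tolerates large steps: the stiff degradation
# term is solved exactly instead of being discretized.
import math

def step_split(x, s, a, dt):
    x = x * math.exp(-a * dt)      # exact solve of dx/dt = -a*x
    return x + s * dt              # explicit solve of dx/dt = s

def step_euler(x, s, a, dt):
    return x + dt * (s - a * x)    # forward Euler on the full right-hand side

s, a, dt, steps = 2.0, 50.0, 0.1, 100    # a*dt = 5 >> 2: Euler is unstable
x_split = x_euler = 1.0
for _ in range(steps):
    x_split = step_split(x_split, s, a, dt)
    x_euler = step_euler(x_euler, s, a, dt)
```

The split scheme settles near a bounded steady value for any step size (with first-order accuracy), while forward Euler oscillates and blows up whenever a·dt exceeds 2, mirroring the stability contrast with general-purpose Runge-Kutta methods described above.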

  3. Numerical simulation of faulting in the Sunda Trench shows that seamounts may generate megathrust earthquakes

    Science.gov (United States)

    Jiao, L.; Chan, C. H.; Tapponnier, P.

    2017-12-01

    The role of seamounts in generating earthquakes has been debated, with some studies suggesting that seamounts could be truncated to generate megathrust events, while other studies indicate that the maximum size of megathrust earthquakes could be reduced as subducting seamounts could lead to segmentation. The debate is highly relevant for the seamounts discovered along the Mentawai patch of the Sunda Trench, where previous studies have suggested that a megathrust earthquake will likely occur within decades. In order to model the dynamic behavior of the Mentawai patch, we simulated forearc faulting caused by seamount subduction using the Discrete Element Method. Our models show that rupture behavior in the subduction system is dominated by the stiffness of the overriding plate. When stiffness is low, a seamount can be a barrier to rupture propagation, resulting in several smaller (M≤8.0) events. If, however, stiffness is high, a seamount can cause a megathrust earthquake (M8 class). In addition, we show that a splay fault in the subduction environment could only develop when a seamount is present, and a larger offset along a splay fault is expected when the stiffness of the overriding plate is higher. Our dynamic models are not only consistent with previous findings from seismic profiles and earthquake activity, but the models also better constrain the rupture behavior of the Mentawai patch, thus contributing to subsequent seismic hazard assessment.

  4. The 2008 West Bohemia earthquake swarm in the light of the WEBNET network

    Czech Academy of Sciences Publication Activity Database

    Fischer, T.; Horálek, Josef; Michálek, Jan; Boušková, Alena

    2010-01-01

    Roč. 14, č. 4 (2010), s. 665-682 ISSN 1383-4649 Grant - others:GA MŠk(CZ) specifický-výzkum; Norway Grants(NO) A/CZ0046/2/0015 Institutional research plan: CEZ:AV0Z30120515 Keywords: earthquake swarm * seismic network * seismicity Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.274, year: 2010

  5. Incorporating Low-Cost Seismometers into the Central Weather Bureau Seismic Network for Earthquake Early Warning in Taiwan

    Directory of Open Access Journals (Sweden)

    Da-Yi Chen

    2015-01-01

    Full Text Available A dense seismic network can increase Earthquake Early Warning (EEW) system capability to estimate earthquake information with higher accuracy. It is also critical for generating fast, robust earthquake alarms before strong ground shaking hits the target area. However, building a dense seismic network with traditional seismometers is too expensive and may not be practical. Using low-cost Micro-Electro Mechanical System (MEMS) accelerometers is a potential solution for quickly deploying a large number of sensors around the monitored region. An EEW system constructed using a dense seismic network with 543 MEMS sensors in Taiwan is presented. The system also incorporates the official seismic network of the Central Weather Bureau (CWB). The real-time data streams generated by the two networks are integrated using the Earthworm software. This paper illustrates the methods used by the integrated system for estimating earthquake information and evaluates the system performance. We applied the Earthworm picker to the seismograms recorded by the MEMS sensors (Chen et al. 2015), following new picking constraints to accurately detect P-wave arrivals, and use a new regression equation for estimating earthquake magnitudes. An off-line test was implemented using 46 earthquakes with magnitudes ranging from ML 4.5-6.5 to calibrate the system. The experimental results show that the integrated system produces stable source parameter estimates and issues alarms much faster than the current system run by the CWB seismic network (CWBSN).
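The Earthworm picker and its constraints are not reproduced here; a classic STA/LTA energy-ratio trigger, the textbook ancestor of such pickers, can be sketched as follows (window lengths, threshold, and the synthetic trace are invented):

```python
# Classic STA/LTA trigger (illustrative; the actual Earthworm picker and the
# paper's additional picking constraints are more elaborate).
import math

def sta_lta_trigger(trace, n_sta=5, n_lta=20, threshold=4.0):
    """Return the first sample index where STA/LTA exceeds threshold, or None."""
    energy = [v * v for v in trace]
    for i in range(n_lta, len(trace) - n_sta):
        lta = sum(energy[i - n_lta:i]) / n_lta   # long-term average (background)
        sta = sum(energy[i:i + n_sta]) / n_sta   # short-term average (onset)
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

# Synthetic trace: low-amplitude background, tenfold amplitude jump at sample 60.
trace = [0.1 * math.sin(0.7 * i) for i in range(60)] + \
        [1.0 * math.sin(0.7 * i) for i in range(60, 100)]
onset = sta_lta_trigger(trace)   # fires as soon as the STA window reaches ~60
```

Real-time pickers add refinements (bandpass filtering, de-triggering, signal-quality checks) on top of this ratio test, which is what the "new picking constraints" in the abstract refer to.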

  6. The Great Maule earthquake: seismicity prior to and after the main shock from amphibious seismic networks

    Science.gov (United States)

    Lieser, K.; Arroyo, I. G.; Grevemeyer, I.; Flueh, E. R.; Lange, D.; Tilmann, F. J.

    2013-12-01

    The Chilean subduction zone is among the most seismically active plate boundaries in the world, and its coastal ranges suffer a magnitude 8 or larger megathrust earthquake every 10-20 years. The Constitución-Concepción or Maule segment in central Chile between ~35.5°S and 37°S was considered to be a mature seismic gap, having last ruptured in 1835 and being seismically quiet, with no magnitude 4.5 or larger earthquakes reported in global catalogues. It is located to the north of the nucleation area of the 1960 magnitude 9.5 Valdivia earthquake and to the south of the 1928 magnitude 8 Talca earthquake. On 27 February 2010 this segment ruptured in a Mw=8.8 earthquake, nucleating near 36°S and affecting a 500-600 km long segment of the margin between 34°S and 38.5°S. Aftershocks occurred along a roughly 600 km long portion of the central Chilean margin, most of them offshore. Therefore, a network of 30 ocean-bottom seismometers was deployed in the northern portion of the rupture area for a three-month period, recording local offshore aftershocks between 20 September 2010 and 25 December 2010. In addition, data from a network consisting of 33 land stations of the GeoForschungsZentrum Potsdam were included in the network, providing an ideal coverage of both the rupture plane and areas affected by post-seismic slip as deduced from geodetic data. Aftershock locations are based on automatically detected P-wave onsets and a 2.5D velocity model of the combined on- and offshore network. Aftershock seismicity analysis in the northern part of the survey area reveals a well-resolved, seismically active splay fault in the accretionary prism of the Chilean forearc. Our findings imply that in the northernmost part of the rupture zone, co-seismic slip most likely propagated along the splay fault and not the subduction thrust fault. In addition, the updip limit of aftershocks along the plate interface can be verified to about 40 km landwards from the deformation front. Prior to

  7. Effects of earthquake rupture shallowness and local soil conditions on simulated ground motions

    International Nuclear Information System (INIS)

    Apsel, Randy J.; Hadley, David M.; Hart, Robert S.

    1983-03-01

    The paucity of strong ground motion data in the Eastern U.S. (EUS), combined with well recognized differences in earthquake source depths and wave propagation characteristics between Eastern and Western U.S. (WUS) suggests that simulation studies will play a key role in assessing earthquake hazard in the East. This report summarizes an extensive simulation study of 5460 components of ground motion representing a model parameter study for magnitude, distance, source orientation, source depth and near-surface site conditions for a generic EUS crustal model. The simulation methodology represents a hybrid approach to modeling strong ground motion. Wave propagation is modeled with an efficient frequency-wavenumber integration algorithm. The source time function used for each grid element of a modeled fault is empirical, scaled from near-field accelerograms. This study finds that each model parameter has a significant influence on both the shape and amplitude of the simulated response spectra. The combined effect of all parameters predicts a dispersion of response spectral values that is consistent with strong ground motion observations. This study provides guidelines for scaling WUS data from shallow earthquakes to the source depth conditions more typical in the EUS. The modeled site conditions range from very soft soil to hard rock. To the extent that these general site conditions model a specific site, the simulated response spectral information can be used to either correct spectra to a site-specific environment or used to compare expected ground motions at different sites. (author)

  8. Network Modeling and Simulation A Practical Perspective

    CERN Document Server

    Guizani, Mohsen; Khan, Bilal

    2010-01-01

    Network Modeling and Simulation is a practical guide to using modeling and simulation to solve real-life problems. The authors give a comprehensive exposition of the core concepts in modeling and simulation, and then systematically address the many practical considerations faced by developers in modeling complex large-scale systems. The authors provide examples from computer and telecommunication networks and use these to illustrate the process of mapping generic simulation concepts to domain-specific problems in different industries and disciplines. Key features: Provides the tools and strate

  9. Numerical simulation of the 1976 Ms7.8 Tangshan Earthquake

    Science.gov (United States)

    Li, Zhengbo; Chen, Xiaofei

    2017-04-01

    An Ms 7.8 earthquake struck Tangshan in 1976, causing more than 240,000 deaths and almost destroying the whole city. Numerous studies indicated that the surface rupture zone extends 8 to 11 km south of Tangshan City. The fault system is composed of more than ten NE-trending, right-lateral, strike-slip, left-stepping echelon faults, with a general strike direction of N30°E. However, recent studies propose that the surface ruptures appeared over a larger area. To simulate the rupture process closer to the real situation, the curvilinear-grid finite-difference method presented by Zhang et al. (2006, 2014), which can handle the free surface and complex geometry, was implemented to investigate the dynamic rupture and ground motion of the Tangshan earthquake. With data from field surveys, seismic sections, boreholes and trenching results given by different studies, several fault geometry models were established. The intensity, seismic waveforms and displacements resulting from the simulation of the different models were compared with the observed data. The comparison of these models shows details of the rupture process of the Tangshan earthquake and implies that super-shear rupture may have occurred, which is important for better understanding of this complicated rupture process and the seismic hazard distribution of this earthquake.

  10. Interfacing Network Simulations and Empirical Data

    Science.gov (United States)

    2009-05-01

    contraceptive innovations in the Cameroon. He found that real-world adoption rates did not follow simulation models when the network relationships were...Analysis of the Coevolution of Adolescents ’ Friendship Networks, Taste in Music, and Alcohol Consumption. Methodology, 2: 48-56. Tichy, N.M., Tushman

  11. The Quake-Catcher Network: Improving Earthquake Strong Motion Observations Through Community Engagement

    Science.gov (United States)

    Cochran, E. S.; Lawrence, J. F.; Christensen, C. M.; Chung, A. I.; Neighbors, C.; Saltzman, J.

    2010-12-01

    The Quake-Catcher Network (QCN) involves the community in strong motion data collection by utilizing volunteer computing techniques and low-cost MEMS accelerometers. Volunteer computing provides a mechanism to expand strong-motion seismology with minimal infrastructure costs, while promoting community participation in science. Micro-Electro-Mechanical Systems (MEMS) triaxial accelerometers can be attached to a desktop computer via USB and are internal to many laptops. Preliminary shake table tests show the MEMS accelerometers can record high-quality seismic data with instrument response similar to research-grade strong-motion sensors. QCN began distributing sensors and software to K-12 schools and the general public in April 2008 and has grown to roughly 1500 stations worldwide. We also recently tested whether sensors could be quickly deployed as part of a Rapid Aftershock Mobilization Program (RAMP) following the 2010 M8.8 Maule, Chile earthquake. Volunteers are recruited through media reports, web-based sensor request forms, as well as social networking sites. Using data collected to date, we examine whether a distributed sensing network can provide valuable seismic data for earthquake detection and characterization while promoting community participation in earthquake science. We utilize client-side triggering algorithms to determine when significant ground shaking occurs and this metadata is sent to the main QCN server. On average, trigger metadata are received within 1-10 seconds from the observation of a trigger; the larger data latencies are correlated with greater server-station distances. When triggers are detected, we determine if the triggers correlate with others in the network using spatial and temporal clustering of incoming trigger information. If a minimum number of triggers are detected then a QCN-event is declared and an initial earthquake location and magnitude are estimated. 
Initial analysis suggests that the estimated locations and magnitudes are
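The spatio-temporal trigger clustering described above might be sketched as follows (the window lengths, minimum-trigger threshold, and the centroid "location" are invented placeholders; QCN's actual association and location algorithms differ):

```python
# Sketch of spatio-temporal trigger association for a QCN-style network.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two lat/lon points, in km
    p = math.pi / 180.0
    a = (math.sin((lat2 - lat1) * p / 2) ** 2
         + math.cos(lat1 * p) * math.cos(lat2 * p)
         * math.sin((lon2 - lon1) * p / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def declare_event(triggers, t_win=10.0, d_win=100.0, min_triggers=4):
    """triggers: list of (time_s, lat, lon) station triggers.
    Declare an event if some trigger has >= min_triggers neighbors within
    t_win seconds and d_win km; return the cluster centroid, else None."""
    best = []
    for seed in triggers:
        cluster = [t for t in triggers
                   if abs(t[0] - seed[0]) <= t_win
                   and haversine_km(t[1], t[2], seed[1], seed[2]) <= d_win]
        if len(cluster) > len(best):
            best = cluster
    if len(best) < min_triggers:
        return None
    n = len(best)
    return (sum(t[1] for t in best) / n, sum(t[2] for t in best) / n)
```

Requiring several nearly simultaneous, spatially clustered triggers is what suppresses false alarms from a single bumped laptop, at the cost of a short declaration delay.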

  12. Stochastic Simulation of Biomolecular Reaction Networks Using the Biomolecular Network Simulator Software

    National Research Council Canada - National Science Library

    Frazier, John; Chusak, Yaroslav; Foy, Brent

    2008-01-01

    .... The software uses either exact or approximate stochastic simulation algorithms for generating Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks...

  13. The Central and Eastern European Earthquake Research Network - CE3RN

    Science.gov (United States)

    Bragato, Pier Luigi; Costa, Giovanni; Gallo, Antonella; Gosar, Andrej; Horn, Nikolaus; Lenhardt, Wolfgang; Mucciarelli, Marco; Pesaresi, Damiano; Steiner, Rudolf; Suhadolc, Peter; Tiberi, Lara; Živčić, Mladen; Zoppé, Giuliana

    2014-05-01

    The region of Central and Eastern Europe is an area characterised by relatively high seismicity. The active seismogenic structures and the related potentially destructive events are located in the proximity of the political boundaries between the several countries in the area. An example is the seismic region between NE Italy (Friuli Venezia Giulia, Trentino-Alto Adige and Veneto), Austria (Tyrol, Carinthia) and Slovenia. When a destructive earthquake occurs in this area, all three countries are involved. In 2001 the Agencija Republike Slovenije za Okolje (ARSO) in Slovenia, the Department of Mathematics and Geoscience of the University of Trieste (DMG), the OGS (Istituto Nazionale di Oceanografia e di Geofisica Sperimentale) in Italy and the Zentralanstalt für Meteorologie und Geodynamik (ZAMG) in Austria signed an agreement for real-time seismological data exchange in the Southeastern Alps region. Soon after, the Interreg IIIa Italia-Austria projects "Trans-National Seismological Networks in the South-Eastern Alps" and "FASTLINK" started. The main goal of these projects was the creation of a transfrontier network for common seismic monitoring of the region for scientific and civil defense purposes. Over these years, the high-quality data recorded by the transfrontier network have been used by the involved institutions for their scientific research, institutional activities and civil defense services. Several common international projects have been realized with success. The instrumentation has been continuously upgraded, the installation quality improved, as well as the data transmission efficiency. In 2013 ARSO, DMG, OGS and ZAMG decided to name the cooperative network the "Central and Eastern European Earthquake Research Network - CE3RN". The national/regional seismic networks currently involved in CE3RN are: • Austrian national BB network (ZAMG - OE) • Friuli Veneto SP network (OGS - FV) • Friuli VG

  14. Network simulations of optical illusions

    Science.gov (United States)

    Shinbrot, Troy; Lazo, Miguel Vivar; Siu, Theo

    We examine a dynamical network model of visual processing that reproduces several aspects of a well-known optical illusion, including subtle dependencies on curvature and scale. The model uses a genetic algorithm to construct the percept of an image, and we show that this percept evolves dynamically so as to produce the illusions reported. We find that the perceived illusions are hardwired into the model architecture and we propose that this approach may serve as an archetype to distinguish behaviors that are due to nature (i.e. a fixed network architecture) from those subject to nurture (that can be plastically altered through learning).

  15. Long-period ground motions at near-regional distances caused by the PL wave from, inland earthquakes: Observation and numerical simulation of the 2004 Mid-Niigata, Japan, Mw6.6 earthquake

    Science.gov (United States)

    Furumura, T.; Kennett, B. L. N.

    2017-12-01

    We examine the development of large, long-period ground motions at near-regional distances (D=50-200 km) generated by the PL wave from large, shallow inland earthquakes, based on the analysis of strong motion records and finite-difference method (FDM) simulations of seismic wave propagation. The PL wave can be represented as leaking modes of the crustal waveguide and is commonly observed at regional distances between 300 and 1000 km as a dispersed, long-period signal with a dominant period of about 20 s. However, observations of recent earthquakes at the dense K-NET and KiK-net strong motion networks in Japan demonstrate the dominance of the PL wave at near-regional (D=50-200 km) distances as, e.g., for the 2004 Mid Niigata, Japan, earthquake (Mw6.6; h=13 km). The observed PL wave signal between the P and S waves shows a large, dispersed wave packet with a dominant period of about T=4-10 s and amplitudes comparable to or larger than the later arrivals of the S and surface waves. Thus, the early arrival of the long-period PL wave immediately after the P wave can induce resonance in large-scale structures such as high-rise buildings and large oil-storage tanks, with potential for disaster. Such strong effects often occurred during the 2004 Mid Niigata earthquakes and other large earthquakes near the Kanto (Tokyo) basin. FDM simulation of seismic wave propagation employing realistic 3-D sedimentary structure models demonstrates the process by which the PL wave develops at near-regional distances from shallow, crustal earthquakes by constructive interference of the P wave in the long-period band. The amplitude of the PL wave is very sensitive to low-velocity structure near the surface. Lowered velocities help to develop large SV-to-P conversion and weaken the P-to-SV conversion at the free surface. Both effects enhance the multiple P reflections in the crustal waveguide and prevent the leakage of seismic energy into the mantle. However, a very

  16. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    networks are described and a mathematical model is proposed for a two-level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing, which uses a construction algorithm as a sub-algorithm to determine edges and route the demand. Performance of different versions of the algorithm is reported in terms of runtime and solution quality. The algorithm is able to find solutions of reasonable quality in approximately 1 hour for networks with 100 nodes.
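
    The annealing-plus-construction idea above can be sketched as a generic loop. The toy instance below (five hypothetical node coordinates, a ring objective, a swap move) is a simplified stand-in for the paper's two-level design problem, not its actual algorithm:

```python
import math
import random

def anneal(cost, neighbor, state, t0=1.0, cooling=0.995, steps=5000, seed=1):
    """Generic simulated-annealing loop: accept worse states with
    probability exp(-delta/T) and cool the temperature geometrically."""
    rng = random.Random(seed)
    best = cur = state
    best_c = cur_c = cost(state)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= cooling
    return best, best_c

# Toy instance: choose the cheapest ring over backbone nodes (hypothetical data).
pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]

def ring_cost(order):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def swap_two(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    out = list(order)
    out[i], out[j] = out[j], out[i]
    return out

best, best_cost = anneal(ring_cost, swap_two, list(range(len(pts))))
```

    In the paper's setting the `cost` and `neighbor` callbacks would instead wrap the construction sub-algorithm that builds edges and routes demand.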

  17. Earthquake location determination using data from DOMERAPI and BMKG seismic networks: A preliminary result of DOMERAPI project

    Energy Technology Data Exchange (ETDEWEB)

    Ramdhan, Mohamad [Study Program of Earth Science, Institut Teknologi Bandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia); Agency for Meteorology, Climatology and Geophysics of Indonesia (BMKG) Jl. Angkasa 1 No. 2 Kemayoran, Jakarta Pusat, 10720 (Indonesia); Nugraha, Andri Dian; Widiyantoro, Sri [Global Geophysics Research Group, Faculty of Mining and Petroleum Engineering, Institut TeknologiBandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia); Métaxian, Jean-Philippe [Institut de Recherche pour le Développement (IRD) (France); Valencia, Ayunda Aulia, E-mail: mohamad.ramdhan@bmkg.go.id [Study Program of Geophysical Engineering, Institut Teknologi Bandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia)

    2015-04-24

    The DOMERAPI project has been conducted to comprehensively study the internal structure of Merapi volcano, especially the deep structural features beneath the volcano. The DOMERAPI earthquake monitoring network consists of 46 broad-band seismometers installed around Merapi volcano. Earthquake hypocenter determination is a very important step for further studies, such as hypocenter relocation and seismic tomographic imaging. Ray paths from earthquake events occurring outside the Merapi region can be utilized to delineate the deep magma structure. Earthquakes occurring outside the DOMERAPI seismic network produce an azimuthal gap greater than 180°. Owing to this situation, stations from the BMKG seismic network can be used jointly to minimize the azimuthal gap. We identified earthquake events manually and carefully, and then picked arrival times of P and S waves. The data from the DOMERAPI seismic network were combined with the BMKG data catalogue to determine earthquake events outside the Merapi region. For future work, we will also use the BPPTKG (Center for Research and Development of Geological Disaster Technology) data catalogue in order to study shallow structures beneath the Merapi volcano. The application of all data catalogues will provide good information as input for further advanced studies and volcano hazards mitigation.
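
    The azimuthal-gap criterion used above (events outside the network exceed 180°) is computed directly from the station azimuths seen from the epicenter; a minimal sketch:

```python
def azimuthal_gap(azimuths_deg):
    """Largest angular gap (degrees) between consecutive station azimuths
    as seen from the epicenter; a gap > 180 indicates one-sided coverage,
    i.e. the event lies outside the network."""
    az = sorted(a % 360.0 for a in azimuths_deg)
    gaps = [b - a for a, b in zip(az, az[1:])]
    gaps.append(360.0 - az[-1] + az[0])  # wrap-around gap through north
    return max(gaps)

# Stations covering three quadrants leave a 180-degree hole:
gap = azimuthal_gap([0, 90, 180])
```

    Adding stations from a second network (as done here with BMKG) fills the largest gap and brings the value below 180°.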

  18. Comparison of SISEC code simulations with earthquake data of ordinary and base-isolated buildings

    International Nuclear Information System (INIS)

    Wang, C.Y.; Gvildys, J.

    1991-01-01

    At Argonne National Laboratory (ANL), a 3-D computer program SISEC (Seismic Isolation System Evaluation Code) is being developed for simulating the system response of isolated and ordinary structures (Wang et al. 1991). This paper describes comparisons of SISEC code simulations with building response data from actual earthquakes. To ensure the accuracy of the analytical simulations, recorded data from full-size reinforced concrete structures located in Sendai, Japan, are used in this benchmark comparison. The test structures consist of two three-story buildings, one base-isolated and the other conventionally founded. They were constructed side by side to investigate the effect of base isolation on the acceleration response. Among 20 earthquakes observed since April 1989, complete records of three representative earthquakes, no. 2, no. 6, and no. 17, are used for the code validation presented in this paper. Correlations of observed and calculated accelerations at all instrument locations are made. Also, the relative response characteristics of the ordinary and isolated building structures are investigated. (J.P.N.)

  19. Evaluation of Seismic Rupture Models for the 2011 Tohoku-Oki Earthquake Using Tsunami Simulation

    Directory of Open Access Journals (Sweden)

    Ming-Da Chiou

    2013-01-01

    Developing a realistic, three-dimensional rupture model of a large offshore earthquake is difficult to accomplish directly through band-limited ground-motion observations. A potential indirect method is to use a tsunami simulation to verify the rupture model in reverse, because the initial condition of the associated tsunami is the coseismic seafloor displacement, which correlates with the rupture pattern along the main faulting. In this study, five well-developed rupture models for the 2011 Tohoku-Oki earthquake were adopted to evaluate differences in the simulated tsunamis and the various rupture asperities. The leading wave of the simulated tsunamis triggered by the seafloor displacement in the Yamazaki et al. (2011) model resulted in the smallest root-mean-squared difference (~0.082 m on average) from the records of the eight DART (Deep-ocean Assessment and Reporting of Tsunamis) stations. This indicates that the main seismic rupture during the 2011 Tohoku earthquake occurred as large shallow slip in a narrow zone adjacent to the Japan trench. This study also quantified the influences of ocean stratification and tides, which are normally overlooked in tsunami simulations. The discrepancy between the simulations with and without stratification was less than 5% of the first peak wave height at the eight DART stations. The simulations run with and without tides resulted in a ~1% discrepancy in the height of the leading wave. Because simulations accounting for tides and stratification are time-consuming and their influences are negligible, particularly in the first tsunami wave, the two factors can be ignored in a tsunami prediction for practical purposes.
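
    The misfit measure used above, the root-mean-squared difference between simulated and recorded waveforms, is straightforward to compute; a small sketch with hypothetical sample values (not actual DART data):

```python
import math

def rmsd(sim, obs):
    """Root-mean-square difference between a simulated and an observed
    waveform sampled at the same times (a common tsunami misfit metric)."""
    if len(sim) != len(obs):
        raise ValueError("series must have equal length")
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(sim))

# Hypothetical leading-wave samples at one station (metres):
observed  = [0.00, 0.05, 0.18, 0.31, 0.22, 0.08]
simulated = [0.00, 0.06, 0.15, 0.33, 0.20, 0.10]
misfit = rmsd(simulated, observed)
```

    In the study this quantity, averaged over the eight DART stations, is what ranks the five candidate rupture models.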

  20. Network Simulation of Technical Architecture

    National Research Council Canada - National Science Library

    Cave, William

    1998-01-01

    ..., and development of the Army Battle Command System (ABCS). PSI delivered a hierarchical iconic modeling facility that can be used to structure and restructure both models and scenarios, interactively, while simulations are running...

  1. Remotely Triggered Earthquakes Recorded by EarthScope's Transportable Array and Regional Seismic Networks: A Case Study Of Four Large Earthquakes

    Science.gov (United States)

    Velasco, A. A.; Cerda, I.; Linville, L.; Kilb, D. L.; Pankow, K. L.

    2013-05-01

    Changes in field stress required to trigger earthquakes have been classified in two basic ways: static and dynamic triggering. Static triggering occurs when an earthquake that releases accumulated strain along a fault stress-loads a nearby fault. Dynamic triggering occurs when an earthquake is induced by the passing of seismic waves from a large mainshock located two or more fault lengths away from the triggered event. We investigate details of dynamic triggering using data collected from EarthScope's USArray and regional seismic networks located in the United States. Triggered events are identified using an optimized automated detector based on the ratio of short-term to long-term average (Antelope software). Following the automated processing, the flagged waveforms are individually analyzed, in both the time and frequency domains, to determine whether the increased detection rates correspond to local earthquakes (i.e., potentially remotely triggered aftershocks). Here, we show results using this automated scheme applied to data from four large, but characteristically different, earthquakes: Chile (Mw 8.8, 2010), Tohoku-Oki (Mw 9.0, 2011), Baja California (Mw 7.2, 2010) and Wells, Nevada (Mw 6.0, 2008). For each of our four mainshocks, the number of detections within the 10-hour time windows spans a large range (1 to over 200), and statistically >20% of the waveforms show evidence of anomalous signals following the mainshock. The results will help provide a better understanding of the physical mechanisms involved in dynamic earthquake triggering and will help identify zones in the continental U.S. that may be more susceptible to dynamic earthquake triggering.
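
    A minimal version of the short-term/long-term average ratio test that underlies such automated detectors can be sketched as follows; window lengths, threshold, and the synthetic trace are illustrative assumptions, not the study's Antelope configuration:

```python
def sta_lta_trigger(trace, n_sta, n_lta, threshold):
    """Return the first sample index where the ratio of the short-term
    average to the long-term average of |amplitude| exceeds the
    threshold, or None if no trigger occurs."""
    for i in range(n_lta, len(trace)):
        sta = sum(abs(x) for x in trace[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in trace[i - n_lta:i]) / n_lta
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

# Synthetic trace: low-level noise, then an "event" burst at sample 200.
trace = [0.01] * 200 + [1.0] * 20 + [0.01] * 80
onset = sta_lta_trigger(trace, n_sta=5, n_lta=50, threshold=5.0)
```

    Production detectors use recursive running averages for speed, but the ratio test is the same idea.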

  2. Earthquake source imaging by high-resolution array analysis at regional distances: the 2010 M7 Haiti earthquake as seen by the Venezuela National Seismic Network

    Science.gov (United States)

    Meng, L.; Ampuero, J. P.; Rendon, H.

    2010-12-01

    Back projection of teleseismic waves based on array processing has become a popular technique for earthquake source imaging, in particular to track the areas of the source that generate the strongest high-frequency radiation. The technique has previously been applied to study the rupture process of the Sumatra earthquake and the supershear rupture of the Kunlun earthquake. Here we attempt to image the Haiti earthquake using data recorded by the Venezuela National Seismic Network (VNSN). The network is composed of 22 broad-band stations with an east-west oriented geometry, and is located approximately 10 degrees away from Haiti in the direction perpendicular to the Enriquillo fault strike. This is the first opportunity to exploit the privileged position of the VNSN to study large earthquake ruptures in the Caribbean region. It is also a great opportunity to explore back projection of the crustal Pn phase at regional distances, which provides unique complementary insights to teleseismic source inversions. The challenge in the analysis of the 2010 M7.0 Haiti earthquake is its very compact source region, possibly shorter than 30 km, which is below the resolution limit of standard back projection techniques based on beamforming. Results of back projection analysis using teleseismic USArray data reveal little detail of the rupture process. To overcome the classical resolution limit we explored the MUltiple SIgnal Classification (MUSIC) method, a high-resolution array processing technique based on the signal-noise orthogonality in the eigenspace of the data covariance, which achieves both enhanced resolution and a better ability to resolve closely spaced sources. We experiment with various synthetic earthquake scenarios to test the resolution. We find that MUSIC provides at least 3 times higher resolution than beamforming. We also study the inherent bias due to the interferences of coherent Green’s functions, which leads to a potential quantification
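
    The essence of back projection, stacking each station's trace at the travel time predicted for a candidate source and picking the brightest grid point, can be sketched as a grid search. This is plain beamforming, not MUSIC, and the geometry, velocity, and impulse traces below are hypothetical:

```python
import math

def back_project(stations, traces, grid, velocity, dt):
    """Stack each station's trace at the travel time predicted for every
    candidate source; return the candidate with the largest stack."""
    best, best_power = None, -math.inf
    for gx, gy in grid:
        power = 0.0
        for (sx, sy), trace in zip(stations, traces):
            t = math.dist((gx, gy), (sx, sy)) / velocity
            k = int(round(t / dt))
            if 0 <= k < len(trace):
                power += trace[k]
        if power > best_power:
            best, best_power = (gx, gy), power
    return best

# Hypothetical setup: impulse source at (30, 40) km, c = 6 km/s, dt = 0.1 s.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
src, c, dt = (30.0, 40.0), 6.0, 0.1
traces = []
for s in stations:
    tr = [0.0] * 400
    tr[int(round(math.dist(src, s) / c / dt))] = 1.0  # unit P arrival
    traces.append(tr)
grid = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
located = back_project(stations, traces, grid, c, dt)
```

    MUSIC replaces the simple stack with an eigendecomposition of the data covariance matrix, which is what buys the extra resolution reported above.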

  3. Reprocessing process simulation network; PRONET

    International Nuclear Information System (INIS)

    Mitsui, T.; Takada, H.; Kamishima, N.; Tsukamoto, T.; Harada, N.; Fujita, N.; Gonda, K.

    1991-01-01

    The effectiveness of simulation technology and its wide applicability to nuclear fuel reprocessing plants have been recognized recently. The principal aim of applying simulation is to predict process behavior accurately based on the quantitative relations among substances in physical and chemical phenomena. Mitsubishi Heavy Industries Ltd. has been actively engaged in the development and application study of this technology. The software products of these recent activities are summarized in an integrated form named 'PRONET'. PRONET is classified into two independent software groups from the viewpoint of the computer system: the off-line Process Simulation Group, called the 'PRONET System', and the Dynamic Real-time Simulator Group, called the 'PRONET Simulator'. These have several subsystems with the prefix 'MR', meaning Mitsubishi Reprocessing Plant. Each MR subsystem is explained in this report. The technical background, the objective of PRONET, the system and functions of PRONET, the future application to an on-line real-time simulator, and the development of MR EXPERT are described. (K.I.)

  4. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake was chosen as the target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We adopt the 2-D spatially correlated Dc distributions of Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. Fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults, and small variations of Dc represent mature faults. Moreover, we impose a taper on (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. Fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments produces strong variations of stress. In fact, Oglesby and Mai (2012) show that the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  5. Probabilistic neural network algorithm for using radon emanations as an earthquake precursor

    International Nuclear Information System (INIS)

    Gupta, Dhawal; Shahani, D.T.

    2014-01-01

    Investigations throughout the world over the past two decades provide evidence that significant variations of radon and other soil gases occur in association with major geophysical events such as earthquakes. The traditional statistical algorithm uses regression to remove the effect of meteorological parameters from the raw radon signal, and anomalies are calculated either from the periodicity of seasonal variations or from a periodicity computed using the Fast Fourier Transform. With neural networks the regression step is avoided: a model can be found which learns the behavior of radon with respect to the meteorological parameters, so that changing emission patterns may be adapted to by the model on its own. The output of this neural model is the estimated radon value, which is used to decide whether anomalous behavior of radon has occurred and a valid precursor may be identified. A neural network model developed using a Radial Basis Function (RBF) network gave a prediction rate of 87.7%, but was accompanied by many false alarms. The present paper deals with an improved neural network algorithm using Probabilistic Neural Networks that requires neither an explicit regression step nor the use of any specific period. This model reduces the false alarms to zero and gives the same prediction rate as the RBF network. (author)
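
    A probabilistic neural network is essentially a Parzen-window classifier: one Gaussian kernel per training pattern, summed per class, with the largest class density winning. A minimal sketch with hypothetical two-dimensional features (not the paper's radon data):

```python
import math

def pnn_classify(x, train, sigma=1.0):
    """Probabilistic neural network: place a Gaussian Parzen kernel on
    each training pattern, sum the kernels per class, and return the
    class with the largest density estimate at x."""
    scores = {}
    for label, patterns in train.items():
        s = sum(math.exp(-sum((a - b) ** 2 for a, b in zip(x, p))
                         / (2.0 * sigma ** 2)) for p in patterns)
        scores[label] = s / len(patterns)
    return max(scores, key=scores.get)

# Hypothetical 2-D features (e.g. de-trended radon level, soil temperature):
train = {
    "normal":  [(0.1, 0.2), (0.0, -0.1), (-0.2, 0.1), (0.2, 0.0)],
    "anomaly": [(2.1, 1.9), (1.8, 2.2), (2.3, 2.0)],
}
label = pnn_classify((2.0, 2.0), train, sigma=0.5)
```

    Unlike the RBF approach described above, no regression stage or fixed period is needed: the stored patterns themselves define the decision surface, with only the kernel width sigma to tune.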

  6. Simulation of the earthquake-induced collapse of a school building in Turkey in 2011 Van Earthquake

    NARCIS (Netherlands)

    Bal, Ihsan Engin; Smyrou, Eleni

    2016-01-01

    Collapses of school or dormitory buildings experienced in recent earthquakes raise the issue of safety as a major challenge for decision makers. A school building is ‘just another structure’ technically speaking; however, the consequences of a collapse in an earthquake could lead to social reactions

  7. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    Science.gov (United States)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

    Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems in which the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator), the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: first, the quasi-static loading phase which gradually increases stress in the system (~100 years), and second, the dynamic rupture process which rapidly redistributes stress in the system (~100 seconds). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.

  8. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    Science.gov (United States)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As research in quantum key distribution (QKD) technology grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. Because a QKD link requires both optical and Internet connections between the network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a certain network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of the QKD network simulation module, which was developed for the network simulator NS-3. The module supports simulation of a QKD network in an overlay mode or in a single TCP/IP mode; it can therefore also be used to simulate other network technologies regardless of QKD.

  9. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to support proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
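
    For intuition, the two-terminal reliability of a small network can be computed exactly without simulation by enumerating every edge state. This brute-force baseline is conceptually what the Recursive Decomposition Algorithm accelerates; it is not the RDA itself, and the pipe probabilities below are illustrative:

```python
from itertools import product

def two_terminal_reliability(n_nodes, edges, probs, s, t):
    """Exact probability that source s can reach terminal t, found by
    enumerating all 2^|E| up/down edge states (fine for small networks)."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        p = 1.0
        for up, pe in zip(states, probs):
            p *= pe if up else (1.0 - pe)
        # Depth-first search over the surviving edges.
        adj = {i: [] for i in range(n_nodes)}
        for up, (u, v) in zip(states, edges):
            if up:
                adj[u].append(v)
                adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if t in seen:
            total += p
    return total

# Two pipes in series vs. in parallel, each surviving with p = 0.9:
series   = two_terminal_reliability(3, [(0, 1), (1, 2)], [0.9, 0.9], 0, 2)
parallel = two_terminal_reliability(2, [(0, 1), (0, 1)], [0.9, 0.9], 0, 1)
```

    The RDA avoids this exponential enumeration by recursively decomposing the system event into disjoint sub-events, which is what makes realistic lifeline networks tractable.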

  10. ASSESSING URBAN STREETS NETWORK VULNERABILITY AGAINST EARTHQUAKE USING GIS – CASE STUDY: 6TH ZONE OF TEHRAN

    OpenAIRE

    A. Rastegar

    2017-01-01

    Great earthquakes cause huge damage to human life. Street network vulnerability causes rescue operations to encounter serious difficulties, especially in the first 72 hours after the incident. Today, the physical expansion and high density of great cities, with their narrow access roads, large distances from medical care centers, and locations in areas of high seismic risk, lead to a perilous and unpredictable situation in case of an earthquake. Zone # 6 of Tehran, with 229,980 population ...

  11. Kinematic Earthquake Ground‐Motion Simulations on Listric Normal Faults

    KAUST Repository

    Passone, Luca

    2017-11-28

    Complex finite-faulting source processes have important consequences for near-source ground motions, but empirical ground-motion prediction equations still lack near-source data and hence cannot fully capture near-fault shaking effects. Using a simulation-based approach, we study the effects of specific source parameterizations on near-field ground motions where empirical data are limited. Here, we investigate the effects of fault listricity through near-field kinematic ground-motion simulations. Listric faults are defined as curved faults in which dip decreases with depth, resulting in a concave upward profile. The listric profiles used in this article are built by applying a specific shape function and varying the initial dip and the degree of listricity. Furthermore, we consider variable rupture speed and slip distribution to generate ensembles of kinematic source models. These ensembles are then used in a generalized 3D finite-difference method to compute synthetic seismograms; the corresponding shaking levels are then compared in terms of peak ground velocities (PGVs) to quantify the effects of breaking fault planarity. Our results show two general features: (1) as listricity increases, the PGVs decrease on the footwall and increase on the hanging wall, and (2) constructive interference of seismic waves emanating from the listric fault causes PGVs over two times higher than those observed for the planar fault. Our results are relevant for seismic hazard assessment for near-fault areas for which observations are scarce, such as the listric Campotosto fault (Italy), located in an active seismic area under a dam.

  13. Time-history simulation of civil architecture earthquake disaster relief- based on the three-dimensional dynamic finite element method

    Directory of Open Access Journals (Sweden)

    Liu Bing

    2014-10-01

    Earthquake action is the main external factor that influences the long-term safe operation of civil construction, especially of high-rise buildings. Applying the time-history method to simulate the earthquake response of a civil construction foundation and its surrounding rock is an effective approach for the seismic study of civil buildings. Therefore, this paper develops a three-dimensional dynamic finite element numerical simulation system for civil building earthquake disaster. The system adopts the explicit central difference method. The strengthening characteristics of materials under high strain rates and the damage characteristics of surrounding rock under cyclic loading are considered. A dynamic constitutive model of rock mass suitable for the seismic analysis of civil buildings is then put forward. Through a time-history simulation of the earthquake response of the Shenzhen Children’s Palace, the reliability and practicality of the program are verified in the analysis of practical engineering problems.
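
    The explicit central difference scheme the system adopts can be illustrated on a single-degree-of-freedom oscillator; this is a textbook sketch under simplified assumptions (undamped, linear), not the program's actual solver:

```python
import math

def central_difference(m, k, x0, v0, dt, n_steps, force=None):
    """Explicit central-difference integration of m*x'' + k*x = f(t):
    x_{n+1} = 2*x_n - x_{n-1} + dt^2 * a_n, with a fictitious step
    x_{-1} built from the initial conditions."""
    f = force or (lambda t: 0.0)
    a0 = (f(0.0) - k * x0) / m
    x_prev = x0 - dt * v0 + 0.5 * dt * dt * a0   # fictitious x_{-1}
    xs = [x0]
    x = x0
    for i in range(n_steps):
        a = (f(i * dt) - k * x) / m              # acceleration at t_i
        x_next = 2.0 * x - x_prev + dt * dt * a
        x_prev, x = x, x_next
        xs.append(x)
    return xs

# Free vibration check: x(t) = cos(w*t) with natural period T = 1 s.
m, k = 1.0, (2.0 * math.pi) ** 2
xs = central_difference(m, k, x0=1.0, v0=0.0, dt=0.001, n_steps=1000)
```

    Being explicit, the scheme is only conditionally stable (dt must be below 2/w), which is why such simulators use small time steps; in exchange, no equation system needs to be solved per step.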

  14. The Airport Network Flow Simulator.

    Science.gov (United States)

    1976-05-01

    The impact of investment at an individual airport is felt throughout the National Airport System by the reduction of delays at other airports in the system. A GPSS model was constructed to simulate the propagation of delays through a nine-airport sy...

  15. Swedish National Seismic Network (SNSN). A short report on recorded earthquakes during the fourth quarter of the year 2010

    Energy Technology Data Exchange (ETDEWEB)

    Boedvarsson, Reynir (Uppsala Univ. (Sweden), Dept. of Earth Sciences)

    2011-01-15

    According to an agreement between the Swedish Nuclear Fuel and Waste Management Company (SKB) and Uppsala University, the Department of Earth Sciences has continued to carry out observations of seismic events at seismic stations within the Swedish National Seismic Network (SNSN). This short report gives brief information about the recorded seismicity during October through December 2010. The Swedish National Seismic Network consists of 62 stations. During October through December, 2,241 events were located, of which 158 are estimated to be real earthquakes, 1,457 are estimated to be explosions, 444 are induced earthquakes in the vicinity of the mines in Kiruna and Malmberget, and 182 events are still considered uncertain; these are most likely explosions and are mainly located outside the network. One earthquake had a magnitude above ML = 2.0 during the period. In November an earthquake was located 13 km SW of Haernoesand with a magnitude of ML = 2.1. The largest earthquake in October had a magnitude of ML = 1.7 and was located 12 km NE of Eksjoe, and in December an earthquake with a magnitude of ML = 1.8 was located 19 km north of Motala.

  16. Simulating subduction zone earthquakes using discrete element method: a window into elusive source processes

    Science.gov (United States)

    Blank, D. G.; Morgan, J.

    2017-12-01

    Large earthquakes that occur on convergent plate margin interfaces have the potential to cause widespread damage and loss of life. Recent observations reveal that a wide range of different slip behaviors take place along these megathrust faults, which demonstrate both their complexity, and our limited understanding of fault processes and their controls. Numerical modeling provides us with a useful tool that we can use to simulate earthquakes and related slip events, and to make direct observations and correlations among properties and parameters that might control them. Further analysis of these phenomena can lead to a more complete understanding of the underlying mechanisms that accompany the nucleation of large earthquakes, and what might trigger them. In this study, we use the discrete element method (DEM) to create numerical analogs to subduction megathrusts with heterogeneous fault friction. Displacement boundary conditions are applied in order to simulate tectonic loading, which in turn, induces slip along the fault. A wide range of slip behaviors are observed, ranging from creep to stick slip. We are able to characterize slip events by duration, stress drop, rupture area, and slip magnitude, and to correlate the relationships among these quantities. These characterizations allow us to develop a catalog of rupture events both spatially and temporally, for comparison with slip processes on natural faults.
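
    The simplest numerical analog of the stick-slip cataloging described above is a one-dimensional spring-slider with static and dynamic friction. The parameters below are arbitrary illustrative values, and the model is far simpler than the paper's discrete element simulations:

```python
def spring_slider(k, v_load, mu_s, mu_d, normal, n_steps, dt):
    """Quasi-static spring-slider analog of stick slip: the load point
    advances at v_load; when the spring force reaches the static
    friction threshold, the block slips until the force drops to the
    dynamic level. Returns a catalog of (step, slip, stress_drop)."""
    x_block = 0.0
    events = []
    for i in range(n_steps):
        load = (i + 1) * dt * v_load
        force = k * (load - x_block)
        if force >= mu_s * normal:
            drop = (mu_s - mu_d) * normal      # stress drop of the event
            slip = drop / k                    # slip that relaxes it
            x_block += slip
            events.append((i, slip, drop))
    return events

# Illustrative run: steady loading produces a periodic event catalog.
catalog = spring_slider(k=1.0, v_load=1.0, mu_s=1.0, mu_d=0.5,
                        normal=1.0, n_steps=500, dt=0.01)
```

    With homogeneous friction the catalog is perfectly periodic; the heterogeneous fault friction used in the DEM study is precisely what breaks this regularity and produces the range of slip behaviors, stress drops, and rupture areas described above.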

  17. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of these extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow study of extreme events and of the influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for the mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

  18. Rainfall and earthquake-induced landslide susceptibility assessment using GIS and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Y. Li

    2012-08-01

    Full Text Available A GIS-based method for the assessment of landslide susceptibility in a selected area of Qingchuan County in China is proposed by using the back-propagation Artificial Neural Network (ANN) model. The landslide inventory was derived from field investigation and aerial photo interpretation. 473 landslides that occurred before the Wenchuan earthquake (taken as rainfall-induced landslides, RIL, in this study) and 885 earthquake-induced landslides (EIL) were recorded in the landslide inventory map. To understand the different impacts of rainfall and earthquake on landslide occurrence, we first compared the variations between landslide spatial distribution and conditioning factors. Then, we compared the weight variation of each conditioning factor derived by adjusting the ANN structure and the factor combinations, respectively. Last, the weight of each factor derived from the best prediction model was applied to the entire study area to produce landslide susceptibility maps.

    Results show that slope gradient has the highest weight for landslide susceptibility mapping for both RIL and EIL. The RIL model built with four different factors (slope gradient, elevation, slope height and distance to the stream) shows the best success rate of 93%; the EIL model built with five different factors (slope gradient, elevation, slope height, distance to the stream and distance to the fault) has the best success rate of 98%. Furthermore, the EIL data was used to verify the RIL model and the success rate is 92%; the RIL data was used to verify the EIL model and the success rate is 53%.
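
    The back-propagation workflow can be sketched with a toy single-hidden-layer network trained on synthetic "conditioning factors" (the factor names, data, and network size below are invented for illustration; this is not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic conditioning factors scaled to [0, 1]: slope gradient, elevation,
# slope height, distance to stream; the label mainly follows slope gradient.
n = 2000
X = rng.random((n, 4))
y = (X[:, 0] + 0.2 * rng.standard_normal(n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained by back-propagation on mean cross-entropy loss.
h = 8
W1 = rng.standard_normal((4, h)) * 0.5
b1 = np.zeros(h)
W2 = rng.standard_normal(h) * 0.5
b2 = 0.0
lr = 0.5
for epoch in range(2000):
    A1 = sigmoid(X @ W1 + b1)      # hidden activations
    p = sigmoid(A1 @ W2 + b2)      # predicted susceptibility
    d2 = (p - y) / n               # output-layer error term
    gW2, gb2 = A1.T @ d2, d2.sum()
    d1 = np.outer(d2, W2) * A1 * (1 - A1)   # back-propagated error
    gW1, gb1 = X.T @ d1, d1.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

success = float(np.mean((p > 0.5) == (y > 0.5)))
print(round(success, 2))
```

    In the paper's setting, the trained weights per factor would then be applied cell-by-cell over the study-area rasters to map susceptibility.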

  19. The Extraction of Post-Earthquake Building Damage Information Based on Convolutional Neural Network

    Science.gov (United States)

    Chen, M.; Wang, X.; Dou, A.; Wu, X.

    2018-04-01

    The seismic damage information of buildings extracted from remote sensing (RS) imagery is valuable for supporting relief efforts and reducing the losses caused by earthquakes. Both traditional pixel-based and object-oriented methods have shortcomings in extracting object information: pixel-based methods cannot make full use of the contextual information of objects, while object-oriented methods face the problems that image segmentation is rarely ideal and the choice of feature space is difficult. In this paper, a new strategy is proposed that combines a Convolutional Neural Network (CNN) with imagery segmentation to extract building damage information from remote sensing imagery. The key idea of this method comprises two steps: first, use the CNN to predict the damage probability of each pixel; then, integrate the probabilities within each segmentation spot. The method is tested by extracting collapsed and uncollapsed buildings from aerial imagery acquired over Longtoushan Town after the Ms 6.5 Ludian County, Yunnan Province earthquake. The results demonstrate the effectiveness of the proposed method in extracting building damage information after an earthquake.
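
    The second step, integrating per-pixel probabilities over segmentation spots, can be sketched as follows (a hypothetical illustration, not the authors' code; the probability map and segment ids are toy data):

```python
import numpy as np

def label_segments(prob_map, segments, threshold=0.5):
    """prob_map: HxW float array of per-pixel collapse probabilities (e.g. from
    a CNN); segments: HxW int array of segmentation-spot ids.
    Returns {segment_id: (mean_probability, 'collapsed' or 'intact')}."""
    out = {}
    for seg_id in np.unique(segments):
        mean_p = float(prob_map[segments == seg_id].mean())
        out[seg_id] = (mean_p, "collapsed" if mean_p >= threshold else "intact")
    return out

# Toy 4x4 scene with two segmentation spots (left and right halves).
prob = np.array([[0.9, 0.8, 0.1, 0.2],
                 [0.7, 0.9, 0.2, 0.1],
                 [0.8, 0.6, 0.1, 0.3],
                 [0.9, 0.7, 0.2, 0.2]])
segs = np.array([[1, 1, 2, 2]] * 4)
labels = label_segments(prob, segs)
print(labels)   # segment 1 -> collapsed, segment 2 -> intact
```

    Averaging within spots suppresses isolated per-pixel misclassifications, which is the stated advantage over purely pixel-based labeling.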

  20. Neural network based tomographic approach to detect earthquake-related ionospheric anomalies

    Directory of Open Access Journals (Sweden)

    S. Hirooka

    2011-08-01

    Full Text Available A tomographic approach is used to investigate the fine structure of electron density in the ionosphere. In the present paper, the Residual Minimization Training Neural Network (RMTNN method is selected as the ionospheric tomography with which to investigate the detailed structure that may be associated with earthquakes. The 2007 Southern Sumatra earthquake (M = 8.5 was selected because significant decreases in the Total Electron Content (TEC have been confirmed by GPS and global ionosphere map (GIM analyses. The results of the RMTNN approach are consistent with those of TEC approaches. With respect to the analyzed earthquake, we observed significant decreases at heights of 250–400 km, especially at 330 km. However, the height that yields the maximum electron density does not change. In the obtained structures, the regions of decrease are located on the southwest and southeast sides of the Integrated Electron Content (IEC (altitudes in the range of 400–550 km and on the southern side of the IEC (altitudes in the range of 250–400 km. The global tendency is that the decreased region expands to the east with increasing altitude and concentrates in the Southern hemisphere over the epicenter. These results indicate that the RMTNN method is applicable to the estimation of ionospheric electron density.

  1. Insuring against earthquakes: simulating the cost-effectiveness of disaster preparedness.

    Science.gov (United States)

    de Hoop, Thomas; Ruben, Ruerd

    2010-04-01

    Ex-ante measures to improve risk preparedness for natural disasters are generally considered to be more effective than ex-post measures. Nevertheless, most resources are allocated after an event in geographical areas that are vulnerable to natural disasters. This paper analyses the cost-effectiveness of ex-ante adaptation measures in the wake of earthquakes and provides an assessment of the future role of private and public agencies in disaster risk management. The study uses a simulation model approach to evaluate consumption losses after earthquakes under different scenarios of intervention. Particular attention is given to the role of activity diversification measures in enhancing disaster preparedness and the contributions of (targeted) microcredit and education programmes for reconstruction following a disaster. Whereas the former measures are far more cost-effective, missing markets and perverse incentives tend to make ex-post measures a preferred option, thus occasioning underinvestment in ex-ante adaptation initiatives.
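
    The flavor of such a simulation can be sketched with a toy Monte Carlo model (all probabilities, loss fractions, and costs below are invented for illustration; the paper's model is far richer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Households with lognormal income face a quake that destroys 40% of income.
n, p_quake = 100_000, 0.05
income = rng.lognormal(mean=0.0, sigma=0.5, size=n)

def expected_loss(protection, premium):
    """Mean consumption loss per household: quake losses reduced by
    `protection`, plus the measure's annual cost `premium` (income fractions)."""
    hit = rng.random(n) < p_quake
    loss = np.where(hit, 0.4 * (1 - protection) * income, 0.0)
    return float(np.mean(loss + premium * income))

base = expected_loss(protection=0.0, premium=0.0)
ex_ante = expected_loss(protection=0.5, premium=0.005)   # e.g. diversification
ex_post = expected_loss(protection=0.3, premium=0.005)   # e.g. credit, same budget
print(base > ex_ante and ex_ante < ex_post)
```

    Under these assumed numbers the ex-ante measure dominates for the same budget, mirroring the paper's qualitative finding; the interesting question it raises is why underinvestment in ex-ante measures persists anyway.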

  2. GIS-BASED SYSTEM FOR POST-EARTHQUAKE CRISIS MANAGEMENT USING CELLULAR NETWORK

    OpenAIRE

    Raeesi, M.; Sadeghi-Niaraki, A.

    2013-01-01

    Earthquakes are among the most destructive natural disasters. They happen mainly near the edges of tectonic plates, but may occur almost anywhere, and they cannot be predicted. A quick response after a disaster such as an earthquake decreases loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed parts of an area, several teams are sent to find the location of the d...

  3. Conditional Probabilities of Large Earthquake Sequences in California from the Physics-based Rupture Simulator RSQSim

    Science.gov (United States)

    Gilchrist, J. J.; Jordan, T. H.; Shaw, B. E.; Milner, K. R.; Richards-Dinger, K. B.; Dieterich, J. H.

    2017-12-01

    Within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM), we are developing physics-based forecasting models for earthquake ruptures in California. We employ the 3D boundary element code RSQSim (Rate-State Earthquake Simulator of Dieterich & Richards-Dinger, 2010) to generate synthetic catalogs with tens of millions of events that span up to a million years each. This code models rupture nucleation by rate- and state-dependent friction and Coulomb stress transfer in complex, fully interacting fault systems. The Uniform California Earthquake Rupture Forecast Version 3 (UCERF3) fault and deformation models are used to specify the fault geometry and long-term slip rates. We have employed the Blue Waters supercomputer to generate long catalogs of simulated California seismicity from which we calculate the forecasting statistics for large events. We have performed probabilistic seismic hazard analysis with RSQSim catalogs that were calibrated with system-wide parameters and found a remarkably good agreement with UCERF3 (Milner et al., this meeting). We build on this analysis, comparing the conditional probabilities of sequences of large events from RSQSim and UCERF3. In making these comparisons, we consider the epistemic uncertainties associated with the RSQSim parameters (e.g., rate- and state-frictional parameters), as well as the effects of model-tuning (e.g., adjusting the RSQSim parameters to match UCERF3 recurrence rates). The comparisons illustrate how physics-based rupture simulators might assist forecasters in understanding the short-term hazards of large aftershocks and multi-event sequences associated with complex, multi-fault ruptures.

  4. Stochastic simulation of karst conduit networks

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Dowd, Peter A.; Xu, Chaoshui; Durán-Valsero, Juan José

    2012-01-01

    Karst aquifers have very high spatial heterogeneity. Essentially, they comprise a system of pipes (i.e., the network of conduits) superimposed on rock porosity and on a network of stratigraphic surfaces and fractures. This heterogeneity strongly influences the hydraulic behavior of the karst and it must be reproduced in any realistic numerical model of the karst system that is used as input to flow and transport modeling. However, the directly observed karst conduits are only a small part of the complete karst conduit system, and knowledge of the complete conduit geometry and topology remains spatially limited and uncertain. Thus, there is a special interest in the stochastic simulation of networks of conduits that can be combined with fracture and rock porosity models to provide a realistic numerical model of the karst system. Furthermore, the simulated model may be of interest per se and other uses could be envisaged. The purpose of this paper is to present an efficient method for conditional and non-conditional stochastic simulation of karst conduit networks. The method comprises two stages: generation of conduit geometry and generation of topology. The approach adopted is a combination of a resampling method for generating conduit geometries from templates and a modified diffusion-limited aggregation method for generating the network topology. The authors show that the 3D karst conduit networks generated by the proposed method are statistically similar to observed karst conduit networks or to a hypothesized network model. The statistical similarity is in the sense of reproducing the tortuosity index of conduits, the fractal dimension of the network, the rose diagram of conduit directions, the Z-histogram, and Ripley's K-function of the bifurcation points (which differs from a random allocation of those bifurcation points). The proposed method (1) is very flexible, (2) incorporates any experimental data (conditioning information) and (3) can easily be modified when
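
    The topology-generation idea can be illustrated with a minimal diffusion-limited aggregation (DLA) sketch on a 2D grid, a much-simplified cousin of the modified DLA the authors use (grid size, particle count, and sticking rule are arbitrary choices here):

```python
import random

def dla(n_particles=60, size=41, seed=7):
    """Grow a DLA cluster: random walkers stick when they touch the cluster."""
    random.seed(seed)
    c = size // 2
    cluster = {(c, c)}                       # seed site at the grid center
    while len(cluster) < n_particles:
        x, y = random.randrange(size), random.randrange(size)
        if (x, y) in cluster:
            continue                         # spawn again
        while True:
            nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            if any(nb in cluster for nb in nbrs):
                cluster.add((x, y))          # walker sticks to the cluster
                break
            x += random.choice((-1, 0, 1))   # otherwise keep diffusing
            y += random.choice((-1, 0, 1))
            if not (0 <= x < size and 0 <= y < size):
                break                        # walker left the grid; release a new one
    return cluster

print(len(dla()))   # 60
```

    The resulting branched, tortuous clusters are what make DLA a natural template for conduit-network topology; the authors' modification additionally conditions growth on observed conduits.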

  5. Assessing Urban Streets Network Vulnerability against Earthquake Using GIS - Case Study: 6TH Zone of Tehran

    Science.gov (United States)

    Rastegar, A.

    2017-09-01

    Great earthquakes cause enormous damage and loss of life. Street-network vulnerability makes rescue operations encounter serious difficulties, especially in the first 72 hours after the event. The physical expansion and high density of large cities, with narrow access roads, long distances to medical care centers, and locations in areas of high seismic risk, lead to a perilous and unpredictable situation in the case of an earthquake. Zone #6 of Tehran, with a population of 229,980 (3.6% of the city population) and an area of 20 km2 (3.2% of the city area), is one of the main municipal zones of Tehran (Iran Center of Statistics, 2006). Major land uses, such as ministries, embassies, universities, general hospitals and medical centers, and large financial firms, show the high importance of this region on the local and national scale. In this paper, by employing indexes such as access to medical centers, street inclusion, building and population density, land use, PGA and building quality, the vulnerability of the street network in Zone #6 against earthquakes is calculated by overlaying maps and data in combination with the IHWP method and GIS. The article concludes that buildings alongside streets with high population and building density, low building quality, long distance to rescue centers and a high level of inclusion show a high rate of vulnerability compared with other buildings. The vulnerability also increases from the north to the south of the zone. Likewise, highways and streets with substantial width and low building and population density show little vulnerability.
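
    The map-overlay step reduces to a weighted sum of normalized raster layers. A hypothetical sketch (the layer values and weights below are invented; the paper derives its weights with the IHWP method in GIS):

```python
import numpy as np

# Each layer is normalized to [0, 1] with 1 = most vulnerable.
layers = {
    "distance_to_medical": np.array([[0.2, 0.9], [0.4, 0.8]]),
    "building_density":    np.array([[0.5, 0.7], [0.3, 0.9]]),
    "building_quality":    np.array([[0.1, 0.8], [0.2, 0.6]]),  # 1 = poor quality
    "pga":                 np.array([[0.3, 0.6], [0.3, 0.7]]),
}
# Assumed weights (must sum to 1); IHWP would supply these in the real workflow.
weights = {"distance_to_medical": 0.35, "building_density": 0.25,
           "building_quality": 0.25, "pga": 0.15}

# Weighted overlay: cell-by-cell weighted sum across layers.
vulnerability = sum(weights[k] * layers[k] for k in layers)
print(vulnerability.round(3))
```

    Cells combining high density, poor building quality, and poor access score highest, matching the qualitative conclusion of the paper.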

  6. Using an Earthquake Simulator to Model Tremor Along a Strike Slip Fault

    Science.gov (United States)

    Cochran, E. S.; Richards-Dinger, K. B.; Kroll, K.; Harrington, R. M.; Dieterich, J. H.

    2013-12-01

    We employ the earthquake simulator, RSQSim, to investigate the conditions under which tremor occurs in the transition zone of the San Andreas fault. RSQSim is a computationally efficient method that uses rate- and state- dependent friction to simulate a wide range of event sizes for long time histories of slip [Dieterich and Richards-Dinger, 2010; Richards-Dinger and Dieterich, 2012]. RSQSim has been previously used to investigate slow slip events in Cascadia [Colella et al., 2011; 2012]. Earthquakes, tremor, slow slip, and creep occurrence are primarily controlled by the rate and state constants a and b and slip speed. We will report the preliminary results of using RSQSim to vary fault frictional properties in order to better understand rupture dynamics in the transition zone using observed characteristics of tremor along the San Andreas fault. Recent studies of tremor along the San Andreas fault provide information on tremor characteristics including precise locations, peak amplitudes, duration of tremor episodes, and tremor migration. We use these observations to constrain numerical simulations that examine the slip conditions in the transition zone of the San Andreas Fault. Here, we use the earthquake simulator, RSQSim, to conduct multi-event simulations of tremor for a strike slip fault modeled on Cholame section of the San Andreas fault. Tremor was first observed on the San Andreas fault near Cholame, California near the southern edge of the 2004 Parkfield rupture [Nadeau and Dolenc, 2005]. Since then, tremor has been observed across a 150 km section of the San Andreas with depths between 16-28 km and peak amplitudes that vary by a factor of 7 [Shelly and Hardebeck, 2010]. 
Tremor episodes, composed of multiple low-frequency earthquakes (LFEs), tend to be relatively short, lasting from tens of seconds to as long as 1-2 hours [Horstmann et al., in review, 2013]; tremor occurs regularly, with some tremor observed almost daily [Shelly and Hardebeck, 2010; Horstmann

  7. Thermomechanical earthquake cycle simulations with rate-and-state friction and nonlinear viscoelasticity

    Science.gov (United States)

    Allison, K. L.; Dunham, E. M.

    2017-12-01

    We simulate earthquake cycles on a 2D strike-slip fault, modeling both rate-and-state fault friction and an off-fault nonlinear power-law rheology. The power-law rheology involves an effective viscosity that is a function of temperature and stress, and therefore varies both spatially and temporally. All phases of the earthquake cycle are simulated, allowing the model to spontaneously generate earthquakes, and to capture frictional afterslip and postseismic and interseismic viscous flow. We investigate the interaction between fault slip and bulk viscous flow, using experimentally-based flow laws for quartz-diorite in the crust and olivine in the mantle, representative of the Mojave Desert region in Southern California. We first consider a suite of three linear geotherms which are constant in time, with dT/dz = 20, 25, and 30 K/km. Though the simulations produce very different deformation styles in the lower crust, ranging from significant interseismic fault creep to purely bulk viscous flow, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. This indicates that bulk viscous flow and interseismic fault creep load the brittle crust similarly. The simulations also predict unrealistically high stresses in the upper crust, resulting from the fact that the lower crust and upper mantle are relatively weak far from the fault, and from the relatively small role that basal tractions on the base of the crust play in the force balance of the lithosphere. We also find that for the warmest model, the effective viscosity varies by an order of magnitude in the interseismic period, whereas for the cooler models it remains roughly constant. Because the rheology is highly sensitive to changes in temperature, in addition to the simulations with constant temperature we also consider the effect of heat generation.
We capture both frictional heat generation and off-fault viscous shear heating, allowing these in turn to alter the
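
    The temperature sensitivity of a power-law rheology is easy to see numerically. Given the flow law strain_rate = A * stress^n * exp(-Q/(R*T)), the effective viscosity is stress / (2 * strain_rate). The parameter values below are illustrative round numbers for a quartz-diorite-like crust, not the paper's calibrated values:

```python
import math

R = 8.314      # gas constant, J/(mol K)
A = 1.3e-3     # pre-exponential factor, MPa^-n / s (assumed)
n = 2.4        # stress exponent (assumed)
Q = 219e3      # activation energy, J/mol (assumed)

def eta_eff(stress_mpa, T_kelvin):
    """Effective viscosity (Pa s) of the power-law material at given stress/T."""
    strain_rate = A * stress_mpa**n * math.exp(-Q / (R * T_kelvin))
    return stress_mpa * 1e6 / (2.0 * strain_rate)

# A 100 K temperature change swings the viscosity by about two orders of
# magnitude at fixed stress, which is why shear heating can matter so much.
cold, warm = eta_eff(100.0, 700.0), eta_eff(100.0, 800.0)
print(cold / warm)
```

    This Arrhenius sensitivity is what makes coseismic and viscous shear heating capable of altering the interseismic effective viscosity structure.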

  8. Earthquake Monitoring: SeisComp3 at the Swiss National Seismic Network

    Science.gov (United States)

    Clinton, J. F.; Diehl, T.; Cauzzi, C.; Kaestli, P.

    2011-12-01

    The Swiss Seismological Service (SED) has an ongoing responsibility to improve the seismicity monitoring capability for Switzerland. This is a crucial issue for a country with low background seismicity but where a large M6+ earthquake is expected in the next decades. With over 30 stations at a spacing of ~25 km, the SED operates one of the densest broadband networks in the world, which is complemented by ~50 real-time strong-motion stations. The strong-motion network is expected to grow by an additional ~80 stations over the next few years. Furthermore, the backbone of the network is complemented by broadband data from surrounding countries and temporary sub-networks for local monitoring of microseismicity (e.g. at geothermal sites). The variety of seismic monitoring responsibilities, as well as the anticipated densification of our network, demands highly flexible processing software. We are transitioning all software to the SeisComP3 (SC3) framework. SC3 is fully featured, automated real-time earthquake monitoring software developed by GeoForschungsZentrum Potsdam in collaboration with its commercial partner, gempa GmbH. It is open source at its core and is becoming a community-standard software package for earthquake detection and waveform processing for regional and global networks across the globe. SC3 was originally developed for regional and global rapid monitoring of potentially tsunamigenic earthquakes. In order to fulfill the requirements of a local network recording moderate seismicity, the SED has tuned configurations and added several modules. In this contribution, we present our SC3 implementation strategy, focusing on the detection and identification of seismicity on different scales. We operate several parallel processing "pipelines" to detect and locate local, regional and global seismicity. Additional pipelines with lower detection thresholds can be defined to monitor seismicity within dense subnets of the network. To be consistent with existing processing

  9. Speeding Up Network Simulations Using Discrete Time

    OpenAIRE

    Lucas, Aaron; Armbruster, Benjamin

    2013-01-01

    We develop a way of simulating disease spread in networks faster at the cost of some accuracy. Instead of a discrete event simulation (DES) we use a discrete time simulation. This aggregates events into time periods. We prove a bound on the accuracy attained. We also discuss the choice of step size and do an analytical comparison of the computational costs. Our error bound concept comes from the theory of numerical methods for SDEs and the basic proof structure comes from the theory of numeri...
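
    The aggregation idea can be sketched for epidemic spread on a network: instead of queueing individual transmission events, each fixed step applies a per-step transmission probability 1 - exp(-beta * dt) along every infected-susceptible edge. This toy SI model (graph, rates, and step count are all illustrative) is not the authors' code:

```python
import math
import random

def discrete_time_si(adj, seed_node, beta, dt, steps, rng):
    """Discrete-time SI spread: each step, every infected node infects each
    susceptible neighbor independently with probability 1 - exp(-beta * dt)."""
    infected = {seed_node}
    p = 1.0 - math.exp(-beta * dt)       # events aggregated into one step
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < p:
                    new.add(v)
        infected |= new
    return infected

# Small test graph: a path 0-1-2-3-4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rng = random.Random(42)
final = discrete_time_si(adj, 0, beta=1.0, dt=0.5, steps=50, rng=rng)
print(sorted(final))
```

    The step size dt controls the speed/accuracy trade-off the abstract describes: larger steps mean fewer iterations but coarser aggregation of within-step event orderings.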

  10. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    Full Text Available The structure and material properties of polymer networks can depend sensitively on changes in the environment. A great deal of progress has been made in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of detail. Macroscopic theories that describe the network systems with the help of continuous fields are suited to study effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of the self-consistent field theory, which includes the interactions, connectivity, and the entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method that allows the study of the formation of nano domains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example for microscopic modeling of stimuli-sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  11. LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR

    Science.gov (United States)

    Gibson, J.

    1994-01-01

    The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and the Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze the performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6. It consists of two

  12. Wideband simulation of earthquake ground motion by a spectrum-matching, multiple-pulse technique

    International Nuclear Information System (INIS)

    Gusev, A.; Pavlov, V.

    2006-04-01

    To simulate earthquake ground motion, we combine a multiple-point stochastic earthquake fault model and a suite of Green functions. Conceptually, our source model generalizes the classic one of Haskell (1966). At any time instant, slip occurs over a narrow strip that sweeps the fault area at a (spatially variable) velocity. This behavior defines seismic signals at lower frequencies (LF), and describes directivity effects. The high-frequency (HF) behavior of the source signal is defined by the local slip history, assumed to be a short segment of pulsed noise. For calculations, this model is discretized as a grid of point subsources. Subsource moment rate time histories, in their LF part, are smooth pulses whose duration equals the rise time. In their HF part, they are segments of non-Gaussian noise of similar duration. The spectral content of the subsource time histories is adjusted so that the summary far-field signal follows a certain predetermined spectral scaling law. The results of a simulation depend on random seeds, and on particular values of parameters such as: stress drop; average and dispersion parameters for rupture velocity; rupture nucleation point; slip zone width/rise time; a wavenumber-spectrum parameter defining the final slip function; the degrees of non-Gaussianity for the random slip rate in time and for the random final slip in space; and more. To calculate ground motion at a site, Green functions are calculated for each subsource-site pair, then convolved with the subsource time functions and finally summed over subsources. The original Green function calculator for a layered, weakly inelastic medium is of the discrete-wavenumber kind, with no intrinsic limitations with respect to layer thickness or bandwidth. The simulation package can generate example motions, or be used to study uncertainties of the predicted motion.
As a test, realistic analogues of recorded motions in the epicentral zone of the 1994 Northridge, California earthquake were synthesized, and related uncertainties were
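
    The spectral-content adjustment for the HF part of a subsource time function can be sketched as follows (illustrative only, not the authors' package; the corner frequency and the simple spectral shape are assumptions): take a windowed segment of non-Gaussian noise and rescale its Fourier amplitudes so the signal follows a prescribed spectral shape while keeping the random phases.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt = 1024, 0.01
noise = rng.laplace(size=n) * np.hanning(n)    # pulsed, non-Gaussian noise

f = np.fft.rfftfreq(n, dt)
fc = 2.0                                        # corner frequency, Hz (assumed)
target = 1.0 / (1.0 + (f / fc) ** 2)            # omega-squared-like amplitude shape

spec = np.fft.rfft(noise)
shaped = spec / np.abs(spec) * target           # keep phases, impose amplitudes
shaped[0] = 0.0                                 # enforce zero mean
signal = np.fft.irfft(shaped, n)

# By construction the realized amplitude spectrum matches the target (away from
# the DC and Nyquist bins), while the time history stays random.
realized = np.abs(np.fft.rfft(signal))
err = float(np.max(np.abs(realized[1:-1] - target[1:-1])))
print(err < 1e-6)
```

    In the full method, each shaped subsource function is additionally convolved with its subsource-to-site Green function before summation.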

  13. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  14. Earthquake locations determined by the Southern Alaska seismograph network for October 1971 through May 1989

    Science.gov (United States)

    Fogleman, Kent A.; Lahr, John C.; Stephens, Christopher D.; Page, Robert A.

    1993-01-01

    This report describes the instrumentation and evolution of the U.S. Geological Survey’s regional seismograph network in southern Alaska, provides phase and hypocenter data for seismic events from October 1971 through May 1989, reviews the location methods used, and discusses the completeness of the catalog and the accuracy of the computed hypocenters. Included are arrival time data for explosions detonated under the Trans-Alaska Crustal Transect (TACT) in 1984 and 1985.The U.S. Geological Survey (USGS) operated a regional network of seismographs in southern Alaska from 1971 to the mid 1990s. The principal purpose of this network was to record seismic data to be used to precisely locate earthquakes in the seismic zones of southern Alaska, delineate seismically active faults, assess seismic risks, document potential premonitory earthquake phenomena, investigate current tectonic deformation, and study the structure and physical properties of the crust and upper mantle. A task fundamental to all of these goals was the routine cataloging of parameters for earthquakes located within and adjacent to the seismograph network.The initial network of 10 stations, 7 around Cook Inlet and 3 near Valdez, was installed in 1971. In subsequent summers additions or modifications to the network were made. By the fall of 1973, 26 stations extended from western Cook Inlet to eastern Prince William Sound, and 4 stations were located to the east between Cordova and Yakutat. A year later 20 additional stations were installed. Thirteen of these were placed along the eastern Gulf of Alaska with support from the National Oceanic and Atmospheric Administration (NOAA) under the Outer Continental Shelf Environmental Assessment Program to investigate the seismicity of the outer continental shelf, a region of interest for oil exploration. Since then the region covered by the network remained relatively fixed while efforts were made to make the stations more reliable through improved electronic

  15. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    Energy Technology Data Exchange (ETDEWEB)

    Duru, Kenneth, E-mail: kduru@stanford.edu [Department of Geophysics, Stanford University, Stanford, CA (United States); Dunham, Eric M. [Department of Geophysics, Stanford University, Stanford, CA (United States); Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA (United States)

    2016-01-15

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture
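
    The SBP property that underlies the energy estimates can be checked directly for the classic second-order operator pair (D, H): H is a diagonal norm and H@D + D.T@H equals the boundary matrix B = diag(-1, 0, ..., 0, 1), the discrete analogue of integration by parts. A small numerical verification:

```python
import numpy as np

n, h = 8, 1.0

# Second-order SBP first-derivative operator: one-sided at the boundaries,
# central difference in the interior.
D = np.zeros((n, n))
D[0, :2] = [-1, 1]
D[-1, -2:] = [-1, 1]
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5
D /= h

# Diagonal norm (quadrature) matrix with halved weights at the boundaries.
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])

# SBP property: H D + D^T H = diag(-1, 0, ..., 0, 1).
B = H @ D + D.T @ H
expected = np.zeros((n, n))
expected[0, 0], expected[-1, -1] = -1.0, 1.0
print(np.allclose(B, expected))        # the discrete integration-by-parts identity

# D is also exact for linear functions.
x = np.arange(n) * h
print(np.allclose(D @ x, np.ones(n)))
```

    The paper's operators are sixth-order accurate in the interior rather than second-order, but satisfy the same diagonal-norm SBP identity, which is what makes the semi-discrete energy estimate (and hence provable stability with weakly imposed friction laws) go through.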

  16. Earthquake source parameters along the Hellenic subduction zone and numerical simulations of historical tsunamis in the Eastern Mediterranean

    Science.gov (United States)

    Yolsal-Çevikbilen, Seda; Taymaz, Tuncay

    2012-04-01

    We studied source mechanism parameters and slip distributions of earthquakes with Mw ≥ 5.0 that occurred during 2000-2008 along the Hellenic subduction zone by using teleseismic P- and SH-waveform inversion methods. In addition, the major and well-known earthquake-induced Eastern Mediterranean tsunamis (e.g., 365, 1222, 1303, 1481, 1494, 1822 and 1948) were numerically simulated and several hypothetical tsunami scenarios were proposed to demonstrate the characteristics of tsunami waves, propagations and effects of coastal topography. The analogy of current plate boundaries, earthquake source mechanisms, various earthquake moment tensor catalogues and several empirical self-similarity equations, valid for global or local scales, were used to assume conceivable source parameters which constitute the initial and boundary conditions in simulations. Teleseismic inversion results showed that earthquakes along the Hellenic subduction zone can be classified into three major categories: [1] focal mechanisms of the earthquakes exhibiting E-W extension within the overriding Aegean plate; [2] earthquakes related to the African-Aegean convergence; and [3] focal mechanisms of earthquakes lying within the subducting African plate. Normal faulting mechanisms with left-lateral strike slip components were observed at the eastern part of the Hellenic subduction zone, and we suggest that they were probably associated with the overriding Aegean plate. However, earthquakes involved in the convergence between the Aegean and the Eastern Mediterranean lithospheres indicated thrust faulting mechanisms with strike slip components, and they had shallow focal depths (h < 45 km). Deeper earthquakes mainly occurred in the subducting African plate, and they exhibited predominantly strike-slip faulting mechanisms. Slip distributions on fault planes showed both complex and simple rupture propagations with respect to the variation of source mechanism and faulting geometry. We calculated low stress drop

  17. Coherence of Mach fronts during heterogeneous supershear earthquake rupture propagation: Simulations and comparison with observations

    Science.gov (United States)

    Bizzarri, A.; Dunham, Eric M.; Spudich, P.

    2010-01-01

    We study how heterogeneous rupture propagation affects the coherence of shear and Rayleigh Mach wavefronts radiated by supershear earthquakes. We address this question using numerical simulations of ruptures on a planar, vertical strike-slip fault embedded in a three-dimensional, homogeneous, linear elastic half-space. Ruptures propagate spontaneously in accordance with a linear slip-weakening friction law through both homogeneous and heterogeneous initial shear stress fields. In the 3-D homogeneous case, rupture fronts are curved owing to interactions with the free surface and the finite fault width; however, this curvature does not greatly diminish the coherence of Mach fronts relative to cases in which the rupture front is constrained to be straight, as studied by Dunham and Bhat (2008a). Introducing heterogeneity in the initial shear stress distribution causes ruptures to propagate at speeds that locally fluctuate above and below the shear wave speed. Calculations of the Fourier amplitude spectra (FAS) of ground velocity time histories corroborate the kinematic results of Bizzarri and Spudich (2008a): (1) The ground motion of a supershear rupture is richer in high frequencies than that of a subshear one. (2) When a Mach pulse is present, its high-frequency content overwhelms that arising from stress heterogeneity. The present numerical experiments indicate that a Mach pulse causes approximately an ω^(-1.7) high-frequency falloff in the FAS of ground displacement. Moreover, within the context of the employed representation of heterogeneities and over the range of parameter space that is accessible with current computational resources, our simulations suggest that while heterogeneities reduce peak ground velocity and diminish the coherence of the Mach fronts, ground motion at stations experiencing Mach pulses should be richer in high frequencies compared to stations without Mach pulses. In contrast to the foregoing theoretical results, we find no average elevation
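    The ω^(-1.7) falloff quoted above can be estimated from a displacement record by fitting a straight line to the log-log Fourier amplitude spectrum. A minimal sketch on a synthetic record with a known falloff (illustrative only, not the authors' processing):

```python
import numpy as np

def fas_falloff_exponent(signal, dt, fmin, fmax):
    """Fit FAS ~ f**(-p) over [fmin, fmax] in log-log space; return p."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=dt)
    fas = np.abs(np.fft.rfft(signal)) * dt        # Fourier amplitude spectrum
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(fas[band]), 1)
    return -slope

# synthetic displacement record whose FAS falls off as f**-1.7 by construction
rng = np.random.default_rng(0)
n, dt = 4096, 0.01
freqs = np.fft.rfftfreq(n, d=dt)
spec = np.zeros(len(freqs), dtype=complex)
pos = freqs > 0
spec[pos] = freqs[pos]**-1.7 * np.exp(1j * rng.uniform(0, 2 * np.pi, pos.sum()))
u = np.fft.irfft(spec, n=n)

p_hat = fas_falloff_exponent(u, dt, fmin=1.0, fmax=40.0)
print(round(p_hat, 2))
```

    The fit band (1-40 Hz here) should sit well inside the resolvable frequency range; the recovered exponent matches the one built into the synthetic spectrum.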

  18. Network Structure and Community Evolution on Twitter: Human Behavior Change in Response to the 2011 Japanese Earthquake and Tsunami

    Science.gov (United States)

    Lu, Xin; Brelsford, Christa

    2014-10-01

    To investigate the dynamics of social networks and the formation and evolution of online communities in response to extreme events, we collected three datasets from Twitter shortly before and after the 2011 earthquake and tsunami in Japan. We find that while almost all users increased their online activity after the earthquake, Japanese speakers, who are assumed to be more directly affected by the event, expanded the network of people they interact with to a much higher degree than English speakers or the global average. By investigating the evolution of communities, we find that the behavior of joining or quitting a community is far from random: users tend to stay in their current status, being unlikely to join a new community from a solitary state or to shift from their current community to another. While non-Japanese speakers did not change their conversation topics significantly after the earthquake, nearly all Japanese users changed their conversations to earthquake-related content. This study builds a systematic framework for investigating human behavior under extreme events with online social network data, and our findings on the dynamics of networks and communities may provide useful insight for understanding how patterns of social interaction are influenced by extreme events.

  19. Quantifying capability of a local seismic network in terms of locations and focal mechanism solutions of weak earthquakes

    Czech Academy of Sciences Publication Activity Database

    Fojtíková, Lucia; Kristeková, M.; Málek, Jiří; Sokos, E.; Csicsay, K.; Zahradník, J.

    2016-01-01

    Vol. 20, No. 1 (2016), 93-106. ISSN 1383-4649. R&D Projects: GA ČR GAP210/12/2336. Institutional support: RVO:67985891. Keywords: Focal-mechanism uncertainty * Little Carpathians * Relative location uncertainty * Seismic network * Uncertainty mapping * Waveform inversion * Weak earthquakes. Subject RIV: DC - Seismology, Volcanology, Earth Structure. Impact factor: 1.089, year: 2016

  20. Object-Oriented Analysis of Satellite Images Using Artificial Neural Networks for Post-Earthquake Buildings Change Detection

    Science.gov (United States)

    Khodaverdi zahraee, N.; Rastiveis, H.

    2017-09-01

    Earthquakes are among the most devastating natural events that have threatened human life throughout history. After an earthquake, information about the damaged area and the amount and type of damage is a great help to disaster managers in relief and reconstruction. It is very important that these measures be taken immediately after the earthquake, because any delay can increase the losses. The purpose of this paper is to propose and implement an automatic approach for mapping destroyed buildings after an earthquake using pre- and post-event high-resolution satellite images. In the proposed method, after preprocessing, both images are segmented using a multi-resolution segmentation technique. The segmentation results are then intersected in ArcGIS to obtain corresponding image objects in both images. After that, appropriate textural features, which better discriminate changed from unchanged areas, are calculated for all the image objects. Finally, the textural features extracted from the pre- and post-event images are differenced, and the resulting values are applied as an input feature vector to an artificial neural network that classifies the area into two classes, changed and unchanged. The proposed method was evaluated using WorldView-2 satellite images acquired before and after the 2010 Haiti earthquake. The reported overall accuracy of 93% proved the ability of the proposed method for post-earthquake building change detection.
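    The final step described above — differencing per-object features and feeding the differences to a neural network — can be sketched as follows. The three "texture" features and the training data are synthetic stand-ins, and the tiny one-hidden-layer network is only illustrative of the kind of ANN the paper uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-object texture features (e.g. mean, variance, entropy) for
# pre- and post-event image objects; "changed" objects get a shifted
# post-event feature distribution.
n = 400
pre = rng.normal(0.0, 1.0, (n, 3))
y = (rng.random(n) < 0.5).astype(int)                # 1 = changed (damaged)
post = pre + y[:, None] * rng.normal(2.0, 0.5, (n, 3))
X = post - pre                                        # feature differences

# Minimal one-hidden-layer network trained by gradient descent.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 0.5
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))          # sigmoid output
    g = (p - y) / n                                   # dLoss/dlogit
    gh = np.outer(g, W2) * (1 - h**2)                 # backprop to hidden layer
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
acc = ((p > 0.5).astype(int) == y).mean()
print("training accuracy:", acc)
```

    On these well-separated synthetic differences the classifier converges quickly; real textural features would, of course, overlap far more.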

  1. Study on tsunami due to offshore earthquakes for Korea coast. Literature survey and numerical simulation on earthquake and tsunami in the Japan Sea and the East China Sea

    International Nuclear Information System (INIS)

    Matsuyama, Masafumi; Aoyagi, Yasuhira; Inoue, Daiei; Choi, Weon-Hack; Kang, Keum-Seok

    2008-01-01

    In Korea, there has been concern about tsunami risks for nuclear power plants since the 1983 Nihonkai-Chubu earthquake tsunami, whose maximum run-up height reached 4 m just north of the Ulchin nuclear power plant site. The east coast of Korea was also struck by a tsunami a few metres high generated by the 1993 Hokkaido Nansei-Oki earthquake. Both source areas were in the region from west of Hokkaido to the eastern margin of the Japan Sea, which retains further tsunami potential. It is therefore necessary to study tsunami risks for the coast of Korea by means of geological investigation and numerical simulation. Historical records of earthquakes and tsunamis in the Japan Sea were re-compiled to evaluate tsunami potential, and a database of marine active faults in the Japan Sea was compiled to determine the regional potential for tsunami generation. Many well-developed reverse faults are found in the region from west of Hokkaido to the eastern margin of the Japan Sea. The authors found no historical earthquake in the East China Sea that caused a tsunami observed on the coast of Korea. Five fault models were therefore determined on the basis of the analysis of historical records and recent research on fault parameters and tsunamis. Tsunami heights were estimated by numerical simulation based on nonlinear dispersive wave theory. The simulations indicate that the tsunami heights in these cases are less than 0.25 m along the coast of Korea, and that the tsunami risk from these assumed faults does not lead to severe impact. It is concluded that tsunamis generated in the region from west of Hokkaido to the eastern margin of the Japan Sea consequently have the most significant impact on Korea. (author)

  2. Neural Network Emulation of Reionization Simulations

    Science.gov (United States)

    Schmit, Claude J.; Pritchard, Jonathan R.

    2018-05-01

    Next-generation radio experiments such as LOFAR, HERA and SKA are expected to probe the Epoch of Reionization and claim a first direct detection of the cosmic 21cm signal within the next decade. One of the major challenges for these experiments will be dealing with enormous incoming data volumes. Machine learning is key to increasing our data analysis efficiency. We consider the use of an artificial neural network to emulate 21cmFAST simulations and use it in a Bayesian parameter inference study. We then compare the network predictions to a direct evaluation of the EoR simulations and analyse the dependence of the results on the training set size. We find that a training set of 100 samples can recover the error contours of a full-scale MCMC analysis that evaluates the model at each step.
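    The emulator-in-the-loop inference idea can be illustrated with a toy stand-in: an "expensive" simulator, a 100-sample training set as in the study's smallest case, a cheap emulator (here simple interpolation standing in for the neural network), and a Metropolis sampler that only ever calls the emulator. The model and all parameter values are hypothetical, not 21cmFAST:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "simulator": maps a parameter theta to a spectrum-like curve.
k = np.linspace(0.1, 1.0, 20)
def simulator(theta):
    return np.exp(-k / theta)

# Training set of 100 (theta, output) pairs.
thetas = rng.uniform(0.2, 2.0, 100)
Y = np.array([simulator(t) for t in thetas])
order = np.argsort(thetas)

def emulate(theta):
    """Per-bin interpolation over the training thetas (emulator stand-in)."""
    return np.array([np.interp(theta, thetas[order], Y[order, j])
                     for j in range(Y.shape[1])])

# Metropolis sampling of theta given noisy "data", using only the emulator.
true_theta = 1.3
data = simulator(true_theta) + rng.normal(0, 0.01, len(k))
def log_like(theta):
    return -0.5 * np.sum((emulate(theta) - data)**2) / 0.01**2

theta, samples = 1.0, []
for _ in range(3000):
    prop = theta + rng.normal(0, 0.05)
    if 0.2 < prop < 2.0 and np.log(rng.random()) < log_like(prop) - log_like(theta):
        theta = prop
    samples.append(theta)
post_mean = np.mean(samples[1000:])
print("posterior mean:", round(post_mean, 2))
```

    The key design point is that the sampler's cost is decoupled from the simulator's: after the 100 training runs, each MCMC step costs only an emulator evaluation.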

  3. Numerical simulation of co-seismic deformation of the 2011 Japan Mw 9.0 earthquake

    Directory of Open Access Journals (Sweden)

    Zhang Keliang

    2011-08-01

    Full Text Available Co-seismic displacements associated with the Mw 9.0 earthquake of March 11, 2011 in Japan are numerically simulated on the basis of a finite-fault dislocation model with the PSGRN/PSCMP software. Compared with the inland GPS observations, 90% of the computed eastward, northward and vertical displacements have residuals of less than 0.10 m, suggesting that the simulated results can, to a certain extent, be used to demonstrate the co-seismic deformation in the near field. In this model, the maximum eastward displacement increases from 6 m along the coast to 30 m near the epicenter, where the maximum southward displacement is 13 m. The three-dimensional display shows that the vertical displacement reaches a maximum uplift of 14.3 m, which is comparable to the tsunami height in the near-trench region. The maximum subsidence is 5.3 m.

  4. Detecting earthquakes over a seismic network using single-station similarity measures

    Science.gov (United States)

    Bergen, Karianne J.; Beroza, Gregory C.

    2018-06-01

    New blind waveform-similarity-based detection methods, such as Fingerprint and Similarity Thresholding (FAST), have shown promise for detecting weak signals in long-duration, continuous waveform data. While blind detectors are capable of identifying similar or repeating waveforms without templates, they can also be susceptible to false detections due to local correlated noise. In this work, we present a set of three new methods that allow us to extend single-station similarity-based detection over a seismic network: event-pair extraction, pairwise pseudo-association, and event resolution together form a post-processing pipeline that combines single-station similarity measures (e.g. the FAST sparse similarity matrix) from each station in a network into a list of candidate events. The core technique, pairwise pseudo-association, leverages the pairwise structure of event detections in its network detection model, which allows it to identify events observed at multiple stations in the network without modeling the expected moveout. Though our approach is general, we apply it to extend FAST over a sparse seismic network. We demonstrate that our network-based extension of FAST is sensitive and maintains a low false detection rate. As a test case, we apply our approach to 2 weeks of continuous waveform data from five stations during the foreshock sequence prior to the 2014 Mw 8.2 Iquique earthquake. Our method identifies nearly five times as many events as the local seismicity catalogue (including 95 per cent of the catalogue events), and less than 1 per cent of these candidate events are false detections.
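    The core idea of pairwise pseudo-association — grouping single-station detection pairs whose event times agree across stations, without any moveout model — might be sketched as follows. This is a deliberately simplified toy, not the published algorithm, and the station names and times are hypothetical:

```python
def pseudo_associate(station_pairs, tol=2.0, min_stations=2):
    """Group single-station similar-event pairs into network candidates.

    station_pairs maps station -> list of (t1, t2) detection-pair times in
    seconds.  Pairs from different stations are associated when both times
    agree within `tol`, with no moveout model -- a simplified stand-in for
    the paper's pairwise pseudo-association step.
    """
    pairs = [(s, a, b) for s, lst in station_pairs.items() for a, b in lst]
    events, used = [], [False] * len(pairs)
    for i, (si, a1, b1) in enumerate(pairs):
        if used[i]:
            continue
        group = {si}
        for j in range(i + 1, len(pairs)):
            sj, a2, b2 = pairs[j]
            if sj != si and abs(a1 - a2) <= tol and abs(b1 - b2) <= tol:
                group.add(sj)
                used[j] = True
        if len(group) >= min_stations:
            events.append((a1, b1, sorted(group)))
    return events

demo = {
    "ST1": [(100.0, 340.0), (500.0, 820.0)],   # hypothetical pair times
    "ST2": [(100.8, 340.5)],                   # same event pair, small delay
    "ST3": [(101.2, 339.7), (999.0, 1200.0)],  # second pair: local noise
}
candidates = pseudo_associate(demo)
print(candidates)
```

    A pair seen at only one station (the local-noise case above) never reaches the candidate list, which is what suppresses single-station false detections.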

  5. 3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty

    Science.gov (United States)

    Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.

    2017-01-01

    Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large, megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, and thus there is no direct measure of the intensity of ground shaking or specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress-drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast. Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in

  6. Earthquake cycle simulations with rate-and-state friction and power-law viscoelasticity

    Science.gov (United States)

    Allison, Kali L.; Dunham, Eric M.

    2018-05-01

    We simulate earthquake cycles with rate-and-state fault friction and off-fault power-law viscoelasticity for the classic 2D antiplane shear problem of a vertical, strike-slip plate boundary fault. We investigate the interaction between fault slip and bulk viscous flow with experimentally-based flow laws for quartz-diorite and olivine for the crust and mantle, respectively. Simulations using three linear geotherms (dT/dz = 20, 25, and 30 K/km) produce different deformation styles at depth, ranging from significant interseismic fault creep to purely bulk viscous flow. However, they have almost identical earthquake recurrence interval, nucleation depth, and down-dip coseismic slip limit. Despite these similarities, variations in the predicted surface deformation might permit discrimination of the deformation mechanism using geodetic observations. Additionally, in the 25 and 30 K/km simulations, the crust drags the mantle; the 20 K/km simulation also predicts this, except within 10 km of the fault where the reverse occurs. However, basal tractions play a minor role in the overall force balance of the lithosphere, at least for the flow laws used in our study. Therefore, the depth-integrated stress on the fault is balanced primarily by shear stress on vertical, fault-parallel planes. Because strain rates are higher directly below the fault than far from it, stresses are also higher. Thus, the upper crust far from the fault bears a substantial part of the tectonic load, resulting in unrealistically high stresses. In the real Earth, this might lead to distributed plastic deformation or formation of subparallel faults. Alternatively, fault pore pressures in excess of hydrostatic and/or weakening mechanisms such as grain size reduction and thermo-mechanical coupling could lower the strength of the ductile fault root in the lower crust and, concomitantly, off-fault upper crustal stresses.
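    The role of the geotherm in simulations like these comes from the temperature dependence of power-law creep, with strain rate = A σ^n exp(-Q/RT). A hedged sketch comparing frictional strength with creep strength for the three linear geotherms; the flow-law constants are rough quartz-diorite-like values chosen for illustration, not those used in the study:

```python
import numpy as np

R = 8.314            # gas constant, J/mol/K
A = 1.3e-3           # pre-exponential factor, MPa^-n s^-1 (illustrative)
n = 2.4              # stress exponent (illustrative)
Q = 219e3            # activation energy, J/mol (illustrative)
strain_rate = 1e-14  # tectonic strain rate, 1/s

def strengths(dTdz, zmax=50.0, mu=0.6, rho=2700.0, g=9.8):
    z = np.linspace(0.1, zmax, 500)               # depth, km
    T = 273.0 + dTdz * z                          # linear geotherm, K
    brittle = mu * rho * g * (z * 1e3) / 1e6      # frictional strength, MPa
    # invert the flow law for the stress sustaining the imposed strain rate
    ductile = (strain_rate / (A * np.exp(-Q / (R * T)))) ** (1.0 / n)
    return z, brittle, ductile

bdt = {}
for dTdz in (20.0, 25.0, 30.0):
    z, bstr, dstr = strengths(dTdz)
    bdt[dTdz] = z[np.argmin(np.abs(bstr - dstr))]  # crossover depth
    print(f"dT/dz = {dTdz:.0f} K/km -> brittle-ductile transition ~ {bdt[dTdz]:.1f} km")
```

    The crossover of the two strength curves shallows as the geotherm steepens, which is the first-order reason the three geotherms produce different deformation styles at depth.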

  7. Fast 3D seismic wave simulations of 24 August 2016 Mw 6.0 central Italy earthquake for visual communication

    Directory of Open Access Journals (Sweden)

    Emanuele Casarotti

    2016-12-01

    Full Text Available We present the first application of a fast-reacting framework for 3D simulations of seismic wave propagation generated by earthquakes of magnitude Mw ≥ 5 in the Italian region. The driving motivation is to offer a visualization of the natural phenomenon to the general public, but also to provide preliminary modeling to experts and civil protection operators. We report here a description of this framework during the emergency of the 24 August 2016 Mw 6.0 central Italy earthquake, a discussion of the accuracy of the simulation for this seismic event, and a preliminary critical analysis of the visualization structure and of the public's reaction.

  8. QuakeUp: An advanced tool for a network-based Earthquake Early Warning system

    Science.gov (United States)

    Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo

    2017-04-01

    Currently developed and operational regional Earthquake Early Warning systems rest on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method which allows an alert to be issued based upon real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently gemmated from the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanding P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, as well as of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM) and by mapping the measured and
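    The characteristic period τ_c used by such early-warning platforms is commonly defined (in Kanamori-style formulations) as τ_c = 2π·sqrt(∫u² dt / ∫u̇² dt) over the early P-wave window. A minimal sketch of measuring τ_c and P_d, checked on a sinusoid whose period τ_c should recover:

```python
import numpy as np

def tau_c_and_pd(disp, dt):
    """Characteristic period tau_c and peak displacement P_d from an early
    P-wave displacement window (tau_c = 2*pi*sqrt(int u^2 / int u'^2))."""
    vel = np.gradient(disp, dt)                  # numerical differentiation
    r = np.sum(vel**2) / np.sum(disp**2)         # dt cancels in the ratio
    return 2.0 * np.pi / np.sqrt(r), np.max(np.abs(disp))

# sanity check: a 2 s sinusoid of 3 cm amplitude
dt = 0.005
t = np.arange(0.0, 4.0, dt)
u = 0.03 * np.sin(2.0 * np.pi * t / 2.0)
tc, pd = tau_c_and_pd(u, dt)
print(round(tc, 2), round(pd, 3))
```

    In a real-time setting the window would expand progressively as more P-wave data arrive, with τ_c and P_d re-evaluated at each update.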

  9. The results of the pilot project in Georgia to install a network of electromagnetic radiation before the earthquake

    Science.gov (United States)

    Machavariani, Kakhaber; Khazaradze, Giorgi; Turazashvili, Ioseb; Kachakhidze, Nino; Kachakhidze, Manana; Gogoberidze, Vitali

    2016-04-01

    The world's scientific literature has recently published many important and interesting works on the VLF/LF electromagnetic emissions observed during earthquake preparation, which are promising with regard to reliable earthquake prediction. Because Georgia is located in the Trans-Asian earthquake zone, a VLF/LF electromagnetic emissions network is essential there, and it has now been possible to take the first steps. Our university holds Shota Rustaveli National Science Foundation grant № DI/21/9-140/13, which included the installation of a receiver in Georgia, but the purchase of the device failed for lack of funds. However, European colleagues (Prof. Dr. P. F. Biagi and Prof. Dr. Aydın Büyüksaraç) made the installation of a receiver possible. An expedition of Turkish scientists to Georgia was organized in August 2015: they brought a VLF/LF electromagnetic emissions receiver and, together with Georgian scientists, installed it near Tbilisi. The station was named GEO-TUR, and with it Georgia joined the work of the European network. It is now possible to monitor earthquakes in Georgia in terms of electromagnetic radiation, which enables scientists to obtain relevant information not only for the territory of Georgia but also for seismically active European countries. In order to maintain and develop this new direction in our country, it is necessary to keep an independent group of scientists studying pre-earthquake electromagnetic radiation in Georgia, and it is necessary and appropriate for specialists to engage in joint international research there. The work is carried out in the frame of the grant (DI/21/9-140/13, "Pilot project of pre-earthquake Very Low Frequency/Low Frequency electromagnetic emission network installation in Georgia") with the financial support of the Shota Rustaveli National Science Foundation.

  10. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations
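    The sequential core that optimistic engines like ROSS parallelize is a timestamp-ordered event queue. A deliberately tiny sketch — a single message hopping around a ring of nodes with fixed link latency, with no optimism or rollback — illustrates the event-scheduling pattern:

```python
import heapq

# Minimal sequential discrete-event core: events are (time, seq, node, hop)
# tuples popped in timestamp order.  The seq counter breaks timestamp ties
# deterministically.  Topology and latency are illustrative, not a model of
# any real torus/dragonfly network.
def simulate(num_nodes=4, link_delay=1.5, hops=6):
    events, log, seq = [], [], 0

    def schedule(t, node, hop):
        nonlocal seq
        heapq.heappush(events, (t, seq, node, hop))
        seq += 1

    schedule(0.0, 0, 0)                    # inject one message at node 0
    while events:
        t, _, node, hop = heapq.heappop(events)
        log.append((t, node))
        if hop < hops:                     # forward to the next node in the ring
            schedule(t + link_delay, (node + 1) % num_nodes, hop + 1)
    return log

for t, node in simulate():
    print(f"t={t:4.1f}  message at node {node}")
```

    A parallel optimistic engine partitions the nodes across processes and lets each process pop events speculatively, rolling back when a straggler event with an earlier timestamp arrives; the event/handler structure stays the same.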

  11. Numerical simulation of multiple-physical fields coupling for thermal anomalies before earthquakes: A case study of the 2008 Wenchuan Ms8.0 earthquake in southwest China

    Science.gov (United States)

    Deng, Z.

    2017-12-01

    Thermal anomalies appearing before major earthquakes have become a highly debated issue. There are various hypotheses about the mechanism of these anomalies, but for lack of sufficient evidence the mechanism still requires further research. The gestation and occurrence of a major earthquake are related to the interaction of multiple physical fields, and underground fluids surging to the surface are a likely cause of the thermal anomaly. This study tries to answer questions such as how geothermal energy is transferred to the surface and how the multiple physical fields interact. The 2008 Wenchuan Ms8.0 earthquake is one of the largest events of the last decade in mainland China. Remote sensing studies indicate that distinguishable thermal anomalies occurred several days before the earthquake; the anomaly values were more than 3 times the normal average and were distributed along the Longmenshan fault zone. Based on geological and geophysical data, a 2D dynamic model of coupled stress, seepage and thermal fields (HTM model) was constructed. Using the COMSOL multi-physics software, this work tries to reveal the generation process and distribution patterns of thermal anomalies prior to thrust-type major earthquakes. The simulations give the following results. (1) Before micro-rupture, as compression increases, the heat flow converges toward the fault in the footwall on the whole, while in the hanging wall of the fault, particularly near the ground surface, the heat flows upward. In the fault zone, heat flows upward along the fracture surface; the heat flux in the fracture zone is slightly larger than in the wall rock, but the values are all very small. (2) After the occurrence of micro-fracture, the heat flow rapidly converges on the faults. In the fault zones, the heat flow accelerates upward along the fracture surfaces, the heat flux increases suddenly, and the vertical heat flux reaches its maximum. The heat flux in the 3 fracture

  12. The network construction of CSELF for earthquake monitoring and its preliminary observation

    Science.gov (United States)

    Tang, J.; Zhao, G.; Chen, X.; Bing, H.; Wang, L.; Zhan, Y.; Xiao, Q.; Dong, Z.

    2017-12-01

    Electromagnetic (EM) anomalies are among the most sensitive physical phenomena in short-term earthquake precursory monitoring, and scientists believe that EM monitoring is one of the most promising means of earthquake forecasting. However, existing ground-based EM observation is confronted with the increasing impact of cultural noise and lacks observations in the frequency range above 1 Hz. Controlled-source extremely low frequency (CSELF) EM is a promising new approach: it has many advantages, including a high S/N ratio, large coverage area and large probing depth, which facilitate the identification and capture of anomalous signals, and it can also be used to study variations of the electromagnetic field and changes in the electrical structure of the crustal medium. The first CSELF EM network for earthquake precursory monitoring, with 30 observatories in China, has been constructed. The observatories are distributed in the area surrounding Beijing and in the southern part of the North-South Seismic Zone, and each station is equipped with a GMS-07 system made by Metronix. The observations mix controlled-source and natural-source recording: when the controlled source is not transmitting, the natural-source EM signal is recorded. In general, signals at 3-5 frequencies in the 0.1-300 Hz band are transmitted every morning and evening at a fixed time (2 hours in length). Outside these times, the natural field is observed to extend the frequency band (0.001-1000 Hz) using three sampling rates: 4096 Hz for the high-frequency band, 256 Hz for the mid-frequency band and 16 Hz for the low-frequency band. The low-frequency band is recorded continuously all day, while the high and mid bands use sliced records, cycling acquisition every 10 minutes with record lengths of about 4 to 8 seconds and 64 to 128 seconds, respectively. All the data are automatically processed by a server installed at the observatory, and the EDI files, including EM field spectra and MT responses, together with the time-series files, are sent to the data center by internet

  13. Mesoscopic Simulations of Crosslinked Polymer Networks

    Science.gov (United States)

    Megariotis, Grigorios; Vogiatzis, Georgios G.; Schneider, Ludwig; Müller, Marcus; Theodorou, Doros N.

    2016-08-01

    A new methodology, and the corresponding C++ code, for mesoscopic simulations of elastomers is presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn segments. From the thermodynamic point of view, the system is described by a Helmholtz free energy containing contributions from entropic springs between successive beads along a chain, slip-springs representing entanglements between beads on different chains, and non-bonded interactions. The methodology is employed for the calculation of the stress relaxation function from equilibrium simulations several microseconds long, as well as for the prediction of stress-strain curves of crosslinked polymer networks under deformation.
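    The Brownian Dynamics part of such a scheme can be sketched in a few lines: an Euler-Maruyama step for a single bead-spring chain with harmonic (entropic) springs. Slip-springs, non-bonded terms and the kinetic Monte Carlo moves of the actual methodology are omitted, and all parameters are illustrative reduced units:

```python
import numpy as np

rng = np.random.default_rng(3)

kT, gamma, k_spring, dt = 1.0, 1.0, 3.0, 1e-3   # reduced units
nbeads, nsteps = 20, 20000

x = rng.normal(0.0, 0.5, (nbeads, 3))            # bead positions
samples = []
for step in range(nsteps):
    bond = x[1:] - x[:-1]                        # vectors along springs
    f = np.zeros_like(x)
    f[:-1] += k_spring * bond                    # spring force on left bead
    f[1:]  -= k_spring * bond                    # reaction on right bead
    # Euler-Maruyama step: drift from forces plus thermal noise
    x += (f / gamma) * dt + np.sqrt(2.0 * kT * dt / gamma) * rng.normal(size=x.shape)
    if step >= 10000 and step % 100 == 0:        # sample after equilibration
        samples.append(((x[1:] - x[:-1])**2).mean())

# at equilibrium each bond component is Gaussian with variance kT/k_spring
var = float(np.mean(samples))
print("mean bond variance per component:", round(var, 3))
```

    Checking the measured bond variance against the analytic Boltzmann value kT/k_spring is a standard sanity test before adding slip-springs or non-bonded interactions on top.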

  14. Ground-Motion Simulations of the 2008 Ms8.0 Wenchuan, China, Earthquake Using Empirical Green's Function Method

    Science.gov (United States)

    Zhang, W.; Zhang, Y.; Yao, X.

    2010-12-01

    On May 12, 2008, a huge earthquake of magnitude Ms8.0 occurred in Wenchuan, Sichuan Province, China. This event was the most devastating earthquake in mainland China since the 1976 M7.8 Tangshan earthquake. It resulted in tremendous loss of life and property, with about 90,000 people killed. Because it occurred in a mountainous area, this great earthquake and the following thousands of aftershocks also caused many other geological disasters, such as landslides, mud-rock flows and "quake lakes" formed by landslide dams. The earthquake occurred along the Longmenshan fault, as the result of motion on a northeast-striking reverse or thrust fault on the northwestern margin of the Sichuan Basin. The earthquake's epicenter and focal mechanism are consistent with its having occurred as the result of movement on the Longmenshan fault or a tectonically related fault. The earthquake reflects tectonic stresses resulting from the convergence of crustal material slowly moving from the high Tibetan Plateau, to the west, against the strong crust underlying the Sichuan Basin and southeastern China. In this study, we simulate the near-field strong ground motions of this great event using the empirical Green's function (EGF) method. Referring to published inversion source models, we first assume that there are three asperities on the rupture area and choose three different small events as the EGFs. We then identify the parameters of the source model using a genetic algorithm (GA). We calculate the synthetic waveforms based on the obtained source model and compare them with the observed records. Our results show that most of the synthetic waveforms agree very well with the observed ones, which demonstrates the validity and stability of the method. Finally, we forward-model the near-field strong ground motions near the source region and try to explain the damage distribution caused by the great earthquake.
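    The genetic-algorithm identification step can be illustrated on a toy problem: recovering two parameters of a stand-in source pulse by minimizing L2 waveform misfit. The forward model, the GA operators (truncation selection, blend crossover, Gaussian mutation) and all settings are illustrative, not the authors' EGF synthesis or GA configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

t = np.linspace(0.0, 10.0, 200)

def forward(rise, amp):
    """Stand-in source pulse with a rise time and an amplitude."""
    return amp * (t / rise) * np.exp(1.0 - t / rise)

target = forward(2.0, 3.0)                 # "observed" waveform

def misfit(p):
    return np.sum((forward(*p) - target)**2)

# population of candidate (rise, amp) parameter pairs
pop = np.column_stack([rng.uniform(0.5, 5.0, 60), rng.uniform(0.5, 6.0, 60)])
for _ in range(80):
    fit = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fit)][:20]    # keep the best third (elitism)
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(0, 20, 2)]
        w = rng.random()
        child = w * a + (1.0 - w) * b      # blend crossover
        child += rng.normal(0.0, 0.05, 2)  # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(p) for p in pop])]
print("recovered parameters:", np.round(best, 2))
```

    In the real application the misfit would compare EGF-synthesized seismograms with observed records, and the chromosome would encode the asperity parameters rather than a single pulse.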

  15. Quasi-dynamic versus fully dynamic simulations of earthquakes and aseismic slip with and without enhanced coseismic weakening

    Science.gov (United States)

    Thomas, Marion Y.; Lapusta, Nadia; Noda, Hiroyuki; Avouac, Jean-Philippe

    2014-03-01

    Physics-based numerical simulations of earthquakes and slow slip, coupled with field observations and laboratory experiments, can, in principle, be used to determine fault properties and potential fault behaviors. Because of the computational cost of simulating inertial wave-mediated effects, their representation is often simplified. The quasi-dynamic (QD) approach approximately accounts for inertial effects through a radiation damping term. We compare QD and fully dynamic (FD) simulations by exploring the long-term behavior of rate-and-state fault models with and without additional weakening during seismic slip. The models incorporate a velocity-strengthening (VS) patch in a velocity-weakening (VW) zone, to consider rupture interaction with a slip-inhibiting heterogeneity. Without additional weakening, the QD and FD approaches generate qualitatively similar slip patterns with quantitative differences, such as slower slip velocities and rupture speeds during earthquakes and more propensity for rupture arrest at the VS patch in the QD cases. Simulations with additional coseismic weakening produce qualitatively different patterns of earthquakes, with near-periodic pulse-like events in the FD simulations and much larger crack-like events accompanied by smaller events in the QD simulations. This is because the FD simulations with additional weakening allow earthquake rupture to propagate at a much lower level of prestress than the QD simulations. The resulting much larger ruptures in the QD simulations are more likely to propagate through the VS patch, unlike for the cases with no additional weakening. Overall, the QD approach should be used with caution, as the QD simulation results could drastically differ from the true response of the physical model considered.
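    The radiation-damping idea behind the QD approximation is easiest to see in a single spring-slider with rate-and-state friction, where the term ηV (with η = μ/2c_s) stands in for wave-mediated stress transfer. A hedged sketch with generic laboratory-style parameters (not those of the study); the slider produces repeated stick-slip events whose peak velocity is capped by the damping term:

```python
import math

sigma, f0, a, b = 100e6, 0.6, 0.010, 0.015   # normal stress (Pa), friction params
Dc, V0, Vpl = 1e-4, 1e-6, 1e-9               # state distance (m), ref. and plate rates (m/s)
k = 2e9                                      # spring stiffness (Pa/m), < sigma*(b-a)/Dc
eta = 30e9 / (2.0 * 3000.0)                  # mu/(2*c_s): radiation damping (Pa s/m)

# start 0.5 MPa above steady state so the limit cycle develops quickly
tau0 = sigma * (f0 + (a - b) * math.log(Vpl / V0)) + 0.5e6
t, delta, V = 0.0, 0.0, Vpl
theta = Dc / Vpl                             # aging-law state variable
events, sliding = 0, False
for _ in range(150000):
    dt = min(0.1 * Dc / V, 1e5)              # adaptive step: small when fast
    tau_el = tau0 + k * (Vpl * t - delta)    # elastic loading stress
    # solve tau_el = sigma*f(V, theta) + eta*V for V (Newton in ln V)
    lnV = math.log(V)
    for _ in range(60):
        Vn = math.exp(lnV)
        G = tau_el - sigma * (f0 + a * math.log(Vn / V0)
                              + b * math.log(V0 * theta / Dc)) - eta * Vn
        if abs(G) < 1.0:
            break
        lnV += G / (sigma * a + eta * Vn)
    V = math.exp(lnV)
    # exact aging-law update assuming V constant over dt
    theta = Dc / V + (theta - Dc / V) * math.exp(-V * dt / Dc)
    delta += V * dt
    t += dt
    if V > 1e-3 and not sliding:             # count onsets of seismic slip
        events += 1
    sliding = V > 1e-3
print("seismic events:", events)
```

    A fully dynamic treatment would replace ηV with the actual wave-mediated stress transfer; the paper's point is that this substitution can change not just slip speeds but the qualitative event pattern.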

  16. Simulated earthquake testing of naturally aged C and D LCU-13 station battery cells

    International Nuclear Information System (INIS)

    Tulk, J.D.; Black, D.A.; Janis, W.J.; Royce, C.J.

    1985-03-01

    A sample of 10-year-old lead-acid storage batteries from the North Anna Nuclear Power Station (Virginia Electric and Power Company) was tested on a shaker table. Seven cells were subjected to simulated earthquakes with a ZPA of approximately 1.5 g. All seven delivered uninterrupted power during the shaker tests and were able to pass a post-seismic capacity test. Two cells were shaken to higher intensities (ZPA approximately equal to 2 g). These cells provided uninterrupted power during the shaker tests, but had post-seismic capacities that were below the required level for Class 1E battery cells. After the tests, several cells were disassembled and examined. Internal components were in good condition, with limited oxidation and plate cracking.

  17. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    Science.gov (United States)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as the slip distribution and rupture history permits estimation of the complex coseismic seafloor deformation. From the numerous published seismic source studies, the most relevant coseismic source models are tested. Comparing the signals predicted from both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  18. The ordered network structure of M ≥ 6 strong earthquakes and its prediction in the Jiangsu-South Yellow Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Men, Ke-Pei [Nanjing Univ. of Information Science and Technology (China). College of Mathematics and Statistics; Cui, Lei [California Univ., Santa Barbara, CA (United States). Applied Probability and Statistics Dept.

    2013-05-15

    The Jiangsu-South Yellow Sea region is one of the key seismic monitoring defence areas in the eastern part of China. Since 1846, M ≥ 6 strong earthquakes have shown an obvious commensurability and orderliness in this region. The main orderly values are 74~75 a, 57~58 a, 11~12 a, and 5~6 a, of which 74~75 a and 57~58 a play an outstanding predictive role. According to the information prediction theory of Wen-Bo Weng, we constructed the ordered network structure of M ≥ 6 strong earthquakes in the South Yellow Sea and the whole region. On this basis, we analyzed and discussed the variation of seismicity in detail and made a trend prediction of M ≥ 6 strong earthquakes in the future. The results show that the region entered a new quiet episode in 1998 which may continue until about 2042; the first M ≥ 6 strong earthquake of the next active episode will probably occur around 2053, most likely in the sea area of the South Yellow Sea; and the second and third events, or a strong earthquake swarm, will probably occur around 2058 and 2070, respectively. (orig.)

  19. Analysis of Time Delay Simulation in Networked Control System

    OpenAIRE

    Nyan Phyo Aung; Zaw Min Naing; Hla Myo Tun

    2016-01-01

    The paper presents a PD controller for Networked Control Systems (NCS) with delay. The major challenge in an NCS is the delay of data transmission across the communication network. A comparative performance analysis is carried out for different network medium delays. In this paper, simulation is carried out on an AC servo motor control system using the CAN bus as the communication network medium. The TrueTime toolbox of MATLAB is used for the simulation to analy...
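    A networked control loop of this kind can be sketched in a few lines. The plant model, PD gains and fixed delay below are assumptions for illustration; the paper itself simulates an AC servo over a CAN bus with the TrueTime toolbox:

```python
# Toy discrete-time networked control loop: a PD controller acts on
# measurements that arrive after a fixed transmission delay, modelled
# as a FIFO delay line. Plant, gains and delay are illustrative
# assumptions, not the paper's TrueTime/CAN setup.
def simulate(delay_steps, kp=2.0, kd=0.5, dt=0.01, steps=2000):
    x, v = 0.0, 0.0                   # plant position and velocity
    buf = [0.0] * (delay_steps + 1)   # network delay line for measurements
    prev_err = 1.0
    for _ in range(steps):
        buf.append(x)
        measured = buf.pop(0)         # delayed measurement reaches controller
        err = 1.0 - measured          # setpoint = 1.0
        u = kp * err + kd * (err - prev_err) / dt
        prev_err = err
        # damped double-integrator plant: x'' = u - x'
        v += dt * (u - v)
        x += dt * v
    return x

print(simulate(0))   # settles near the setpoint
print(simulate(5))   # still settles with a modest delay
```

Increasing `delay_steps` further eventually destabilizes the loop, which is the phenomenon the comparative delay analysis in the paper quantifies.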

  20. Learning in innovation networks: Some simulation experiments

    Science.gov (United States)

    Gilbert, Nigel; Ahrweiler, Petra; Pyka, Andreas

    2007-05-01

    According to the organizational learning literature, the greatest competitive advantage a firm has is its ability to learn. In this paper, a framework for modeling learning competence in firms is presented to improve the understanding of managing innovation. Firms with different knowledge stocks attempt to improve their economic performance by engaging in radical or incremental innovation activities and through partnerships and networking with other firms. In trying to vary and/or to stabilize their knowledge stocks by organizational learning, they attempt to adapt to environmental requirements while the market strongly selects on the results. The simulation experiments show the impact of different learning activities, underlining the importance of innovation and learning.

  1. Mobile-ip Aeronautical Network Simulation Study

    Science.gov (United States)

    Ivancic, William D.; Tran, Diepchi T.

    2001-01-01

    NASA is interested in applying mobile Internet protocol (mobile-ip) technologies to its space and aeronautics programs. In particular, mobile-ip will play a major role in the Advanced Aeronautic Transportation Technology (AATT), the Weather Information Communication (WINCOMM), and the Small Aircraft Transportation System (SATS) aeronautics programs. This report presents the results of a simulation study of mobile-ip for an aeronautical network. The study was performed to determine the performance of the transmission control protocol (TCP) in a mobile-ip environment and to gain an understanding of how long delays, handoffs, and noisy channels affect mobile-ip performance.

  2. Earthquake-induced landslide-susceptibility mapping using an artificial neural network

    Directory of Open Access Journals (Sweden)

    S. Lee

    2006-01-01

    The purpose of this study was to apply and verify landslide-susceptibility analysis techniques using an artificial neural network and a Geographic Information System (GIS) in Baguio City, Philippines. The landslides induced by the 16 July 1990 earthquake were studied. Landslide locations were identified from interpretation of aerial photographs and field surveys, and a spatial database was constructed from topographic maps, geology, land cover and terrain mapping units. Factors that influence landslide occurrence, such as slope, aspect, curvature and distance from drainage, were calculated from the topographic database. Lithology and distance from faults were derived from the geology database. Land cover was identified from the topographic database. Terrain map units were interpreted from aerial photographs. These factors were used with an artificial neural network to analyze landslide susceptibility. Each factor's weight was determined by back-propagation. Landslide-susceptibility indices were calculated using the back-propagation weights, and susceptibility maps were constructed from GIS data. The susceptibility map was compared with known landslide locations and verified, demonstrating a prediction accuracy of 93.20%.
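    The back-propagation step that determines the factor weights can be illustrated with a small network on synthetic data. The factor values, labels, layer sizes and learning rate below are assumptions for illustration, not the study's GIS inputs:

```python
import numpy as np

# Minimal back-propagation sketch in the spirit of the study: a small
# network learns weights relating terrain factors (e.g. slope, aspect,
# curvature, distance to drainage) to landslide occurrence. The data
# here is synthetic; real inputs would come from the GIS database.
rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # 4 normalized factors
y = (0.7 * X[:, 0] + 0.3 * X[:, 3] > 0.6).astype(float)[:, None]

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):                     # plain gradient descent
    h = sig(X @ W1 + b1)
    p = sig(h @ W2 + b2)                      # susceptibility index in [0, 1]
    g2 = (p - y) / len(X)                     # cross-entropy gradient at output
    g1 = (g2 @ W2.T) * h * (1 - h)            # back-propagated to hidden layer
    W2 -= h.T @ g2;  b2 -= g2.sum(0)
    W1 -= X.T @ g1;  b1 -= g1.sum(0)

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2f}")
```

In the study, the trained weights play the role of the factor importances used to compute the susceptibility index at each map cell.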

  3. Primitive chain network simulations of probe rheology.

    Science.gov (United States)

    Masubuchi, Yuichi; Amamoto, Yoshifumi; Pandey, Ankita; Liu, Cheng-Yang

    2017-09-27

    Probe rheology experiments, in which the dynamics of a small amount of probe chains dissolved in immobile matrix chains is examined, have been performed to aid the development of molecular theories of entangled polymer dynamics. Although probe chain dynamics in probe rheology is treated hypothetically as single-chain dynamics in a fixed tube-shaped confinement, it has not been fully elucidated. For instance, the end-to-end relaxation of probe chains is slower than that for monodisperse melts, contrary to the conventional molecular theories. In this study, the viscoelastic and dielectric relaxations of probe chains were calculated by primitive chain network simulations. The simulations semi-quantitatively reproduced the dielectric relaxation, which reflects the effect of constraint release on the end-to-end relaxation. Fair agreement was also obtained for the viscoelastic relaxation time. However, the viscoelastic relaxation intensity was underestimated, possibly due to some flaws in the model for the inter-chain cross-correlations between probe and matrix chains.

  4. Chain networking revealed by molecular dynamics simulation

    Science.gov (United States)

    Zheng, Yexin; Tsige, Mesfin; Wang, Shi-Qing

    Based on the Kremer-Grest model for entangled polymer melts, we demonstrate how the response of a polymer glass depends critically on the chain length. After quenching two melts of very different chain lengths (350 beads per chain and 30 beads per chain) into deeply glassy states, we subject them to uniaxial extension. Our MD simulations show that the glass of long chains undergoes stable necking after yielding, whereas the system of short chains is unable to neck and breaks up after strain localization. During ductile extension of the polymer glass made of long chains, significant chain tension builds up in the load-bearing strands (LBSs). Further analysis is expected to reveal evidence of activation of the primary structure during post-yield extension. These results lend support to the recent molecular model [1] and are simulations demonstrating the role of chain networking. This work is supported, in part, by an NSF Grant (DMR-EAGER-1444859).

  5. Modeling And Simulation Of Multimedia Communication Networks

    Science.gov (United States)

    Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.

    1989-05-01

    In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.

  6. Slip reactivation during the 2011 Tohoku earthquake: Dynamic rupture and ground motion simulations

    Science.gov (United States)

    Galvez, P.; Dalguer, L. A.

    2013-12-01

    The 2011 Mw 9 Tohoku earthquake generated such a vast amount of geophysical data that the spatial-temporal evolution of the rupture process of a megathrust event can be studied with unprecedented resolution. Joint source inversions of teleseismic, near-source strong motion and coseismic geodetic data, e.g. Lee et al. (2011), reveal evidence of a slip reactivation process in areas of very large slip. Slip snapshots of this source model show that after about 40 seconds the big patch above the hypocenter experienced an additional push of slip (reactivation) towards the trench. These two episodes of slip exhibited by the source inversions can create two well-distinguished waveform envelopes in the ground motion pattern. Indeed, seismograms of the Japanese KiK-net network contain this pattern; for instance, a seismic station near Miyagi (MYGH10) shows two main wavefronts separated by 40 seconds. A possible physical mechanism to explain the slip reactivation is a thermal pressurization process occurring in the fault zone. Kanamori & Heaton (2000) proposed that for large earthquakes frictional melting and fluid pressurization can play a key role in the rupture dynamics of giant events. If fluid exists in a fault zone, an increase in temperature can raise the pore pressure enough to significantly reduce the frictional strength. Therefore, during a large earthquake, areas of large slip undergoing strong thermal pressurization may experience a second drop of the frictional strength after reaching a certain value of slip. Following this principle, we adopt a slip-weakening friction law and prescribe a certain maximum slip after which the friction coefficient linearly drops down again. This friction law has been implemented in the latest unstructured spectral element code SPECFEM3D, Peter et al. (2012). The non-planar subduction interface has been taken into account, and a big asperity patch is placed on it inside
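    The modified friction law described above, a slip-weakening drop followed by a second drop once slip passes a prescribed threshold, can be sketched as a simple function of slip; all parameter values here are illustrative assumptions:

```python
# Two-stage slip-weakening friction: a linear drop from static (mu_s)
# to dynamic (mu_d) friction over the distance dc, then, once slip
# exceeds the reactivation threshold d_r, a second linear drop to a
# lower level mu_d2 over dc2, mimicking thermal pressurization.
# All parameter values are illustrative assumptions.
def friction(slip, mu_s=0.6, mu_d=0.3, dc=0.5, mu_d2=0.1, d_r=20.0, dc2=5.0):
    if slip < dc:                        # first strength drop
        return mu_s - (mu_s - mu_d) * slip / dc
    if slip < d_r:                       # sliding at the dynamic level
        return mu_d
    if slip < d_r + dc2:                 # second drop after threshold slip
        return mu_d - (mu_d - mu_d2) * (slip - d_r) / dc2
    return mu_d2                         # fully weakened

print(friction(0.0), friction(1.0), friction(22.5), friction(30.0))
```

In a dynamic rupture code the second drop re-energizes patches that have already slipped past `d_r`, which is what produces the delayed second wavefront discussed above.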

  7. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  8. Event-based simulation of networks with pulse delayed coupling

    Science.gov (United States)

    Klinshov, Vladimir; Nekorkin, Vladimir

    2017-10-01

    Pulse-mediated interactions are common in networks of different nature. Here we develop a general framework for simulation of networks with pulse delayed coupling. We introduce the discrete map governing the dynamics of such networks and describe the computation algorithm for its numerical simulation.
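    The event-based idea, jumping from one firing or pulse-arrival event to the next instead of stepping time on a grid, can be sketched for pulse-coupled oscillators with delayed all-to-all coupling. Topology, coupling strength and delay below are illustrative assumptions, not the discrete map of the paper:

```python
import heapq

# Event-driven sketch of a network with pulse delayed coupling: each
# oscillator's phase grows linearly in time; at phase 1 it fires and,
# after a propagation delay, bumps its neighbours' phases by eps.
# Time advances from event to event via a priority queue.
def simulate(n=5, eps=0.1, delay=0.3, t_end=20.0):
    phase = [i / n for i in range(n)]       # staggered initial phases
    last = [0.0] * n                        # time each phase was last updated
    events = [(1.0 - phase[i], "fire", i) for i in range(n)]
    heapq.heapify(events)
    spikes = 0
    while events:
        t, kind, i = heapq.heappop(events)
        if t > t_end:
            break
        phase[i] += t - last[i]; last[i] = t     # advance phase to now
        if kind == "fire":
            if phase[i] < 1.0 - 1e-9:            # stale event: unit fired early
                continue
            phase[i] = 0.0
            spikes += 1
            for j in range(n):                   # all-to-all delayed pulses
                if j != i:
                    heapq.heappush(events, (t + delay, "pulse", j))
            heapq.heappush(events, (t + 1.0, "fire", i))
        else:                                    # pulse arrival
            phase[i] = min(1.0, phase[i] + eps)
            if phase[i] >= 1.0:                  # pushed over threshold
                heapq.heappush(events, (t, "fire", i))
    return spikes
```

Because nothing happens between events, the cost scales with the number of pulses rather than with a time-step resolution, which is the efficiency argument behind event-based simulation.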

  9. Websim3d: A Web-based System for Generation, Storage and Dissemination of Earthquake Ground Motion Simulations.

    Science.gov (United States)

    Olsen, K. B.

    2003-12-01

    Synthetic time histories from large-scale 3D ground motion simulations generally constitute large 'data' sets which typically require hundreds of Mbytes or Gbytes of storage capacity. For the same reason, getting access to a researcher's simulation output, for example for an earthquake engineer to perform site analysis, or a seismologist to perform seismic hazard analysis, can be a tedious procedure. To circumvent this problem we have developed a web-based ``community model'' (websim3d) for the generation, storage, and dissemination of ground motion simulation results. Websim3d allows user-friendly and fast access to view and download such simulation results for an earthquake-prone area. The user selects an earthquake scenario from a map of the region, which brings up a map of the area where simulation data are available. By clicking on an arbitrary site location, synthetic seismograms and/or soil parameters for the site can be displayed at fixed or variable scaling and/or downloaded. Websim3d relies on PHP scripts for the dynamic plots of synthetic seismograms and soil profiles. Although the system is not limited to a specific area, we illustrate the community model with simulation results from the Los Angeles basin, Wellington (New Zealand), and Mexico.

  10. Use of Ground Motion Simulations of a Historical Earthquake for the Assessment of Past and Future Urban Risks

    Science.gov (United States)

    Kentel, E.; Çelik, A.; Karimzadeh Naghshineh, S.; Askan, A.

    2017-12-01

    Erzincan city, located in the eastern part of Turkey at the junction of three active faults, is one of the most hazardous regions in the world. In addition to several historical events, this city experienced one of the largest earthquakes of the last century: the 27 December 1939 (Ms=8.0) event. With the limited knowledge of the tectonic structure available at the time, the city center was relocated to the north after the 1939 earthquake by almost 5 km, in fact closer to the existing major strike-slip fault. This decision, coupled with poor construction technologies, led to severe damage during a later event that occurred on 13 March 1992 (Mw=6.6). The 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available, whereas the 1992 event was recorded by only 3 nearby stations. There are empirical isoseismal maps from both events indicating indirectly the spatial distribution of the damage. In this study, we focus on this region and present a multidisciplinary approach to discuss the different components of uncertainty involved in the assessment and mitigation of seismic risk in urban areas. As an initial attempt, ground motion simulation of the 1939 event is performed to obtain the anticipated ground motions and shaking intensities. Using these quantified results along with the spatial distribution of the observed damage, the relocation decision is assessed and suggestions are provided for future large earthquakes to minimize potential earthquake risks.

  11. The design of a network emulation and simulation laboratory

    CSIR Research Space (South Africa)

    Von Solms, S

    2015-07-01

    The development of the Network Emulation and Simulation Laboratory is motivated by the drive to contribute to the enhancement of the security and resilience of South Africa's critical information infrastructure. The goal of the Network Emulation...

  12. Far-field tsunami of 2017 Mw 8.1 Tehuantepec, Mexico earthquake recorded by Chilean tide gauge network: Implications for tsunami warning systems

    Science.gov (United States)

    González-Carrasco, J. F.; Benavente, R. F.; Zelaya, C.; Núñez, C.; Gonzalez, G.

    2017-12-01

    The 2017 Mw 8.1 Tehuantepec earthquake generated a moderate tsunami, which was registered by the near-field tide gauge network, activating a tsunami threat state for Mexico issued by the PTWC. In the case of Chile, the forecast of tsunami waves indicated amplitudes of less than 0.3 m above tide level, advising an informative state of threat without activation of evacuation procedures. Nevertheless, during sea level monitoring of the network we detected wave amplitudes (> 0.3 m) indicating a possible change of threat state. Finally, the NTWS maintained the informative level of threat based on mathematical filtering analysis of the sea level records. After the 2010 Mw 8.8 Maule earthquake, the Chilean National Tsunami Warning System (NTWS) has increased its observational capabilities to improve early response. The most important operational efforts have focused on strengthening the tide gauge network for the national area of responsibility. Furthermore, technological initiatives such as the Integrated Tsunami Prediction and Warning System (SIPAT) have segmented the area of responsibility into blocks in order to focus early warning and evacuation procedures on the most affected coastal areas, while maintaining an informative state for areas distant from a near-field earthquake. For far-field events, the NTWS follows the recommendations proposed by the Pacific Tsunami Warning Center (PTWC), including comprehensive monitoring of sea level records from tide gauges and DART (Deep-ocean Assessment and Reporting of Tsunamis) buoys, to evaluate the state of tsunami threat in the area of responsibility. The main objective of this work is to analyze the first-order physical processes involved in the far-field propagation and coastal impact of the tsunami, including implications for decision-making by the NTWS. To explore our main question, we construct a finite-fault model of the 2017 Mw 8.1 Tehuantepec earthquake. We employ the rupture model to simulate a transoceanic tsunami modeled with Neowave2D. We generate synthetic time series at

  13. Programmable multi-node quantum network design and simulation

    Science.gov (United States)

    Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.

    2016-05-01

    Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.
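    The superdense-coding application used in the demonstration can be sketched at the state-vector level, leaving out the OpenFlow control plane entirely; this minimal simulation with standard gate matrices is an illustration, not the authors' framework:

```python
import numpy as np

# State-vector sketch of superdense coding: Alice encodes two classical
# bits on her half of a shared Bell pair with I, X, Z or ZX; Bob decodes
# both bits with a Bell-basis measurement (CNOT then Hadamard).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# shared Bell pair (|00> + |11>)/sqrt(2)
bell = CNOT @ np.kron(H, I) @ np.array([1, 0, 0, 0], dtype=complex)

def send(b1, b2):
    op = np.eye(2, dtype=complex)
    if b2: op = X @ op                     # second bit -> X
    if b1: op = Z @ op                     # first bit  -> Z
    state = np.kron(op, I) @ bell          # Alice encodes her qubit
    state = np.kron(H, I) @ CNOT @ state   # Bob's Bell-basis measurement
    return divmod(int(np.argmax(np.abs(state) ** 2)), 2)

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert send(*bits) == bits             # both bits recovered exactly
print("all four 2-bit messages decoded")
```

In the paper's setting, the classical metadata telling Bob which measurement to run is exactly what the OpenFlow controller would carry alongside the quantum channel.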

  14. Flood Simulation Using WMS Model in Small Watershed after Strong Earthquake -A Case Study of Longxihe Watershed, Sichuan province, China

    Science.gov (United States)

    Guo, B.

    2017-12-01

    Mountain watersheds in Western China are prone to flash floods. The Wenchuan earthquake of May 12, 2008 destroyed much of the land surface and triggered frequent landslides and debris flows, which further exacerbated the flash flood hazards. Two giant torrent and debris flows occurred due to heavy rainfall after the earthquake, one on August 13, 2010 and the other on August 18, 2010. Flash flood reduction and risk assessment are the key issues in post-disaster reconstruction, and hydrological prediction models are important and cost-efficient mitigation tools that are widely applied. In this paper, hydrological observations and simulation using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China. The hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate the runoff process of a small mountainous watershed well. This methodology can be used in other earthquake-affected areas for risk assessment and to predict the magnitude of flash floods. Key Words: Rainfall-runoff modeling. Remote Sensing. Earthquake. WMS.

  15. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    Science.gov (United States)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s of the occurrence of an earthquake. The monitoring area covers the entire island of Taiwan and the offshore region from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented in the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is practical and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (from January 2012 to the present) at the Institute of Earth Sciences (IES), Academia Sinica
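    The grid search idea can be illustrated with a toy hypocentre search: trial sources on a 3-D grid, a constant-velocity travel-time model and a least-squares misfit against arrival times. The real RMT system searches moment tensors against waveforms via a Green's function database; the station geometry and velocity below are assumptions:

```python
import itertools, math

# Toy grid search for a point source: for each grid node, estimate the
# best-fitting origin time and score the node by its squared residuals.
# Constant-velocity travel times and station positions are assumptions.
stations = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0),
            (0.0, 60.0, 0.0), (40.0, 40.0, 0.0)]   # x, y, z in km
vp = 6.0                                            # km/s

def travel_time(src, sta):
    return math.dist(src, sta) / vp

# synthetic "observed" arrivals from a known source and origin time
true_src, t0 = (20.0, 30.0, 10.0), 5.0
observed = [t0 + travel_time(true_src, s) for s in stations]

best, best_misfit = None, float("inf")
for x, y, z in itertools.product(range(0, 51, 10),
                                 range(0, 61, 10),
                                 range(0, 41, 10)):
    src = (float(x), float(y), float(z))
    tts = [travel_time(src, s) for s in stations]
    # the origin time that best fits this node is the mean residual
    t0_est = sum(o - t for o, t in zip(observed, tts)) / len(stations)
    misfit = sum((o - (t0_est + t)) ** 2 for o, t in zip(observed, tts))
    if misfit < best_misfit:
        best, best_misfit = (src, t0_est), misfit

print(best)  # grid node nearest the true source, and its origin time
```

RMT does the analogous search over a 0.1° x 10 km grid, but with full waveform fits per node, which is why precomputing the Green's function database is the key to its 117 s turnaround.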

  16. Long-period Ground Motion Simulation in the Osaka Basin during the 2011 Great Tohoku Earthquake

    Science.gov (United States)

    Iwata, T.; Kubo, H.; Asano, K.; Sato, K.; Aoi, S.

    2014-12-01

    Large-amplitude long-period ground motions (1-10 s) with long duration were observed in the Osaka sedimentary basin during the 2011 Tohoku earthquake (Mw 9.0) and its largest aftershock (Ibaraki-Oki, Mw 7.7), about 600 km away from the source regions. Sato et al. (2013) analyzed strong ground motion records from the source region to the Osaka basin and showed the following characteristics. (1) In the period range of 1 to 10 s, the horizontal ground motion at the site-specific period is amplified at Osaka basin sites. The predominant period is about 7 s in the bay area, where the largest pSv values were observed. (2) Velocity Fourier amplitude spectra with a predominant period of around 7 s are observed at the bedrock sites surrounding the Osaka basin. These characteristics were observed during both the mainshock and the largest aftershock. Therefore, the large long-period ground motions in the Osaka basin are generated by a combination of propagation-path and basin effects. Sato et al. simulated ground motions due to the largest aftershock with a simple point source model using a three-dimensional FDM (GMS; Aoi and Fujiwara, 1999). They used a three-dimensional velocity structure based on the Japan Integrated Velocity Structure Model (JIVSM; Koketsu et al., 2012), with a minimum effective period of 3 s. Their simulation reproduced the observed characteristics well, which validates the applicability of the JIVSM for long-period ground motion simulation. In this study, we simulate long-period ground motions during the mainshock. The source model used for the simulation is based on the SMGA model of Asano and Iwata (2012). We succeed in simulating long-period ground motion propagation from the Kanto area to the Osaka basin fairly well. Long-period ground motion simulations with several Osaka basin velocity structure models are performed to improve the model applicability. We used strong motion

  17. Integrated workflows for spiking neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Ján Antolík

    2013-12-01

    The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits and the employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers either to handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual

  18. GIS Based System for Post-Earthquake Crisis Managment Using Cellular Network

    Science.gov (United States)

    Raeesi, M.; Sadeghi-Niaraki, A.

    2013-09-01

    Earthquakes are among the most destructive natural disasters. They happen mainly near the edges of tectonic plates, but may happen just about anywhere, and they cannot be predicted. Quick response after a disaster such as an earthquake decreases the loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the locations of the destroyed areas. The search and rescue phase usually continues for many days, so reducing the time needed to find survivors is very important. A Geographical Information System (GIS) can be used to decrease response time and improve management in critical situations, and position estimation within a short period of time is essential. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and specifying the critical points, the points are sent to a central site for managing the quick-response procedure. This solution establishes a quick way to manage the post-earthquake crisis.
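    Of the positioning methods listed, time of arrival is the easiest to sketch: with three base stations, subtracting one range equation from another linearizes the problem, so the handset position solves a 2x2 linear system. Coordinates and signal speed below are illustrative, and real measurements would add clock error and noise:

```python
import math

# Toy time-of-arrival (TOA) positioning with three base stations.
# From (x-xi)^2 + (y-yi)^2 = ri^2, subtracting the station-0 equation
# from the others cancels the quadratic terms:
#   2(xi-x0)x + 2(yi-y0)y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2
c = 3e8                                     # signal speed (m/s)
bs = [(0.0, 0.0), (3000.0, 0.0), (0.0, 4000.0)]
target = (1200.0, 900.0)                    # "unknown" handset position
toa = [math.dist(target, b) / c for b in bs]
r = [t * c for t in toa]                    # measured ranges

a11 = 2 * (bs[1][0] - bs[0][0]); a12 = 2 * (bs[1][1] - bs[0][1])
a21 = 2 * (bs[2][0] - bs[0][0]); a22 = 2 * (bs[2][1] - bs[0][1])
b1 = r[0]**2 - r[1]**2 + bs[1][0]**2 - bs[0][0]**2 + bs[1][1]**2 - bs[0][1]**2
b2 = r[0]**2 - r[2]**2 + bs[2][0]**2 - bs[0][0]**2 + bs[2][1]**2 - bs[0][1]**2

det = a11 * a22 - a12 * a21                 # Cramer's rule for the 2x2 system
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det
print(round(x, 1), round(y, 1))             # recovers the handset position
```

The time difference of arrival method follows the same algebra but with range differences, removing the need for a synchronized handset clock.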

  19. GIS BASED SYSTEM FOR POST-EARTHQUAKE CRISIS MANAGMENT USING CELLULAR NETWORK

    Directory of Open Access Journals (Sweden)

    M. Raeesi

    2013-09-01

    Earthquakes are among the most destructive natural disasters. They happen mainly near the edges of tectonic plates, but may happen just about anywhere, and they cannot be predicted. Quick response after a disaster such as an earthquake decreases the loss of life and costs. Massive earthquakes often cause structures to collapse, trapping victims under dense rubble for long periods of time. After an earthquake has destroyed some areas, several teams are sent to find the locations of the destroyed areas. The search and rescue phase usually continues for many days, so reducing the time needed to find survivors is very important. A Geographical Information System (GIS) can be used to decrease response time and improve management in critical situations, and position estimation within a short period of time is essential. This paper proposes a GIS-based system for post-earthquake disaster management. The system relies on several mobile positioning methods, such as the cell-ID and TA method, the signal strength method, the angle of arrival method, the time of arrival method and the time difference of arrival method. For quick positioning, the system can be helped by any person who has a mobile device. After positioning and specifying the critical points, the points are sent to a central site for managing the quick-response procedure. This solution establishes a quick way to manage the post-earthquake crisis.

  20. Characterization of Background Traffic in Hybrid Network Simulation

    National Research Council Canada - National Science Library

    Lauwens, Ben; Scheers, Bart; Van de Capelle, Antoine

    2006-01-01

    .... Two approaches are common: discrete event simulation and fluid approximation. A discrete event simulation generates a huge amount of events for a full-blown battlefield communication network resulting in a very long runtime...

  1. S-net : Construction of large scale seafloor observatory network for tsunamis and earthquakes along the Japan Trench

    Science.gov (United States)

    Mochizuki, M.; Uehira, K.; Kanazawa, T.; Shiomi, K.; Kunugi, T.; Aoi, S.; Matsumoto, T.; Sekiguchi, S.; Yamamoto, N.; Takahashi, N.; Nakamura, T.; Shinohara, M.; Yamada, T.

    2017-12-01

    NIED launched the project of constructing a seafloor observatory network for tsunamis and earthquakes after the occurrence of the 2011 Tohoku Earthquake, to enhance the reliability of early warnings of tsunamis and earthquakes. The observatory network was named "S-net". The S-net project has been financially supported by MEXT. The S-net consists of 150 seafloor observatories connected in line by submarine optical cables with a total length of about 5,500 km. The network covers the focal region of the 2011 Tohoku Earthquake and its vicinity. Each observatory is equipped with two high-sensitivity pressure gauges serving as tsunami meters and four sets of three-component seismometers. The S-net is composed of six segment networks. Five of the six segments had already been installed, and installation of the last segment, covering the outer rise area, was finished by the end of FY2016. The outer rise segment has features unlike the other five segments: deep water and long cable distance. Most of its 25 observatories are located at depths greater than 6,000 m WD. In particular, three observatories sit on seafloor deeper than about 7,000 m WD and are therefore equipped with pressure gauges usable down to 8,000 m WD. The total length of the submarine cables of the outer rise segment is about twice that of the other segments. A longer cable system requires a higher supply voltage, so the observatories on the outer rise segment have high withstand-voltage characteristics.
For the outer rise segment cable, we employ a low-loss dispersion-managed line formed by combining multiple optical fibers, in order to achieve long-distance, high-speed, large-capacity data transmission. Installation of the outer rise segment has been finished, and full-scale operation of the S-net has started.

  2. BioNessie - a grid enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X.; Jiang, J.; Ajayi, O.; Gu, X.; Gilbert, D.; Sinnott, R.O.

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scale simulations.

  3. 3-D Spontaneous Rupture Simulations of the 2016 Kumamoto, Japan, Earthquake

    Science.gov (United States)

    Urata, Yumi; Yoshida, Keisuke; Fukuyama, Eiichi

    2017-04-01

    We investigated the M7.3 Kumamoto, Japan, earthquake to illuminate why and how the rupture of the main shock propagated successfully, using 3-D dynamic rupture simulations and assuming a complicated fault geometry estimated from the distributions of aftershocks. The M7.3 main shock occurred along the Futagawa and Hinagu faults. A few days before, three M6-class foreshocks occurred. Their hypocenters were located along the Hinagu and Futagawa faults and their focal mechanisms were similar to those of the main shock; therefore, an extensive stress shadow could have been generated on the fault plane of the main shock. First, we estimated the geometry of the fault planes of the three foreshocks as well as that of the main shock based on the temporal evolution of relocated aftershock hypocenters. Then, we evaluated static stress changes on the main shock fault plane due to the occurrence of the three foreshocks, assuming elliptical cracks with constant stress drops on the estimated fault planes. The obtained static stress change distribution indicated that the hypocenter of the main shock lies in a region of positive Coulomb failure stress change (ΔCFS), while ΔCFS in the shallow region above the hypocenter was negative. Therefore, these foreshocks could encourage the initiation of the main shock rupture and could hinder the rupture propagating toward the shallow region. Finally, we conducted 3-D dynamic rupture simulations of the main shock using the initial stress distribution, which was the sum of the static stress changes caused by these foreshocks and the regional stress field. Assuming a slip-weakening law with uniform friction parameters, we conducted 3-D dynamic rupture simulations by varying the friction parameters and the values of the principal stresses. We obtained feasible parameter ranges that reproduce the rupture propagation of the main shock consistent with that revealed by seismic waveform analyses.
We also demonstrated that the free surface encouraged

  4. The numerical simulation study of the dynamic evolutionary processes in an earthquake cycle on the Longmen Shan Fault

    Science.gov (United States)

    Tao, Wei; Shen, Zheng-Kang; Zhang, Yong

    2016-04-01

    concentration areas in the model: one is located in the mid and upper crust on the hanging wall, where the strain energy could be released by permanent deformation like folding, and the other lies in the deep part of the fault, where the strain energy could be released by earthquakes. (5) The whole earthquake dynamic process is clearly reflected in the evolution of the strain energy increments over the stages of the earthquake cycle. In the interseismic period, the strain energy accumulates relatively slowly; prior to the earthquake, the fault is locked, the strain energy accumulates fast, and some of it is released in the upper crust on the hanging wall of the fault. In the coseismic stage, the strain energy is released rapidly along the fault. In the postseismic stage, the process returns to the slow interseismic accumulation rate within around one hundred years. The simulation study in this thesis should help better understand the earthquake dynamic process.

  5. Visualization of strong ground motion calculated from the numerical simulation of the Hyogo-ken Nanbu earthquake; Suchi simulation de miru Hyogoken nanbu jishin no kyoshindo

    Energy Technology Data Exchange (ETDEWEB)

    Furumura, T [Hokkaido Univ. of Education, Sapporo (Japan); Koketsu, K [The University of Tokyo, Tokyo (Japan). Earthquake Research Institute

    1996-10-01

    The 1995 Hyogo-ken Nanbu earthquake, with its focus beneath the Akashi Strait, caused enormous damage in and around Awaji Island and Kobe City. It is clear that the basement structure, which deepens steeply at Kobe City from the Rokko Mountains toward the coast, and the focus beneath it are closely related to the local generation of strong ground motion. The generation process of the strong ground motion was discussed using 2-D and 3-D numerical simulation methods. The 3-D pseudospectral method was used for the calculation. A volume of 51.2 km × 25.6 km × 25.6 km was selected and discretized with a lattice interval of 200 m. Consequently, it was found that the steeply deepening basement structure, the soft and weak geological deposits lying thickly on the basement, and the earthquake faults running under the boundary between base rock and sediments contributed greatly to the generation of strong ground motion. Numerical simulation can be expected to predict the strong ground motion caused by shallow earthquakes. 9 refs., 7 figs.
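    The core operation of the pseudospectral scheme named in the abstract, computing spatial derivatives by multiplying by ik in the wavenumber domain, can be sketched in one dimension. The grid size and leapfrog time stepping below are illustrative stand-ins, not the paper's 3-D setup.

```python
import numpy as np

def spectral_deriv(f, dx):
    """Differentiate a periodic signal via the FFT (pseudospectral derivative)."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def step_wave(u_prev, u, c, dx, dt):
    """One leapfrog step of the 1-D wave equation u_tt = c^2 u_xx."""
    u_xx = spectral_deriv(spectral_deriv(u, dx), dx)
    return 2.0 * u - u_prev + (c * dt) ** 2 * u_xx
```

    For smooth periodic fields the spectral derivative is accurate to machine precision, which is why the method allows the comparatively coarse 200 m lattice.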

  6. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap smaller than 180° and distance to the first station not larger than 15 km). Errors in velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic experiments, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors in velocity models and in arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
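    The Equal Differential Time idea behind NonLinLoc can be sketched as a grid search: the misfit is built from arrival-time differences between station pairs, so the unknown origin time cancels out. The station coordinates (km) and the uniform 6 km/s velocity below are invented; the real method uses 3-D travel-time grids and a probabilistic posterior rather than a single best point.

```python
import itertools
import math

STATIONS = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (30.0, 30.0), (15.0, 40.0)]
V = 6.0  # uniform P velocity, km/s

def travel_time(src, sta):
    return math.dist(src, sta) / V

def edt_misfit(src, obs_times):
    """Sum over station pairs of squared differences between observed and
    predicted differential times; independent of origin time."""
    m = 0.0
    for i, j in itertools.combinations(range(len(STATIONS)), 2):
        d_obs = obs_times[i] - obs_times[j]
        d_cal = travel_time(src, STATIONS[i]) - travel_time(src, STATIONS[j])
        m += (d_obs - d_cal) ** 2
    return m

def grid_locate(obs_times, step=0.5, size=40.0):
    pts = [i * step for i in range(int(size / step) + 1)]
    return min(((edt_misfit((x, y), obs_times), (x, y)) for x in pts for y in pts),
               key=lambda t: t[0])[1]
```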

  7. Detection of Repeating Earthquakes within the Cascadia Subduction Zone Using 2013-2014 Cascadia Initiative Amphibious Network Data

    Science.gov (United States)

    Kenefic, L.; Morton, E.; Bilek, S.

    2017-12-01

    It is well known that subduction zones create the largest earthquakes in the world, like the magnitude 9.5 Chile earthquake in 1960 or the more recent magnitude 9.1 Japan earthquake in 2011, both of which are among the five largest earthquakes ever recorded. However, off the coast of the Pacific Northwest region of the U.S., the Cascadia subduction zone (CSZ) remains relatively quiet, and modern seismic instruments have not recorded earthquakes of this size in the CSZ. The last great earthquake, of magnitude 8.7-9.2, occurred in 1700 and is constrained by written reports of the resultant tsunami in Japan and by dating of a drowned forest in the U.S. Previous studies have suggested the margin is most likely segmented along-strike; however, variations in frictional conditions within the CSZ fault zone are not well known. Geodetic modeling indicates that the locked seismogenic zone is likely completely offshore, which may be too far from land seismometers to adequately detect related seismicity. Ocean bottom seismometers, as part of the Cascadia Initiative Amphibious Network, were installed directly above the inferred seismogenic zone, and we use them to better detect small interplate seismicity. Using the subspace detection method, this study seeks to find new seismogenic zone earthquakes. The subspace detection method scans continuous seismic data using multiple previously known event templates concurrently. Template events that make up the subspace are chosen from events in existing catalogs that likely occurred along the plate interface. Corresponding waveforms are windowed on the nearby Cascadia Initiative ocean bottom seismometers and coastal land seismometers for scanning. Detections found by the scan resemble the template waveforms according to a predefined threshold and are then visually examined to determine whether an event is present. The presence of repeating event clusters can indicate persistent seismic patches, likely corresponding to
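    The building block that the subspace method generalizes is single-template matched filtering: slide a known waveform over continuous data and flag windows with high normalized correlation. A true subspace detector projects each window onto several orthogonal template vectors instead of one. The template and data below are synthetic.

```python
import math

def norm_xcorr(template, data):
    """Sliding normalized cross-correlation; values near 1 flag windows
    that closely resemble the template (scale- and offset-invariant)."""
    n = len(template)
    tmean = sum(template) / n
    t0 = [v - tmean for v in template]
    tnorm = math.sqrt(sum(v * v for v in t0)) or 1e-12
    cc = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        wmean = sum(w) / n
        w0 = [v - wmean for v in w]
        wnorm = math.sqrt(sum(v * v for v in w0)) or 1e-12
        cc.append(sum(a * b for a, b in zip(t0, w0)) / (tnorm * wnorm))
    return cc

def detect(template, data, threshold=0.9):
    """Indices of windows whose correlation exceeds the detection threshold."""
    return [i for i, c in enumerate(norm_xcorr(template, data)) if c >= threshold]
```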

  8. Ground-Motion Simulations of Scenario Earthquakes on the Hayward Fault

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Graves, R; Larsen, S; Ma, S; Rodgers, A; Ponce, D; Schwartz, D; Simpson, R; Graymer, R

    2009-03-09

    We compute ground motions in the San Francisco Bay area for 35 Mw 6.7-7.2 scenario earthquake ruptures involving the Hayward fault. The modeled scenarios vary in rupture length, hypocenter, slip distribution, rupture speed, and rise time. This collaborative effort involves five modeling groups, using different wave propagation codes and domains of various sizes and resolutions, computing long-period (T > 1-2 s) or broadband (T > 0.1 s) synthetic ground motions for overlapping subsets of the suite of scenarios. The simulations incorporate 3-D geologic structure and illustrate the dramatic increase in intensity of shaking for Mw 7.05 ruptures of the entire Hayward fault compared with Mw 6.76 ruptures of the southern two-thirds of the fault. The area subjected to shaking stronger than MMI VII increases from about 10% of the San Francisco Bay urban area in the Mw 6.76 events to more than 40% of the urban area for the Mw 7.05 events. Similarly, combined rupture of the Hayward and Rodgers Creek faults in a Mw 7.2 event extends shaking stronger than MMI VII to nearly 50% of the urban area. For a given rupture length, the synthetic ground motions exhibit the greatest sensitivity to the slip distribution and location inside or near the edge of sedimentary basins. The hypocenter also exerts a strong influence on the amplitude of the shaking due to rupture directivity. The synthetic waveforms exhibit a weaker sensitivity to the rupture speed and are relatively insensitive to the rise time. The ground motions from the simulations are generally consistent with Next Generation Attenuation ground-motion prediction models but contain long-period effects, such as rupture directivity and amplification in shallow sedimentary basins that are not fully captured by the ground-motion prediction models.

  9. The design and implementation of a network simulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2013-11-01

    Full Text Available these events and their effects can enable researchers to identify these threats and find ways to counter them. In this paper we present the design of a network simulation platform which can enable researchers to study dynamic behaviour of networks, network...

  10. Earthquake Monitoring with the MyShake Global Smartphone Seismic Network

    Science.gov (United States)

    Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.

    2017-12-01

    Smartphone arrays have the potential to significantly improve seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from the communication and computational capabilities built into smartphones, which facilitate big seismic data transfer and analysis. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We find that the probability of detecting an M = 3 event with a single phone is limited, while events located with 20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km2 are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
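    The paper establishes detection probabilities empirically. As a purely illustrative stand-in, a thinned-Poisson model shows how the chance of gathering at least k triggering phones grows with user density; every parameter value below is invented, not taken from the study.

```python
import math

def detection_probability(density, area_km2, p_trigger, k=4):
    """P(at least k phones trigger), with phone triggers modeled as a
    Poisson count of rate density * area * p_trigger."""
    lam = density * area_km2 * p_trigger  # expected number of triggering phones
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf
```

    The steep dependence on density is the point: doubling the user base raises the Poisson rate linearly but the detection probability far faster near the threshold.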

  11. Accelerator and feedback control simulation using neural networks

    International Nuclear Information System (INIS)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

    Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the "Accelerator" network is first trained to simulate the dynamics of the beam for a given beam line. This "Accelerator" network is then used to train a second "Controller" network which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam, keeping it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs
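    The two-network idea, learning a forward model of the beam and then inverting it for control, can be caricatured with a linear stand-in. The real work uses neural networks and actual beam-line dynamics; the model form, learning rate and sample values here are invented.

```python
def fit_accelerator(samples, lr=0.01, epochs=2000):
    """Fit the forward 'Accelerator' model position = a * corrector + b
    by stochastic gradient descent on (corrector, position) pairs."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (a * x + b) - y
            a -= lr * err * x
            b -= lr * err
    return a, b

def controller(a, b, target):
    """'Controller' step: invert the learned model to pick the corrector
    setting predicted to put the beam on target."""
    return (target - b) / a
```

    With a neural forward model the inversion is done by backpropagating the trajectory error through the frozen "Accelerator" network into the "Controller" weights, but the division above plays the same role for this linear toy.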

  12. Simulation and Evaluation of Ethernet Passive Optical Network

    Directory of Open Access Journals (Sweden)

    Salah A. Jaro Alabady

    2013-05-01

    Full Text Available This paper studies the simulation and evaluation of an Ethernet Passive Optical Network (EPON) system (IEEE 802.3ah) based on the OPTISM 3.6 simulation program. The simulation program is used to build a typical Ethernet passive optical network and to evaluate the network performance when using the (1580, 1625) nm wavelengths instead of the (1310, 1490) nm wavelengths used in the Optical Line Terminal (OLT) and Optical Network Units (ONUs) of the EPON system architecture, at different bit rates and different fiber lengths. The results showed enhanced network performance: an increased number of nodes (subscribers) connected to the network, increased transmission distance, reduced received power and reduced Bit Error Rate (BER).
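    As a rough companion to the wavelength comparison above, a first-order link budget shows how received power falls with fiber length and attenuation. The attenuation coefficients below are typical textbook values per wavelength window, not those of the paper's OPTISM model, and splitter loss is left as a free parameter.

```python
# Typical single-mode fiber attenuation (dB/km) by wavelength window;
# illustrative values only.
ATTEN_DB_PER_KM = {1310: 0.35, 1490: 0.24, 1580: 0.22, 1625: 0.25}

def received_dbm(tx_dbm, length_km, wavelength_nm, splitter_loss_db=0.0):
    """First-order PON link budget: power falls linearly in dB with length."""
    return tx_dbm - ATTEN_DB_PER_KM[wavelength_nm] * length_km - splitter_loss_db
```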

  13. Network simulation of nonstationary ionic transport through liquid junctions

    International Nuclear Information System (INIS)

    Castilla, J.; Horno, J.

    1993-01-01

    Nonstationary ionic transport across liquid junctions has been studied using Network Thermodynamics. A network model for the time-dependent Nernst-Planck-Poisson system of equations is proposed. With this network model and the electrical circuit simulation program PSPICE, the concentrations, charge density, and electrical potentials at short times have been simulated for the binary system NaCl/NaCl. (Author) 13 refs
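    A network model of transport is, in essence, a spatial discretization whose node equations a circuit solver such as PSPICE can integrate. The sketch below shows only the diffusion part of such a discretization (electromigration and the Poisson coupling are omitted for brevity, and all parameter values are invented):

```python
def diffuse_step(c, D, dx, dt):
    """One forward-Euler step of dc/dt = D * d2c/dx2 on a 1-D grid,
    with reflective (no-flux) ends; stable for D*dt/dx^2 <= 0.5."""
    n = len(c)
    out = c[:]
    for i in range(1, n - 1):
        out[i] = c[i] + D * dt / dx ** 2 * (c[i + 1] - 2.0 * c[i] + c[i - 1])
    out[0], out[-1] = out[1], out[-2]  # reflective boundaries
    return out
```

    In the network picture each grid cell is a capacitor (storage) and each interface a resistor (diffusive conductance), which is exactly what the circuit simulator solves.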

  14. Estimation of slip scenarios of mega-thrust earthquakes and strong motion simulations for Central Andes, Peru

    Science.gov (United States)

    Pulido, N.; Tavera, H.; Aguilar, Z.; Chlieh, M.; Calderon, D.; Sekiguchi, T.; Nakai, S.; Yamazaki, F.

    2012-12-01

    We have developed a methodology for the estimation of slip scenarios for megathrust earthquakes based on a model of interseismic coupling (ISC) distribution in subduction margins obtained from geodetic data, as well as on information about the recurrence of historical earthquakes. This geodetic slip model (GSM) delineates the long-wavelength asperities within the megathrust. For the simulation of strong ground motion it becomes necessary to introduce short-wavelength heterogeneities into the source slip in order to simulate high-frequency ground motions efficiently. To achieve this we elaborate "broadband" source models constructed by combining the GSM with several short-wavelength slip distributions obtained from a von Karman PSD function with random phases. Our application of the method to the Central Andes in Peru shows that this region presently has the potential of generating an earthquake with moment magnitude 8.9, with a peak slip of 17 m and a source area of approximately 500 km along strike and 165 km along dip. For the strong motion simulations we constructed 12 broadband slip models and considered 9 possible hypocenter locations for each model. We performed strong motion simulations for the whole Central Andes region (Peru), spanning an area from the Nazca Ridge (16°S) to the Mendana fracture (9°S). For this purpose we use the hybrid strong motion simulation method of Pulido et al. (2004), improved to handle a general slip distribution. Our simulated PGA and PGV distributions indicate that a region of at least 500 km along the coast of the Central Andes is subjected to an MMI intensity of approximately 8 for the slip model that yielded the largest ground motions among the 12 slip models considered, averaged over all assumed hypocenter locations. This result is in agreement with the macroseismic intensity distribution estimated for the great 1746 earthquake (M~9) in the Central Andes (Dorbath et al. 1990). Our results indicate that the simulated PGA and PGV for
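    The "broadband" source construction described above can be sketched in one dimension: a smooth (geodetic) slip model is modulated by a random-phase perturbation whose amplitude spectrum follows a von Karman form. The correlation length, Hurst exponent and modulation weight below are invented illustration values, and the paper works in two dimensions over the fault plane.

```python
import numpy as np

def von_karman_perturbation(n, dx, a=20.0, H=0.75, seed=0):
    """Zero-mean random field with a von Karman amplitude spectrum
    ~ (1 + (k*a)^2)^(-(H + 1/2)/2) and uniformly random phases."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
    amp = (1.0 + (k * a) ** 2) ** (-(H + 0.5) / 2.0)
    spec = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, k.size))
    spec[0] = 0.0              # enforce zero mean
    spec[-1] = spec[-1].real   # keep the Nyquist bin real-valued
    pert = np.fft.irfft(spec, n=n)
    return pert / np.max(np.abs(pert))  # normalize to unit peak

def broadband_slip(smooth, dx, weight=0.3, **kw):
    """Smooth slip times (1 + weight * heterogeneity); positive for weight < 1."""
    return smooth * (1.0 + weight * von_karman_perturbation(smooth.size, dx, **kw))
```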

  15. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo; Artina, Marco; Foransier, Massimo; Markowich, Peter A.

    2015-01-01

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation

  16. Toward Designing a Quantum Key Distribution Network Simulation Model

    OpenAIRE

    Miralem Mehic; Peppino Fazio; Miroslav Voznak; Erik Chromy

    2016-01-01

    As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we describe the design of a simplified simulation environment for a quantum key distribution network with multiple links and nodes. In such a simulation environment, we analyzed several ...

  17. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    Science.gov (United States)

    Yu, E.; Bhaskaran, A.; Chen, S.; Chowdhury, F. R.; Meisenhelter, S.; Hutton, K.; Given, D.; Hauksson, E.; Clayton, R. W.

    2010-12-01

    Currently the SCEDC archives continuous and triggered data from nearly 5000 data channels from 425 SCSN recorded stations, processing and archiving an average of 12,000 earthquakes each year. The SCEDC provides public access to these earthquake parametric and waveform data through its website www.data.scec.org and through client applications such as STP and DHI. This poster will describe the most significant developments at the SCEDC in the past year. Updated hardware: ● The SCEDC has more than doubled its waveform file storage capacity by migrating to 2 TB disks. New data holdings: ● Waveform data: Beginning Jan 1, 2010 the SCEDC began continuously archiving all high-sample-rate strong-motion channels. All seismic channels recorded by SCSN are now continuously archived and available at SCEDC. ● Portable data from the El Mayor-Cucapah 7.2 sequence: Seismic waveforms from portable stations installed by researchers (contributed by Elizabeth Cochran, Jamie Steidl, and Octavio Lazaro-Mancilla) have been added to the archive and are accessible through STP either as continuous data or associated with events in the SCEDC earthquake catalog. This additional data will help SCSN analysts and researchers improve event locations from the sequence. ● Real time GPS solutions from the El Mayor-Cucapah 7.2 event: Three-component 1 Hz seismograms of California Real Time Network (CRTN) GPS stations, from the April 4, 2010, magnitude 7.2 El Mayor-Cucapah earthquake are available in SAC format at the SCEDC. These time series were created by Brendan Crowell, Yehuda Bock, the project PI, and Mindy Squibb at SOPAC using data from the CRTN. The El Mayor-Cucapah earthquake demonstrated definitively the power of real-time high-rate GPS data: they measure dynamic displacements directly, they do not clip and they are also able to detect the permanent (coseismic) surface deformation. ● Triggered data from the Quake Catcher Network (QCN) and Community Seismic Network (CSN): The SCEDC in

  18. Discrimination Analysis of Earthquakes and Man-Made Events Using ARMA Coefficients Determination by Artificial Neural Networks

    International Nuclear Information System (INIS)

    AllamehZadeh, Mostafa

    2011-01-01

    A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of the seismic records. Second, we use the QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then fed to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0–6.5) recorded by the Iranian National Seismic Network (INSN). Fully correct (100%) discrimination was obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as several 3C stations can be combined.

  19. Discrimination Analysis of Earthquakes and Man-Made Events Using ARMA Coefficients Determination by Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    AllamehZadeh, Mostafa, E-mail: dibaparima@yahoo.com [International Institute of Earthquake Engineering and Seismology (Iran, Islamic Republic of)

    2011-12-15

    A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of the seismic records. Second, we use the QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then fed to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0-6.5) recorded by the Iranian National Seismic Network (INSN). Fully correct (100%) discrimination was obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as several 3C stations can be combined.
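    The feature-extraction step, fitting autoregressive coefficients to a windowed signal and using them as a compact signature, can be sketched for a plain AR(2) model. The paper's QNN machinery and full ARMA fit are not reproduced here, and the test signal below is synthetic.

```python
import random

def ar2_fit(x):
    """Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2].
    Solves the 2x2 normal equations directly."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t - 1] * x[t - 1]
        s12 += x[t - 1] * x[t - 2]
        s22 += x[t - 2] * x[t - 2]
        r1 += x[t] * x[t - 1]
        r2 += x[t] * x[t - 2]
    det = s11 * s22 - s12 * s12
    return ((s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

def synth_ar2(a1, a2, n, seed=0):
    """Generate a test signal from a known, stable AR(2) process."""
    rng = random.Random(seed)
    x = [0.0, 0.0]
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, 1.0))
    return x
```

    In the discrimination setting, the fitted coefficient vectors (one per windowed record) become the inputs to whatever classifier follows, a nearest-centroid rule in the simplest case.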

  20. Ground motion modeling of Hayward fault scenario earthquakes II:Simulation of long-period and broadband ground motions

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B T; Graves, R W; Rodgers, A; Brocher, T M; Simpson, R W; Dreger, D; Petersson, N A; Larsen, S C; Ma, S; Jachens, R C

    2009-11-04

    We simulate long-period (T > 1.0-2.0 s) and broadband (T > 0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7-7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area with about 50% of the urban area experiencing MMI VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland and 2007 Mw 4.5 Alum Rock earthquakes show that the USGS Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute at least some of this difference to the relatively narrow width of the Hayward fault ruptures. The simulations suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by including a dependence on the rupture speed and increasing the areal extent of rupture directivity with period. The simulations also indicate that the NGA relations may under-predict amplification in shallow sedimentary basins.

  1. BioNSi: A Discrete Biological Network Simulator Tool.

    Science.gov (United States)

    Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny

    2016-08-05

    Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct it, set its parameters, and observe network behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffices. The tool is aimed at experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi, where a complete user guide and a step-by-step manual can also be found.
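    Discrete-time network dynamics of this flavor can be caricatured with a synchronous update in which each node moves one discrete level per step toward the weighted sum of its inputs. This is a simplification of BioNSi's actual update rules, and the tiny two-node network in the test is invented.

```python
def step(levels, edges, max_level=9):
    """One synchronous update. edges are (src, dst, weight) triples;
    nodes with no incoming edges hold their level."""
    nxt = dict(levels)
    for node in levels:
        inputs = [(src, w) for src, dst, w in edges if dst == node]
        if not inputs:
            continue  # unregulated node: constant
        target = sum(w * levels[src] for src, w in inputs)
        if target > levels[node]:
            nxt[node] = min(levels[node] + 1, max_level)
        elif target < levels[node]:
            nxt[node] = max(levels[node] - 1, 0)
    return nxt

def simulate(levels, edges, steps):
    """Return the full trajectory, initial state included."""
    hist = [levels]
    for _ in range(steps):
        hist.append(step(hist[-1], edges))
    return hist
```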

  2. A dynamic model of liquid containers (tanks) with legs and probability analysis of response to simulated earthquake

    International Nuclear Information System (INIS)

    Fujita, Takafumi; Shimosaka, Haruo

    1980-01-01

    This paper describes the results of an analysis of the response of liquid containers (tanks) with legs to earthquakes. Sine-wave excitation was applied experimentally to model tanks with legs. A model with one degree of freedom proved sufficient for the analysis; to investigate why, the response magnification factor of the tank displacement was analysed. The model tanks were rectangular and cylindrical, and the analyses were performed using potential theory. The experimental studies showed that the attenuation characteristics of the oscillation were non-linear; this non-linear attenuation was also modeled, and good agreement between the experimental and analytical results was recognized. A probability analysis of the response to simulated earthquake excitation was performed using the above model, and good agreement between experiment and analysis was again obtained. (Kato, T.)
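    The single-degree-of-freedom model that the authors found sufficient can be sketched as a damped oscillator driven by base acceleration and integrated by central differences. The mass, stiffness, damping and time step below are invented illustration values, not the paper's.

```python
def sdof_response(base_accel, m=1.0, k=400.0, c=2.0, dt=0.005):
    """Solve m*u'' + c*u' + k*u = -m*a_g(t) for the relative displacement
    u of the tank on its legs; explicit central-difference stepping
    (stable here since dt * sqrt(k/m) = 0.1 << 2)."""
    u = [0.0, 0.0]
    for i in range(1, len(base_accel) - 1):
        v = (u[i] - u[i - 1]) / dt                       # backward-difference velocity
        a = -base_accel[i] - (c / m) * v - (k / m) * u[i]
        u.append(2.0 * u[i] - u[i - 1] + a * dt * dt)    # central-difference update
    return u
```

    Feeding a recorded or simulated accelerogram through this loop is the deterministic core of the paper's probabilistic response analysis.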

  3. Graphical user interface for wireless sensor networks simulator

    Science.gov (United States)

    Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy

    2008-01-01

    Wireless Sensor Networks (WSN) are currently a very popular area of development. They are suited to many applications, from military uses through environment monitoring, healthcare, home automation, and others. When working in a dynamic, ad-hoc model, these networks need effective protocols, which must differ from common computer network algorithms. Research on these protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such big networks take much effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator dedicated to WSN studies, especially for evaluating routing and data-link protocols.

  4. A Flexible System for Simulating Aeronautical Telecommunication Network

    Science.gov (United States)

    Maly, Kurt; Overstreet, C. M.; Andey, R.

    1998-01-01

    At Old Dominion University, we have built an Aeronautical Telecommunication Network (ATN) simulator funded by NASA. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of it. We then describe our simulator architecture, highlighting some of the design specifications, scheduling algorithms, and the user interface. Finally, we provide the results of performance studies on this simulator.

  5. 3-D dynamic rupture simulations of the 2016 Kumamoto, Japan, earthquake

    Science.gov (United States)

    Urata, Yumi; Yoshida, Keisuke; Fukuyama, Eiichi; Kubo, Hisahiko

    2017-11-01

    Using 3-D dynamic rupture simulations, we investigated the 2016 Mw7.1 Kumamoto, Japan, earthquake to elucidate why and how the rupture of the main shock propagated successfully, assuming a complicated fault geometry estimated on the basis of the distributions of the aftershocks. The Mw7.1 main shock occurred along the Futagawa and Hinagu faults. Within 28 h before the main shock, three M6-class foreshocks occurred. Their hypocenters were located along the Hinagu and Futagawa faults, and their focal mechanisms were similar to that of the main shock. Therefore, an extensive stress shadow should have been generated on the fault plane of the main shock. First, we estimated the geometry of the fault planes of the three foreshocks as well as that of the main shock based on the temporal evolution of the relocated aftershock hypocenters. We then evaluated the static stress changes on the main shock fault plane that were due to the occurrence of the three foreshocks, assuming elliptical cracks with constant stress drops on the estimated fault planes. The obtained static stress change distribution indicated that Coulomb failure stress change (ΔCFS) was positive just below the hypocenter of the main shock, while the ΔCFS in the shallow region above the hypocenter was negative. Therefore, these foreshocks could encourage the initiation of the main shock rupture and could hinder the propagation of the rupture toward the shallow region. Finally, we conducted 3-D dynamic rupture simulations of the main shock using the initial stress distribution, which was the sum of the static stress changes caused by these foreshocks and the regional stress field. Assuming a slip-weakening law with uniform friction parameters, we computed 3-D dynamic rupture by varying the friction parameters and the values of the principal stresses. We obtained feasible parameter ranges that could reproduce the characteristic features of the main shock rupture revealed by seismic waveform analyses. We also
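    The Coulomb failure stress change used in studies like this resolves the foreshock stress perturbations onto the main shock fault plane. A minimal scalar sketch, with an assumed effective friction coefficient (the study's actual calculation works with full stress tensors on the estimated fault geometry):

    ```python
    def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
        """Scalar Coulomb failure stress change (delta CFS) on a receiver fault.

        d_shear  : shear stress change in the slip direction (MPa; positive
                   promotes slip)
        d_normal : normal stress change (MPa; unclamping/tension positive)
        mu_eff   : effective friction coefficient (assumed value; 0.2-0.8
                   is the range commonly used in the literature)
        """
        return d_shear + mu_eff * d_normal

    # A 0.1 MPa shear stress increase combined with 0.05 MPa of clamping:
    dcfs = coulomb_stress_change(0.1, -0.05)   # 0.1 + 0.4*(-0.05) = 0.08 MPa
    ```

    A positive delta CFS below the hypocenter, as the study finds, brings that patch closer to failure and can encourage rupture initiation; the negative values in the shallow region act in the opposite sense.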

  6. Tsunami simulations of mega-thrust earthquakes in the Nankai–Tonankai Trough (Japan) based on stochastic rupture scenarios

    KAUST Repository

    Goda, Katsuichiro; Yasuda, Tomohiro; Mai, Paul Martin; Maruyama, Takuma; Mori, Nobuhito

    2017-01-01

    In this study, earthquake rupture models for future mega-thrust earthquakes in the Nankai–Tonankai subduction zone are developed by incorporating the main characteristics of inverted source models of the 2011 Tohoku earthquake. These scenario

  7. Estimation of peak ground accelerations for Mexican subduction zone earthquakes using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Silvia R; Romo, Miguel P; Mayoral, Juan M [Instituto de Ingenieria, Universidad Nacional Autonoma de Mexico, Mexico D.F. (Mexico)

    2007-01-15

    An extensive analysis of the Mexican strong ground motion database was conducted using Soft Computing (SC) techniques. A Neural Network (NN) is used to estimate both orthogonal components of the horizontal (PGAh) and vertical (PGAv) peak ground accelerations measured at rock sites during Mexican subduction zone earthquakes. The work discusses the development, training, and testing of this neural model. The attenuation phenomenon was characterized in terms of magnitude, epicentral distance, and focal depth. Neural approximators were used instead of traditional regression techniques due to their flexibility in dealing with uncertainty and noise. NN predictions closely follow measured responses, exhibiting forecasting capabilities better than those of most established attenuation relations for the Mexican subduction zone. The NN was also assessed against subduction zones in Japan and North America. For the database used in this paper, the residuals of the NN and of the better-fitted regression approach are compared.
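    The kind of neural approximator described, mapping magnitude, distance, and depth to peak ground acceleration, can be sketched with a one-hidden-layer network. The architecture, the synthetic "attenuation law" used to generate training data, and all training settings below are assumptions for illustration, not the authors' model:

    ```python
    # Minimal neural attenuation-model sketch: fit log10 PGA from
    # (magnitude, log10 distance, depth) with a tiny hand-rolled MLP.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    M = rng.uniform(5.0, 8.0, n)              # magnitude
    R = rng.uniform(20.0, 400.0, n)           # epicentral distance (km)
    H = rng.uniform(5.0, 80.0, n)             # focal depth (km)
    # Synthetic attenuation relation with noise (illustrative only):
    y = (0.5 * M - 1.2 * np.log10(R) - 0.3 * H / 100.0
         + 0.05 * rng.standard_normal(n)).reshape(-1, 1)

    X = np.column_stack([M, np.log10(R), H])
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize inputs

    W1 = 0.5 * rng.standard_normal((3, 8)); b1 = np.zeros(8)
    W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

    losses, lr = [], 0.01
    for _ in range(500):
        Hid = np.tanh(X @ W1 + b1)            # hidden layer
        pred = Hid @ W2 + b2                  # linear output: log10 PGA
        err = pred - y
        losses.append(float(np.mean(err ** 2)))
        g = 2.0 * err / n                     # gradient of MSE w.r.t. pred
        gW2, gb2 = Hid.T @ g, g.sum(axis=0)
        gHid = (g @ W2.T) * (1.0 - Hid ** 2)  # backprop through tanh
        gW1, gb1 = X.T @ gHid, gHid.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    ```

    The appeal noted in the abstract is visible even in this sketch: nothing in the network fixes a functional form in advance, so the same fitting loop absorbs saturation or depth effects that a fixed regression equation would have to model explicitly.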

  8. How geometry and structure control the seismic radiation: spectral element simulation of the dynamic rupture of the Mw 9.0 Tohoku earthquake

    Science.gov (United States)

    Festa, G.; Vilotte, J.; Scala, A.

    2012-12-01

    The Mw 9.0, 2011 Tohoku earthquake, along the North American-Pacific plate boundary east of Honshu Island, yielded a complex broadband rupture extending southwards over 600 km along strike and triggering a large tsunami that ravaged the east coast of northern Japan. Strong motion and high-rate continuous GPS data, recorded all along the Japanese archipelago by the national seismic networks K-Net and Kik-net and the geodetic network Geonet, together with teleseismic data, indicated a complex frequency-dependent rupture. Low-frequency signals (f < 0.5 Hz) were generated by a large shallow slip asperity (with slip of tens of meters), extending along-dip over about 100 km, between the hypocenter and the trench, and 150 to 200 km along strike. This slip asperity was likely the cause of the localized tsunami source and of the large-amplitude tsunami waves. High-frequency signals (f > 0.5 Hz) were instead generated close to the coast in the deeper part of the subduction zone, by at least four smaller-size asperities, with possible repeated slip, and were mostly the cause of the ground shaking felt in the eastern part of Japan. The deep origin of the high-frequency radiation was also confirmed by teleseismic high-frequency back-projection analysis. Intermediate-frequency analysis showed a transition between the shallow and deeper parts of the fault, with the rupture almost confined in a small stripe containing the hypocenter before propagating southward along strike, indicating a predominant in-plane rupture mechanism in the initial stage of the rupture. We numerically investigate the role of the geometry of the subduction interface and of the structural properties of the subduction zone in the broadband dynamic rupture and radiation of the Tohoku earthquake. Based upon the almost in-plane behavior of the rupture in its initial stage, 2D non-smooth spectral element dynamic simulations of the earthquake rupture propagation are performed, including the non-planar and kink geometry of the subduction interface, together with bi-material interfaces

  9. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments) which has proven effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads; and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
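    The lookahead idea for an FCFS server can be sketched directly; the actual appointments protocol in the paper is richer, so this is only the core bound it relies on. For a server whose pending service times are known, no job that has not yet arrived can depart before all current work finishes plus one minimum service time, so the processor can safely promise ("make an appointment") to send nothing before that time:

    ```python
    # Lookahead sketch for an FCFS server (assumed simplification of the
    # appointments method): compute the earliest time a not-yet-arrived
    # job could possibly depart.

    def appointment(now, busy_until, queued_service_times, min_service):
        """Earliest simulation time at which a future arrival could depart."""
        work_done = max(now, busy_until) + sum(queued_service_times)
        return work_done + min_service

    # Server busy until t=12 with two queued jobs (3 and 5 time units);
    # the smallest possible service time is 1:
    t = appointment(now=10.0, busy_until=12.0,
                    queued_service_times=[3.0, 5.0], min_service=1.0)
    # Downstream neighbors can safely simulate up to t == 21.0.
    ```

    The tradeoff the abstract mentions is visible here: tighter knowledge of service-time distributions (a larger provable min_service, or exact departure times for queued jobs) buys neighbors more safe simulation time, at the cost of computing and communicating it.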

  10. Large scale earthquake simulator of 3-D (simultaneous X-Y-Z direction)

    International Nuclear Information System (INIS)

    Shiraki, Kazuhiro; Inoue, Masao

    1983-01-01

    Japan is a country where earthquakes are frequent; accordingly, it is necessary to examine thoroughly the earthquake safety of important machinery and equipment in facilities such as nuclear and thermal power plants and chemical plants. For this purpose, aseismic safety is evaluated by mounting an actual component or a model on a vibration table and shaking it at magnitudes several times as large as actual earthquakes. The vibration tables used so far could vibrate only in one direction or in two directions simultaneously, but now a three-dimensional vibration table has been completed, which can vibrate in three directions simultaneously, each with an arbitrary waveform. With this vibration table, aseismic tests can be carried out using earthquake waves close to actual ones. It is expected that this vibration table will play a large role in improving the aseismic reliability of nuclear power machinery and equipment. When a large test body is vibrated on the table, the center of gravity of the test body and the point of action of the vibrating force differ; therefore, rotating motion around the three axes is added to the motion in the three axial directions, and these motions must be controlled so as to realize three-dimensional earthquake motion. The main particulars and construction of the vibration table, the mechanism of three-direction vibration, the control of the table, and the results of tests of the table are reported. (Kako, I.)

  11. Co-Seismic Effect of the 2011 Japan Earthquake on the Crustal Movement Observation Network of China

    Directory of Open Access Journals (Sweden)

    Shaomin Yang

    2013-01-01

    Great earthquakes introduce measurable co-seismic displacements over regions hundreds to thousands of kilometers in width, which, if not accounted for, may significantly bias the long-term surface velocity field constrained by GPS observations performed during a period encompassing that event. Here, we first present an estimation of the far-field co-seismic offsets associated with the 2011 Japan Mw 9.0 earthquake using GPS measurements from the Crustal Movement Observation Network of China (CMONOC) in North China. The uncertainties of the co-seismic offsets, whether at cGPS stations or at campaign sites, are better than 5-6 mm on average. We compare three methods to constrain the co-seismic offsets at the campaign sites in northeastern China: (1) interpolating cGPS co-seismic offsets, (2) estimating them from sparsely sampled time series, and (3) predicting them using a well-constrained slip model. We show that interpolating the cGPS co-seismic offsets onto the campaign sites yields the best co-seismic offset solution for these sites. The source model gives a consistent prediction based on finite dislocation in a layered spherical Earth, which agrees with the best prediction with discrepancies of 2-10 mm for 32 campaign sites. Thus, the co-seismic offset model prediction is still a reasonable choice if a good-coverage cGPS network is not available for a very active region like the Tibetan Plateau, in which numerous campaign GPS sites were displaced by the recent large earthquakes.
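    The first of the three methods, interpolating cGPS offsets onto campaign sites, can be sketched with inverse distance weighting. The weighting exponent and the flat-Earth distance are simplifying assumptions; the paper's actual interpolation scheme may differ:

    ```python
    # Inverse distance weighted (IDW) interpolation of co-seismic offsets
    # from cGPS stations onto a campaign site (illustrative sketch).
    import math

    def idw_offset(site, cgps, power=2.0):
        """site: (x, y) in km; cgps: list of ((x, y), offset_mm).
        Returns the interpolated offset at the campaign site."""
        num = den = 0.0
        for (x, y), off in cgps:
            d = math.hypot(site[0] - x, site[1] - y)
            if d == 0.0:
                return off          # site coincides with a cGPS station
            w = d ** -power
            num += w * off
            den += w
        return num / den

    # Campaign site midway between two stations with 8 mm and 12 mm offsets:
    est = idw_offset((50.0, 0.0), [((0.0, 0.0), 8.0), ((100.0, 0.0), 12.0)])
    # est == 10.0 mm (equal weights at equal distance)
    ```

    Interpolation works well here because far-field co-seismic offsets vary smoothly over hundreds of kilometers; near the source, where gradients are steep, the dislocation-model prediction becomes the better choice, consistent with the abstract's conclusion.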

  12. ASSESSING URBAN STREETS NETWORK VULNERABILITY AGAINST EARTHQUAKE USING GIS – CASE STUDY: 6TH ZONE OF TEHRAN

    Directory of Open Access Journals (Sweden)

    A. Rastegar

    2017-09-01

    Great earthquakes cause huge damage to human life. Street network vulnerability makes rescue operations encounter serious difficulties, especially in the first 72 hours after the incident. Today, the physical expansion and high density of great cities, with their narrow access roads, large distances from medical care centers, and location in areas with high seismic risk, lead to a perilous and unpredictable situation in case of an earthquake. Zone 6 of Tehran, with a population of 229,980 (3.6% of the city population) and an area of 20 km2 (3.2% of the city area), is one of the main municipal zones of Tehran (Iran Center of Statistics, 2006). Major land uses, such as ministries, embassies, universities, general hospitals and medical centers, and big financial firms, manifest the high importance of this region on the local and national scale. In this paper, by employing indexes such as access to medical centers, street inclusion, building and population density, land use, PGA, and building quality, the vulnerability degree of the street network in Zone 6 against earthquake is calculated by overlaying maps and data in combination with the IHWP method and GIS. This article concludes that buildings alongside streets with high population and building density, low building quality, long distances from rescue centers, and a high level of inclusion show a high rate of vulnerability compared with other buildings. Also, moving from the north to the south of the zone, vulnerability increases. Likewise, highways and streets with substantial width and low building and population density hold little vulnerability.
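    The overlay calculation described can be sketched as a weighted sum of normalized indicator scores per street segment. The indicator names and weights below are assumptions for illustration; the paper derives its weights with the IHWP method and combines the layers in GIS:

    ```python
    # Weighted-overlay vulnerability score for one street segment.
    # Scores are normalized to [0, 1], higher = more vulnerable.

    WEIGHTS = {                        # assumed weights; must sum to 1
        "distance_to_medical": 0.25,
        "building_density":    0.20,
        "population_density":  0.20,
        "building_quality":    0.20,   # higher score = worse quality
        "pga":                 0.15,
    }

    def vulnerability(scores):
        """Weighted sum of normalized indicator scores."""
        return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

    segment = {"distance_to_medical": 0.8, "building_density": 0.9,
               "population_density": 0.7, "building_quality": 0.6,
               "pga": 0.5}
    v = vulnerability(segment)
    # 0.25*0.8 + 0.2*0.9 + 0.2*0.7 + 0.2*0.6 + 0.15*0.5 = 0.715
    ```

    In a GIS workflow the same formula is applied cell by cell (or segment by segment) to the co-registered raster or vector layers, producing the vulnerability map the paper describes.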

  13. Earthquakes and Volcanic Processes at San Miguel Volcano, El Salvador, Determined from a Small, Temporary Seismic Network

    Science.gov (United States)

    Hernandez, S.; Schiek, C. G.; Zeiler, C. P.; Velasco, A. A.; Hurtado, J. M.

    2008-12-01

    The San Miguel volcano lies within the Central American volcanic chain in eastern El Salvador. The volcano has experienced at least 29 eruptions with a Volcanic Explosivity Index (VEI) of 2. Since 1970, however, eruptions have decreased in intensity to an average of VEI 1, with the most recent eruption occurring in 2002. Eruptions at San Miguel volcano consist mostly of central vent and phreatic eruptions. A critical challenge related to the explosive nature of this volcano is to understand the relationships between precursory surface deformation, earthquake activity, and volcanic activity. In this project, we seek to determine sub-surface structures within and near the volcano, relate the local deformation to these structures, and better understand the hazard that the volcano presents in the region. To accomplish these goals, we deployed a six-station, broadband seismic network around San Miguel volcano in collaboration with researchers from Servicio Nacional de Estudios Territoriales (SNET). This network operated continuously from 23 March 2007 to 15 January 2008 and had a high data recovery rate. The data were processed to determine earthquake locations, magnitudes, and, for some of the larger events, focal mechanisms. We obtained high-precision locations using a double-difference approach and identified at least 25 events near the volcano. Ongoing analysis will seek to identify earthquake types (e.g., long period, tectonic, and hybrid events) that occurred in the vicinity of San Miguel volcano. These results will be combined with radar interferometric measurements of surface deformation in order to determine the relationship between surface and subsurface processes at the volcano.

  14. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we describe the design of a simplified simulation environment for a quantum key distribution network with multiple links and nodes. In this simulation environment, we analyze several routing protocols in terms of the number of routing packets sent, goodput, and the Packet Delivery Ratio of the data traffic flow, using the NS-3 simulator.

  15. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.

  16. Quantifying capability of a local seismic network in terms of locations and focal mechanism solutions of weak earthquakes

    Science.gov (United States)

    Fojtíková, Lucia; Kristeková, Miriam; Málek, Jiří; Sokos, Efthimios; Csicsay, Kristián; Zahradník, Jiří

    2016-01-01

    Extension of permanent seismic networks is usually governed by a number of technical, economic, logistic, and other factors. A planned upgrade of a network can be justified by a theoretical assessment of the network's capability in terms of reliable estimation of key earthquake parameters (e.g., location and focal mechanism). Such an assessment can be useful not only for scientific purposes but also as concrete proof during the process of acquiring the funding needed for the upgrade and operation of the network. Moreover, the theoretical assessment can also identify configurations where no improvement can be achieved with additional stations, establishing a tradeoff between the improvement and the additional expense. This paper proposes a combination of suitable methods and applies them to the Little Carpathians local seismic network (Slovakia, Central Europe), which monitors an epicentral zone important from the standpoint of seismic hazard. Three configurations of the network are considered: the 13 stations existing before 2011, 3 stations added in 2011, and 7 newly planned stations. Theoretical errors of the relative location are estimated by a new method developed specifically in this paper. The resolvability of focal mechanisms determined by waveform inversion is analyzed by a recent approach based on 6D moment-tensor error ellipsoids. We consider potential seismic events situated anywhere in the studied region, thus enabling "mapping" of the expected errors. The results clearly demonstrate that the network extension remarkably decreases the errors, mainly in the planned 23-station configuration. The three-station extension of the network already made in 2011 allowed for a few real-data examples. Free software made available by the authors enables similar applications in any other existing or planned network.

  17. Hardware-software system for simulating and analyzing earthquakes applied to civil structures

    Directory of Open Access Journals (Sweden)

    J. P. Amezquita-Sanchez

    2012-01-01

    The occurrence of recent strong earthquakes, the incessant worldwide movement of tectonic plates, and the continuous ambient vibrations caused by traffic and wind have increased the interest of researchers in improving the capacity for energy dissipation to avoid damage to civil structures. Experimental testing of structural systems is essential for understanding physical behavior and building appropriate analytic models, in order to expose difficulties that may not have been considered in analytical studies. This paper presents a hardware-software system for simultaneously exciting, monitoring, and analyzing a structure under earthquake signals and other types of signals in real time. The effectiveness of the proposed system has been validated by experimental case studies, and it has been found to be a useful tool in the analysis of earthquake effects on structures.

  18. WDM Systems and Networks Modeling, Simulation, Design and Engineering

    CERN Document Server

    Ellinas, Georgios; Roudas, Ioannis

    2012-01-01

    WDM Systems and Networks: Modeling, Simulation, Design and Engineering provides readers with the basic skills, concepts, and design techniques used to begin design and engineering of optical communication systems and networks at various layers. The latest semi-analytical system simulation techniques are applied to optical WDM systems and networks, and a review of the various current areas of optical communications is presented. Simulation is mixed with experimental verification and engineering to present the industry as well as state-of-the-art research. This contributed volume is divided into three parts, accommodating different readers interested in various types of networks and applications. The first part of the book presents modeling approaches and simulation tools mainly for the physical layer (including transmission effects, devices, subsystems, and systems), whereas the second part features more engineering/design issues for various types of optical systems including ULH, access, and in-building system...

  19. Ranking important nodes in complex networks by simulated annealing

    International Nuclear Information System (INIS)

    Sun Yu; Yao Pei-Yang; Shen Jian; Zhong Yun; Wan Lu-Jun

    2017-01-01

    In this paper, a new method based on simulated annealing for ranking important nodes in complex networks is presented. First, the concept of an importance sequence (IS), which describes the relative importance of nodes in a complex network, is defined. Then, a measure used to evaluate the reasonability of an IS is designed. By treating an IS as a state of the complex network and the measure of its reasonability as the energy of that state, the method finds the ground state of the complex network by simulated annealing. In other words, the method constructs the most reasonable IS. The results of experiments on real and artificial networks show that this ranking method is not only effective but can also be applied to different kinds of complex networks. (paper)
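    The search over importance sequences can be sketched with a generic simulated-annealing skeleton: propose a swap of two positions, accept downhill moves always and uphill moves with a temperature-dependent probability. The energy function below (inversion count against a descending-degree order) is a stand-in; the paper defines its own reasonability measure:

    ```python
    # Simulated annealing over node orderings (illustrative energy function).
    import math, random

    def anneal(nodes, energy, t0=1.0, cooling=0.995, steps=4000, seed=1):
        rng = random.Random(seed)
        order = list(nodes)
        rng.shuffle(order)
        e = energy(order)
        t = t0
        for _ in range(steps):
            i, j = rng.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]      # propose a swap
            e_new = energy(order)
            if e_new <= e or rng.random() < math.exp((e - e_new) / t):
                e = e_new                                # accept
            else:
                order[i], order[j] = order[j], order[i]  # reject: undo
            t *= cooling
        return order, e

    # Stand-in energy: pairs of nodes out of descending-degree order.
    degree = {"a": 5, "b": 3, "c": 2, "d": 1}
    def energy(order):
        return sum(1 for i in range(len(order)) for j in range(i + 1, len(order))
                   if degree[order[i]] < degree[order[j]])

    best, e = anneal(list(degree), energy)
    ```

    The ground state of this toy energy is the descending-degree sequence; in the paper, the analogous ground state is the most reasonable IS under its own measure.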

  20. Design and investigation of a continuous radon monitoring network for earthquake precursory process in Great Tehran

    International Nuclear Information System (INIS)

    Negarestani, A.; Namvaran, M.; Hashemi, S.M.; Shahpasandzadeh, M.; Fatemi, S.J.; Alavi, S.A.; Mokhtari, M.

    2014-01-01

    Earthquakes usually occur after preliminary anomalies in the physical and chemical characteristics of the environment and the Earth's interior. Models which can explain these anomalies prompt scientists to monitor geophysical and geochemical characteristics in seismic areas for earthquake prediction. A review of the studies done so far indicates that radon gas is more sensitive as a precursor than other geo-gases. Based on previous research, radon is, from the time point of view, a short-term precursor of earthquakes. There are several empirical equations relating earthquake magnitude to the effective distance over which radon concentration variations occur. In this work, an algorithm based on the Dobrovolsky equation (D = 10^(0.43M)), defining expectation and investigation circles for Greater Tehran, has been used. Radon concentration was measured with a RAD7 detector in more than 40 springs. The radon concentration in a spring, the spring discharge, the water temperature, and the closeness of the spring location to active faults have been considered as the significant factors in selecting the best springs for implementing continuous radon monitoring sites. According to these factors, thirteen springs have been selected: Bayjan, Mahallat-Hotel, Avaj, Aala, Larijan, Delir, Lavij, Ramsar, Semnan, Lavieh, Legahi, Kooteh-Koomeh and Sarein. (author)
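    The Dobrovolsky relation cited above gives the radius (in km) of the zone in which precursory effects of a magnitude-M earthquake may be expected, and the expectation and investigation circles are drawn from it:

    ```python
    # Dobrovolsky strain-radius estimate: D = 10**(0.43 * M), in kilometres.

    def dobrovolsky_radius_km(magnitude):
        """Radius of the precursor-sensitive zone for a magnitude-M event."""
        return 10.0 ** (0.43 * magnitude)

    # A magnitude 6 event could perturb radon in springs out to roughly 380 km:
    r = dobrovolsky_radius_km(6.0)
    ```

    Inverting the same relation tells a monitoring network which magnitudes a given spring could respond to: a spring 100 km from a fault lies inside the radius of any event with M above about 100 = 10^(0.43M), i.e., M ≳ 4.65.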

  1. Crowdsourced earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.

  2. Modified network simulation model with token method of bus access

    Directory of Open Access Journals (Sweden)

    L.V. Stribulevich

    2013-08-01

    Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The characteristics of the network are determined using the developed simulation model, which is based on the state diagram of a network station with a priority-processing mechanism, both in the steady state and during the execution of control procedures: initiation of the logical ring, and entrance to and exit from the logical ring by a network station. Findings. A simulation model was developed on the basis of which one can obtain the dependencies of the maximum waiting time of a request in the queue for different access classes, and of the reaction time and usable bandwidth on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A technique of network simulation was proposed that reflects the network's operation in the steady state and during control procedures, together with the mechanism of priority ranking and handling. Practical value. The developed simulation model allows defining network characteristics for real-time systems in railway transport.

  3. EVALUATING AUSTRALIAN FOOTBALL LEAGUE PLAYER CONTRIBUTIONS USING INTERACTIVE NETWORK SIMULATION

    Directory of Open Access Journals (Sweden)

    Jonathan Sargent

    2013-03-01

    This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on the final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution, and hence an optimal team, could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
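    The rating step described, eigenvector centrality of a symmetric player-interaction matrix, can be sketched with power iteration. The 4-player matrix below is toy data, not the paper's Geelong interaction counts:

    ```python
    # Eigenvector centrality of a symmetric interaction matrix by power
    # iteration: each player's rating is proportional to the ratings of
    # the players they interact with, weighted by interaction counts.
    import numpy as np

    def eigenvector_centrality(A, iters=200, tol=1e-12):
        """Leading eigenvector of a symmetric non-negative matrix A,
        normalized to unit length."""
        v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
        for _ in range(iters):
            w = A @ v
            w /= np.linalg.norm(w)
            if np.linalg.norm(w - v) < tol:
                break
            v = w
        return w

    # Toy interaction counts: player 0 is a "hub" who links to everyone.
    A = np.array([[0, 4, 3, 5],
                  [4, 0, 1, 0],
                  [3, 1, 0, 0],
                  [5, 0, 0, 0]], dtype=float)
    rating = eigenvector_centrality(A)
    ```

    This captures the property the abstract highlights: a player is rewarded not just for many interactions but for interactions with prominent teammates, since each rating feeds back into its neighbors' ratings.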

  4. Efficient Allocation of Resources for Defense of Spatially Distributed Networks Using Agent-Based Simulation.

    Science.gov (United States)

    Kroshl, William M; Sarkani, Shahram; Mazzuchi, Thomas A

    2015-09-01

    This article presents ongoing research that focuses on the efficient allocation of defense resources to minimize the damage inflicted on a spatially distributed physical network, such as a pipeline, water system, or power distribution system, by an attack from an active adversary. It recognizes the fundamental difference between preparing for natural disasters, such as hurricanes, earthquakes, or even accidental system failures, and the problem of allocating resources to defend against an opponent who is aware of, and anticipating, the defender's efforts to mitigate the threat. Our approach is to utilize a combination of integer programming and agent-based modeling to allocate the defensive resources. We conceptualize the problem as a Stackelberg "leader-follower" game, where the defender first places his assets to defend key areas of the network, and the attacker then seeks to inflict the maximum damage possible within the constraints of resources and network structure. The criticality of arcs in the network is estimated by a deterministic network interdiction formulation, which then informs an evolutionary agent-based simulation. The evolutionary agent-based simulation is used to determine the allocation of resources for attackers and defenders that results in evolutionarily stable strategies, where actions by either side alone cannot increase its share of victories. We demonstrate these techniques on an example network, comparing the evolutionary agent-based results to a more traditional probabilistic risk analysis (PRA) approach. Our results show that the agent-based approach results in a greater percentage of defender victories than does the PRA-based approach. © 2015 Society for Risk Analysis.
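    The "follower" step of the Stackelberg game can be sketched by brute force on a toy network: given a defender placement, a budget-limited attacker picks the set of undefended arcs that maximizes damage. The arc values and budgets are assumptions for illustration; the paper uses a network interdiction formulation and an evolutionary agent-based simulation rather than this enumeration:

    ```python
    # Attacker best response in a toy Stackelberg defense game: enumerate
    # all attacks of size <= budget on undefended arcs and keep the best.
    from itertools import combinations

    def best_attack(arc_value, defended, budget):
        """Return (best_damage, best_target_set) for the attacker."""
        targets = [a for a in arc_value if a not in defended]
        best = (0.0, frozenset())
        for k in range(1, budget + 1):
            for combo in combinations(targets, k):
                dmg = sum(arc_value[a] for a in combo)
                if dmg > best[0]:
                    best = (dmg, frozenset(combo))
        return best

    arc_value = {"a": 10.0, "b": 7.0, "c": 5.0, "d": 2.0}
    # Defender covers the most critical arc; attacker can hit two arcs:
    damage, attack = best_attack(arc_value, defended={"a"}, budget=2)
    ```

    The leader's problem then nests this best response: the defender chooses the placement that minimizes the attacker's resulting maximum damage, which is exactly the structure the interdiction formulation and evolutionary search in the paper are solving at scale.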

  5. Mass-spring model used to simulate the sloshing of fluid in the container under the earthquake

    International Nuclear Information System (INIS)

    Wen Jing; Luan Lin; Gao Xiaoan; Wang Wei; Lu Daogang; Zhang Shuangwang

    2005-01-01

    A lumped-mass spring model is given in ASCE 4-86 to simulate the sloshing of liquid in a container under earthquake loading. In this paper, a new mass-spring model is developed within a 3D finite element model instead of a beam model. The stresses corresponding to the sloshing mass can be obtained directly, which avoids the construction of a beam model. This paper presents the 3-D mass-spring model for the total overturning moment together with an example of the model. Moreover, the mass-spring models for the overturning moments on the sides and on the bottom of the container are constructed respectively. (authors)
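The lumped mass-spring idealization referred to here is built from classical sloshing formulas. As a hedged sketch, the snippet below evaluates the first-mode convective (sloshing) mass and its equivalent spring constant for a cylindrical tank using the Housner/Veletsos-type expressions that ASCE 4-style models are based on; the coefficients should be verified against the code before any design use, and the tank dimensions are illustrative.

```python
# Sketch: first sloshing mode of a cylindrical tank as an equivalent
# mass-spring pair (assumed Housner/Veletsos-type formulas).
import math

def sloshing_spring(radius, height, rho=1000.0, g=9.81):
    lam = 1.841                                     # first root of J1'(x) = 0
    m = rho * math.pi * radius**2 * height          # total liquid mass, kg
    t = math.tanh(lam * height / radius)
    # First convective mass fraction (assumed standard form)
    m_c = m * 2 * t / ((height / radius) * lam * (lam**2 - 1))
    omega2 = g * lam / radius * t                   # (rad/s)^2 of first slosh mode
    return m_c, m_c * omega2                        # convective mass, spring stiffness

m_c, k_c = sloshing_spring(radius=5.0, height=8.0)
print(round(m_c), round(k_c))
```

In the 3D finite element approach of the paper, such a mass would be attached to the tank wall through springs so that sloshing stresses appear directly in the shell model.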

  6. Three-Dimensional Finite Difference Simulation of Ground Motions from the August 24, 2014 South Napa Earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, Arthur J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Univ. of California, Berkeley, CA (United States); Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Pitarka, Arben [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-06-15

    We performed three-dimensional (3D) anelastic ground motion simulations of the South Napa earthquake to investigate the performance of different finite rupture models and the effects of 3D structure on the observed wavefield. We considered rupture models reported by Dreger et al. (2015), Ji et al. (2015), Wei et al. (2015) and Melgar et al. (2015). We used the SW4 anelastic finite difference code developed at Lawrence Livermore National Laboratory (Petersson and Sjogreen, 2013) and distributed by the Computational Infrastructure for Geodynamics. This code can compute the seismic response for fully 3D sub-surface models, including surface topography and linear anelasticity. We use the 3D geologic/seismic model of the San Francisco Bay Area developed by the United States Geological Survey (Aagaard et al., 2008, 2010). Evaluation of earlier versions of this model indicated that the structure can reproduce main features of observed waveforms from moderate earthquakes (Rodgers et al., 2008; Kim et al., 2010). Simulations were performed for a domain covering local distances (< 25 km) and resolution providing simulated ground motions valid to 1 Hz.
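The finite-difference machinery behind codes such as SW4 can be illustrated, in heavily reduced form, by a 1D elastic wave equation on a regular grid. Everything below (grid, two-layer toy velocity model, source wavelet) is an illustrative assumption, not the SW4 algorithm or the Bay Area model.

```python
# Sketch: second-order finite differences for u_tt = c^2 u_xx with a
# Ricker-like point source and a toy two-layer velocity model.
import numpy as np

def fd_wave_1d(nx=400, nt=800, dx=100.0, dt=0.01, c0=3000.0):
    c = np.full(nx, c0)
    c[nx // 2:] = 1500.0                 # slower, sediment-like half (toy model)
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    for it in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u_next = 2 * u - u_prev + (c * dt)**2 * lap
        t = it * dt                      # inject source in the fast half
        arg = (np.pi * 5 * (t - 0.2))**2
        u_next[nx // 4] += (1 - 2 * arg) * np.exp(-arg)
        u_prev, u = u, u_next
    return u

u = fd_wave_1d()
print(np.abs(u).max() > 0)   # energy has propagated through the model
```

Note the stability constraint: with c = 3000 m/s, dt = 0.01 s and dx = 100 m the Courant number is 0.3, comfortably below the limit; 3D codes enforce the same kind of condition per grid spacing.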

  7. Methodologies for the assessment of earthquake-triggered landslides hazard. A comparison of Logistic Regression and Artificial Neural Network models.

    Science.gov (United States)

    García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.

    2009-04-01

    In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for evaluation and mitigation plan development in landslide-prone areas. Several techniques are available for landslide hazard research at a regional scale. Generally, they can be classified into two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinions and represent hazard levels in descriptive terms. Quantitative methods, on the other hand, are objective and are commonly used because of the correlation between the instability factors and the location of the landslides. Within this group, statistical approaches and new heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis for assessing landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movements and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. An application was developed in El Salvador, a country of Central America where earthquake-triggered landslides are a usual phenomenon. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the January 13th, 2001 earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example, slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered as the dependent variable.
The results of the landslide susceptibility analysis are checked using landslide
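The logistic-regression half of the comparison can be sketched with a single synthetic instability factor. The data below (slope gradient as the only predictor, a made-up failure probability) are illustrative assumptions; the study itself used slope, elevation, aspect, precipitation, lithology, land use and roughness.

```python
# Sketch: fitting a logistic-regression susceptibility model by gradient
# descent, with landslide occurrence (0/1) as the dependent variable.
import numpy as np

rng = np.random.default_rng(0)
slope = rng.uniform(0, 60, 500)                 # slope gradient in degrees
# Synthetic "truth": steeper slopes fail more often in the scenario quake
p_true = 1 / (1 + np.exp(-(0.15 * slope - 4.5)))
landslide = (rng.random(500) < p_true).astype(float)

# Fit weight w and intercept b on the logistic log-likelihood
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(w * slope + b)))
    w -= 0.001 * np.mean((p - landslide) * slope)
    b -= 0.01 * np.mean(p - landslide)

# Predicted susceptibility for a gentle (10 deg) vs a steep (50 deg) slope
gentle = 1 / (1 + np.exp(-(w * 10 + b)))
steep = 1 / (1 + np.exp(-(w * 50 + b)))
print(round(gentle, 2), round(steep, 2))
```

An ANN with back-propagation replaces the single linear score w·x + b with a learned nonlinear combination of the same input variables, which is the comparison the study carries out.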

  8. Tsunami Simulations in the Western Makran Using Hypothetical Heterogeneous Source Models from World's Great Earthquakes

    Science.gov (United States)

    Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser

    2018-03-01

    The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquake for several centuries. A possible explanation for this behavior is that this segment is currently locked, accumulating energy to generate possible great earthquakes in the future. Taking this assumption into account, a hypothetical rupture area is considered in the western Makran to set up different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, the 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and the 2006 Kuril Islands Mw 8.3 events, are scaled into the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes; however, its effect diminishes from local tsunamis to regional and distant tsunamis. Among all scenarios considered for the western Makran, only a tsunamigenic earthquake similar to the 2011 Tohoku-Oki event can reproduce a significant far-field tsunami and is considered the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that, along the entire Makran coast, the southeastern coast of Iran is the area most vulnerable to tsunami hazard.
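Scaling a borrowed slip distribution into a hypothetical rupture area amounts to rescaling the slip pattern so the seismic moment matches a target magnitude. The sketch below uses the standard moment-magnitude relation; the 2x2 "asperity" pattern, rupture dimensions and rigidity are illustrative assumptions, not the study's source models.

```python
# Sketch: rescale a heterogeneous slip pattern to a target moment magnitude,
# keeping the relative slip heterogeneity fixed.
import numpy as np

def scale_slip(slip, target_mw, area_m2, mu=3.0e10):
    """Rescale a slip pattern (m) so the scenario matches target Mw."""
    m0_target = 10 ** (1.5 * target_mw + 9.1)       # seismic moment, N*m
    cell_area = area_m2 / slip.size
    m0_now = mu * cell_area * slip.sum()
    return slip * (m0_target / m0_now)

# Toy heterogeneous pattern (asperity in one corner), 200 km x 100 km rupture
pattern = np.array([[4.0, 2.0], [1.0, 1.0]])
scaled = scale_slip(pattern, target_mw=8.3, area_m2=200e3 * 100e3)
mean_slip = scaled.mean()
print(round(mean_slip, 2))
```

Because only the overall amplitude changes, the location of greatest slip — which the study finds dominates the tsunami potential — is preserved by this operation.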

  10. Mesoscopic simulations of crosslinked polymer networks

    NARCIS (Netherlands)

    Megariotis, G.; Vogiatzis, G.G.; Schneider, L.; Müller, M.; Theodorou, D.N.

    2016-01-01

    A new methodology and the corresponding C++ code for mesoscopic simulations of elastomers are presented. The test system, crosslinked cis-1,4-polyisoprene, is simulated with a Brownian Dynamics/kinetic Monte Carlo algorithm as a dense liquid of soft, coarse-grained beads, each representing 5-10 Kuhn

  11. Optimization of blanking process using neural network simulation

    International Nuclear Information System (INIS)

    Hambli, R.

    2005-01-01

    The present work describes a methodology using the finite element method and neural network simulation in order to predict the optimum punch-die clearance in sheet metal blanking processes. A damage model is used to describe crack initiation and propagation in the sheet. The proposed approach combines predictive finite element and neural network modeling of the leading blanking parameters. Numerical results obtained by finite element computation, including damage and fracture modeling, were used to train the developed simulation environment, which is based on back-propagation neural network modeling. The comparative study between the numerical and experimental results shows good agreement. (author)

  12. Developed hydraulic simulation model for water pipeline networks

    Directory of Open Access Journals (Sweden)

    A. Ayad

    2013-03-01

    Full Text Available A numerical method that uses linear graph theory is presented for both steady state and extended period simulation in a pipe network including its hydraulic components (pumps, valves, junctions, etc.). The developed model is based on the Extended Linear Graph Theory (ELGT) technique. This technique is modified to include new network components such as flow control valves and tanks. The technique is also expanded for extended period simulation (EPS). A newly modified method for the calculation of updated flows, which improves the convergence rate, is introduced. Both benchmark and actual networks are analyzed to check the reliability of the proposed method. The results reveal the fine performance of the proposed method.

  13. Simulating individual-based models of epidemics in hierarchical networks

    NARCIS (Netherlands)

    Quax, R.; Bader, D.A.; Sloot, P.M.A.

    2009-01-01

    Current mathematical modeling methods for the spreading of infectious diseases are too simplified and do not scale well. We present the Simulator of Epidemic Evolution in Complex Networks (SEECN), an efficient simulator of detailed individual-based models by parameterizing separate dynamics

  14. Simulation studies of a wide area health care network.

    Science.gov (United States)

    McDaniel, J. G.

    1994-01-01

    There is an increasing number of efforts to install wide area health care networks. Some of these networks are being built to support several applications over a wide user base consisting primarily of medical practices, hospitals, pharmacies, medical laboratories, payors, and suppliers. Although on-line, multi-media telecommunication is desirable for some purposes such as cardiac monitoring, store-and-forward messaging is adequate for many common, high-volume applications. Laboratory test results and payment claims, for example, can be distributed using electronic messaging networks. Several network prototypes have been constructed to determine the technical problems and to assess the effectiveness of electronic messaging in wide area health care networks. Our project, Health Link, developed prototype software that was able to use the public switched telephone network to exchange messages automatically, reliably and securely. The network could be configured to accommodate the many different traffic patterns and cost constraints of its users. Discrete event simulations were performed on several network models. Canonical star and mesh networks, that were composed of nodes operating at steady state under equal loads, were modeled. Both topologies were found to support the throughput of a generic wide area health care network. The mean message delivery time of the mesh network was found to be less than that of the star network. Further simulations were conducted for a realistic large-scale health care network consisting of 1,553 doctors, 26 hospitals, four medical labs, one provincial lab and one insurer. Two network topologies were investigated: one using predominantly peer-to-peer communication, the other using client-server communication.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7949966
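The star-versus-mesh comparison in this study can be reproduced in miniature with a discrete event model of store-and-forward links. The sketch below is an illustrative assumption (fixed transfer times, deterministic arrivals), not the Health Link simulation itself, but it shows why the extra hub hop inflates mean delivery time.

```python
# Sketch: store-and-forward delivery over one hop (mesh, peer-to-peer)
# versus two hops (star, via a central hub), each link serving one message
# at a time.
def simulate(arrivals, hops, transfer=1.0):
    """Return the mean delivery time for messages forwarded over `hops` links."""
    free_at = [0.0] * hops                 # when each link next becomes free
    delays = []
    for t in arrivals:
        done = t
        for h in range(hops):
            start = max(done, free_at[h])  # wait if the link is busy
            done = start + transfer
            free_at[h] = done
        delays.append(done - t)
    return sum(delays) / len(delays)

arrivals = [i * 0.5 for i in range(20)]    # bursty load: a message every 0.5 units
star = simulate(arrivals, hops=2)          # via a central hub
mesh = simulate(arrivals, hops=1)          # direct peer-to-peer
print(mesh < star)
```

With identical link speeds the mesh topology always delivers no later than the star, matching the simulation finding quoted in the abstract.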

  15. A gene network simulator to assess reverse engineering algorithms.

    Science.gov (United States)

    Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2009-03-01

    In the context of reverse engineering of biological networks, simulators are helpful to test and compare the accuracy of different reverse-engineering approaches in a variety of experimental conditions. A novel gene-network simulator is presented that resembles some of the main features of transcriptional regulatory networks related to topology, interaction among regulators of transcription, and expression dynamics. The simulator generates network topology according to the current knowledge of biological network organization, including scale-free distribution of the connectivity and clustering coefficient independent of the number of nodes in the network. It uses fuzzy logic to represent interactions among the regulators of each gene, integrated with differential equations to generate continuous data, comparable to real data for variety and dynamic complexity. Finally, the simulator accounts for saturation in the response to regulation and transcription activation thresholds and shows robustness to perturbations. It therefore provides a reliable and versatile test bed for reverse engineering algorithms applied to microarray data. Since the simulator describes regulatory interactions and expression dynamics as two distinct, although interconnected aspects of regulation, it can also be used to test reverse engineering approaches that use both microarray and protein-protein interaction data in the process of learning. A first software release is available at http://www.dei.unipd.it/~dicamill/software/netsim as an R programming language package.

  16. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication stack we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment, and use localized probability models to fuse the data with its side information before transmission. Each cluster head thus has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern 2m where m, the number of bits, can be a correlated pattern of 2 bits; for a tight lower bound we use 3-bit Huffman codes which have entropy < 1. These local algorithms are further studied to optimize power, fault detection and the distributed routing algorithm used at the higher layers. From these bounds, in a large network it is observed that the power dissipation is invariant to network size. The performance of the routing algorithms is based solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are deployed more densely, the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.
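The Huffman-coding step that the entropy bound above relies on can be sketched directly. The skewed event-pattern probabilities below are an illustrative assumption for a hypothetical cluster head; the general guarantee is that the average Huffman code length lies within one bit of the source entropy.

```python
# Sketch: optimal prefix-code lengths for cluster-head event patterns,
# built by repeatedly merging the two least probable symbol groups.
import heapq
import math

def huffman_lengths(probs):
    """Code length (bits) per symbol of an optimal binary prefix code."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:            # every symbol in the merge goes 1 bit deeper
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

# Skewed event-pattern probabilities at a hypothetical cluster head
probs = [0.6, 0.2, 0.1, 0.1]
lengths = huffman_lengths(probs)
avg_bits = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * math.log2(p) for p in probs)
print(lengths, round(avg_bits, 2), round(entropy, 2))
```

The more skewed the event distribution, the shorter the average transmission per event, which is why local data fusion before transmission saves power.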

  17. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Bluegene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
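The kind of linear memory model described above can be sketched as a sum of a per-core base cost, a term replicated on every core, and a term that shrinks as cores are added. All constants below are illustrative assumptions, not measured values from any particular simulator.

```python
# Sketch: per-core memory of a neuronal simulator as a linear function of
# network size and core count, showing why a replicated term becomes the
# bottleneck at brain scale.
def memory_per_core(n_neurons, n_cores,
                    base_mb=200.0,          # simulator kernel (assumed)
                    mb_per_neuron=4e-4,     # bookkeeping replicated on every core
                    mb_per_synapse=2e-5,    # stored only on the owning core
                    k=10_000):              # synapses per neuron (assumed)
    replicated = n_neurons * mb_per_neuron
    local = (n_neurons / n_cores) * k * mb_per_synapse
    return base_mb + replicated + local

# Doubling the cores halves the local term but not the replicated one
m1 = memory_per_core(1e8, 10_000)
m2 = memory_per_core(1e8, 20_000)
print(m1 > m2, m2 > memory_per_core(1e8, 40_000))
```

Fitting such a model to measurements identifies which component saturates memory first and predicts the effect of a proposed data-structure change before it is implemented, which is exactly the use the abstract describes.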

  18. 3-D Dynamic rupture simulation for the 2016 Kumamoto, Japan, earthquake sequence: Foreshocks and M6 dynamically triggered event

    Science.gov (United States)

    Ando, R.; Aoki, Y.; Uchide, T.; Imanishi, K.; Matsumoto, S.; Nishimura, T.

    2016-12-01

    A couple of interesting earthquake rupture phenomena were observed in association with the 2016 Kumamoto, Japan, earthquake sequence. The sequence includes the April 15, 2016, Mw 7.0 mainshock, which was preceded by multiple M6-class foreshocks. The mainshock mainly broke the Futagawa fault segment, striking in the NE-SW direction and extending over 50 km, and it further triggered an M6-class earthquake at a distance of more than 50 km to the northeast (Uchide et al., 2016, submitted), where an active volcano is situated. Compiling the data of seismic analysis and InSAR, we presumed this dynamically triggered event occurred on an active fault known as the Yufuin fault (Ando et al., 2016, JPGU general assembly). It is also reported that the coseismic slip was significantly large at a shallow portion of the Futagawa fault near Aso volcano. Since the seismogenic depth becomes significantly shallower in these two areas, we presume the geothermal anomaly plays a role, as well as the elasto-dynamic processes associated with the coseismic rupture. In this study, we conducted a set of fully dynamic simulations of the earthquake rupture process by assuming the inferred 3D fault geometry and the regional stress field obtained by referring to the stress tensor inversion. As a result, we showed that the dynamic rupture process was mainly controlled by the irregularity of the fault geometry subjected to the gently varying regional stress field. The foreshock ruptures were arrested at the junctures of the branch faults. We also show that the dynamic triggering of the M6-class earthquake occurred along the Yufuin fault segment (located 50 km to the NE) because of a strong stress transient of up to a few hundred kPa due to the rupture directivity effect of the M7 event. It is also shown that the geothermal condition may lead to conditions susceptible to dynamic triggering, considering the plastic shear zone on the down-dip extension of the Yufuin segment, situated in the vicinity of an

  19. FDM simulation of earthquakes off western Kyushu, Japan, using a land-ocean unified 3D structure model

    Science.gov (United States)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Hara, Tatsuhiko

    2017-07-01

    Seismic activity occurred off western Kyushu, Japan, at the northern end of the Okinawa Trough on May 6, 2016 (14:11 JST), 22 days after the onset of the 2016 Kumamoto earthquake sequence. The area is adjacent to the Beppu-Shimabara graben where the 2016 Kumamoto earthquake sequence occurred. In the area off western Kyushu, an M7.1 earthquake also occurred on November 14, 2015 (5:51 JST), and a tsunami with a height of 0.3 m was observed. In order to better understand this seismic activity and these tsunamis, it is necessary to study the sources of, and strong motions due to, earthquakes in the area off western Kyushu. For such studies, validation of synthetic waveforms is important because of the presence of the oceanic water layer and thick sediments in the source area. We show the validation results for synthetic waveforms through nonlinear inversion analyses of small earthquakes (M 5). We use a land-ocean unified 3D structure model, a 3D HOT finite-difference method ("HOT" stands for Heterogeneity, Ocean layer and Topography) and multi-graphics processing unit (GPU) acceleration to simulate the wave propagation. We estimate the first-motion augmented moment tensor (FAMT) solution based on both the long-period surface waves and the short-period body waves. The FAMT solutions systematically shift landward by about 13 km, on average, from the epicenters determined by the Japan Meteorological Agency. The synthetics provide good reproductions of the observed full waveforms with periods of 10 s or longer. On the other hand, for waveforms with shorter periods (down to 4 s), the later surface waves are not reproduced well, while the first parts of the waveforms (comprising P- and S-waves) are reproduced to some extent. These results indicate that the current 3D structure model around Kyushu is effective for generating full waveforms, including surface waves with periods of about 10 s or longer. Based on these findings, we analyze the 2015 M7.1 event using the cross

  20. Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design

    Science.gov (United States)

    Ang, Chee Siang; Zaphiris, Panayiotis

    We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc) could potentially influence the characteristic of the social networks.
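The agent-based modeling loop described above — derive interaction rules empirically, implement them, then compare the simulated network to the observed one — can be sketched with a minimal growth rule. The preferential-by-activity rule, agent counts and seed below are illustrative assumptions, not the guild community's fitted rules.

```python
# Sketch: an ABM that grows a who-interacts-with-whom network; agents pick
# partners preferentially by past activity (an assumed rule), producing the
# skewed participation typical of online communities.
import random

def grow_network(n_agents=200, interactions=2000, seed=7):
    random.seed(seed)
    degree = [1] * n_agents          # smoothing so newcomers can be chosen
    for _ in range(interactions):
        a = random.randrange(n_agents)
        # preferential choice: busy agents attract more interactions
        b = random.choices(range(n_agents), weights=degree)[0]
        if a != b:
            degree[a] += 1
            degree[b] += 1
    return degree

degree = grow_network()
# Share of all interactions carried by the 10% most active agents
top_share = sum(sorted(degree, reverse=True)[:20]) / sum(degree)
print(round(top_share, 2))
```

Validation in the paper's sense would compare statistics like this concentration measure against the empirically observed guild network before using the model to vary community parameters.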

  1. Recorded earthquake responses from the integrated seismic monitoring network of the Atwood Building, Anchorage, Alaska

    Science.gov (United States)

    Celebi, M.

    2006-01-01

    An integrated seismic monitoring system with a total of 53 channels of accelerometers is now operating in, and at a nearby free-field site of, the 20-story steel-framed Atwood Building in highly seismic Anchorage, Alaska. The building has a single-story basement and a reinforced concrete foundation without piles. The monitoring system comprises a 32-channel structural array and a 21-channel site array. Accelerometers are deployed on 10 levels of the building to assess translational, torsional, and rocking motions, interstory drift (displacement) between selected pairs of adjacent floors, and average drift between floors. The site array, located approximately a city block from the building, comprises seven triaxial accelerometers, one at the surface and six in boreholes ranging in depth from 15 to 200 feet (~5-60 meters). The arrays have already recorded low-amplitude shaking responses of the building and the site caused by numerous earthquakes at distances ranging from tens to a couple of hundred kilometers. Data from an earthquake that occurred 186 km away trace the propagation of waves from the deepest borehole to the roof of the building in approximately 0.5 seconds. Fundamental structural frequencies [0.58 Hz (NS) and 0.47 Hz (EW)], low damping percentages (2-4%), mode coupling, and beating effects are identified. The fundamental site frequency at approximately 1.5 Hz is close to the second modal frequencies of the building (1.83 Hz NS and 1.43 Hz EW), which may cause resonance of the building. Additional earthquakes prove the repeatability of these characteristics; however, stronger shaking may alter these conclusions. © 2006, Earthquake Engineering Research Institute.

  2. Distributed dynamic simulations of networked control and building performance applications.

    Science.gov (United States)

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum energy consumption possible; such systems are generally referred to as Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling of two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.

  3. Modeling and Simulation Network Data Standards

    Science.gov (United States)

    2011-09-30

    …approaches. 2.3. JNAT. JNAT is a Web application that provides connectivity and network analysis capability. JNAT uses propagation models and low-fidelity… [Table: COMBATXXI Movement Logger Data Output Dictionary — fields include Geocentric Coordinates (GCC) heading, Geodetic Coordinates (GDC) heading, and Universal Transverse Mercator (UTM) heading.]

  4. Adaptive Importance Sampling Simulation of Queueing Networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.; Rubinstein, N.; Rubinstein, Reuven Y.

    2000-01-01

    In this paper, a method is presented for the efficient estimation of rare-event (overflow) probabilities in Jackson queueing networks using importance sampling. The method differs in two ways from methods discussed in most earlier literature: the change of measure is state-dependent, i.e., it is a
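The state-dependent change of measure introduced in this paper contrasts with the classic state-independent scheme, which can be shown in miniature: for an M/M/1 overflow probability, swapping the arrival and service rates makes the rare overflow path likely, and the likelihood ratio corrects the estimate. The rates, level and run count below are illustrative assumptions.

```python
# Sketch: importance sampling for P(M/M/1 queue hits level N before emptying,
# starting from level 1), using the swapped-rates change of measure.
import random

def overflow_prob_is(lam, mu, N, runs=20_000, seed=1):
    random.seed(seed)
    p = lam / (lam + mu)       # original probability the next event is an arrival
    p_is = mu / (lam + mu)     # swapped measure: overflow becomes the likely path
    total = 0.0
    for _ in range(runs):
        level, weight = 1, 1.0
        while 0 < level < N:
            if random.random() < p_is:
                level += 1
                weight *= p / p_is               # likelihood ratio of an up step
            else:
                level -= 1
                weight *= (1 - p) / (1 - p_is)   # likelihood ratio of a down step
        if level == N:
            total += weight                      # unbiased: weight x indicator
    return total / runs

lam, mu, N = 1.0, 2.0, 20
est = overflow_prob_is(lam, mu, N)
exact = (1 - mu / lam) / (1 - (mu / lam) ** N)   # gambler's-ruin formula
print(f"{est:.3e} vs {exact:.3e}")
```

With this particular swap every successful path carries the same weight, so the estimator variance is tiny — the Jackson-network setting of the paper is harder precisely because no single static swap works across the whole state space.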

  5. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques, and cost-time factors limit test beds to study wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs such as capturing power consumption at various levels of granularity, support for mobility, and environmental dynamics as well as the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  6. STUDY ON SUPPORTING FOR DRAWING UP THE BCP FOR URBAN EXPRESSWAY NETWORK USING BY TRAFFIC SIMULATION SYSTEM

    Science.gov (United States)

    Yamawaki, Masashi; Shiraki, Wataru; Inomo, Hitoshi; Yasuda, Keiichi

    The urban expressway network is an important infrastructure for executing disaster restoration. Therefore, it is necessary to draw up a BCP (Business Continuity Plan) that enables securing road users' safety, restoration of facilities, and so on. It is important that each urban expressway manager decide on and improve effective BCP countermeasures for when a disaster occurs by assuming various disaster situations. In this study, we develop a traffic simulation system that can reproduce various disaster situations and traffic actions, and examine methods supporting the drawing up of a BCP for an urban expressway network. For disasters beyond assumptions, such as a tsunami generated by a huge earthquake, we examine approaches to securing the safety of users and cars on the Hanshin Expressway network as well as on general roads. We aim to propose a tsunami countermeasure not considered in the current urban expressway BCP.

  7. Numerical simulations of earthquake effects on tunnels for generic nuclear waste repositories

    International Nuclear Information System (INIS)

    Wahi, K.K.; Trent, B.C.; Maxwell, D.E.; Pyke, R.M.; Young, C.; Ross-Brown, D.M.

    1980-12-01

    The objectives of this generic study were to use numerical modeling techniques to determine under what conditions seismic waves generated by an earthquake might cause instability to an underground opening, or cause fracturing and joint movement that would lead to an increase in the permeability of the rock mass. Three different rock types (salt, granite, and shale) were considered as host media for the repository located at a depth of 600 meters. Special material models were developed to account for the nonlinear material behavior of each rock type. The sensitivity analysis included variations in the in situ stress ratio, joint geometry, pore pressures, and the presence or absence of a fault. Three different sets of earthquake motions were used to excite the rock mass. The calculations were performed using the STEALTH codes in a three-stage process. It was concluded that the methodology is suitable for studying the effects of earthquakes on underground openings. In general, the study showed that moderate earthquakes (up to 0.41 g) did not cause instability of the tunnel or major fracturing of the rock mass. A rock-burst tremor with accelerations up to 0.95 g, however, was found to be amplified around the tunnel, and fracturing occurred as a result of the seismic loading in salt and granite. In shale, even moderate seismic loading resulted in tunnel collapse. Other questions appraised in the study include the stability of granite tunnels under various combinations of joint geometry and in situ stress states, and the overall stability of tunnels in shale subject to the thermomechanical loading conditions anticipated in an underground waste repository

  8. Simulating activation propagation in social networks using the graph theory

    Directory of Open Access Journals (Sweden)

    František Dařena

    2010-01-01

    Full Text Available The formation and analysis of social networks is now the object of intensive research. The objective of this paper is to represent social networks as graphs, apply graph theory to problems connected with studying network-like structures, and study the spreading activation algorithm as a means of analyzing those structures. The paper presents the process of modeling multidimensional networks by means of directed graphs with several characteristics. It also demonstrates the use of the spreading activation algorithm as a method for analyzing multidimensional networks, with a particular focus on recommender systems. The experiments showed that the choice of the algorithm's parameters is crucial, that some form of constraint should be included, and that the algorithm provides a stable environment for simulations with networks.
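
    The abstract above does not give the algorithm's details, but the idea can be sketched. The following is a minimal, hypothetical implementation of spreading activation over a weighted directed graph, with a decay factor and a propagation threshold standing in for the "constraint" the paper says is needed; all names and parameter values are illustrative, not taken from the paper.

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_steps=10):
    """Propagate activation through a weighted directed graph.

    graph: dict mapping node -> list of (neighbor, edge_weight) pairs.
    seeds: dict mapping seed node -> initial activation.
    Activation at or below `threshold` is not propagated further
    (a constraint of this kind keeps the process stable).
    """
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(max_steps):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in graph.get(node, []):
                spread = act * weight * decay
                if spread > threshold:
                    next_frontier[neighbor] += spread
        if not next_frontier:
            break
        for node, act in next_frontier.items():
            activation[node] += act
        frontier = next_frontier
    return dict(activation)

# Toy recommender graph: a user activates an item, which activates
# related items with decreasing strength.
graph = {
    "user": [("item_a", 1.0)],
    "item_a": [("item_b", 0.8), ("item_c", 0.3)],
    "item_b": [("item_c", 0.5)],
}
result = spread_activation(graph, {"user": 1.0})
```

    In a recommender setting, the items with the highest residual activation that the user has not yet interacted with would be the candidate recommendations.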

  9. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view, the following restrictions have been made: 1) Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. 2) Amongst numerous training algorithms, only the Recursive Prediction Error Method using a Gauss-Newton search direction is applied. 3) Amongst numerous model types often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing an input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent system identification.

  10. Reactor experiments, workshops, and human resource development education simulating the Great East Japan Earthquake

    International Nuclear Information System (INIS)

    Horiguchi, Tetsuo; Yamamoto, Tomosada

    2012-01-01

    Kinki University Atomic Energy Research Institute has implemented a social education program, including reactor experiments and training sessions for junior and senior high school teachers, since 1987, and in recent years an education program for ordinary citizens. However, the Great East Japan Earthquake has made it necessary to consider not only the dissemination of accurate knowledge, but also how to respond to public anxiety about nuclear power. This paper explains the contents of the social contribution activities and workshops conducted at Kinki University Atomic Energy Research Institute after the Great East Japan Earthquake and the Fukushima Daiichi Nuclear Power Station accident. Among the activities carried out in addition to training sessions, it describes telephone consultations about nuclear power and earthquake reconstruction assistance advisory work in Kawamata Town, Date-gun, Fukushima Prefecture. As workshop support, it reports on human resource development education in the nuclear field at the university, activities at the workshops for junior/senior high school teachers and the general public, and a questionnaire survey conducted at the workshops. (A.O.)

  11. Artificial neural network simulation of battery performance

    Energy Technology Data Exchange (ETDEWEB)

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, its myriad chemical and physical processes, including their interactions, are much more difficult to represent accurately. Within this category are the diffusive and solubility characteristics of individual species, the reaction kinetics and mechanisms of primary chemical species as well as intermediates, and the growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has been only partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  12. Dynamic Interactions for Network Visualization and Simulation

    Science.gov (United States)

    2009-03-01


  13. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexities. The simulation results show that a group of robots can demonstrate highly collaborative behavior on a complex terrain. This study could potentially provide a software simulation platform for testing the individual and group capabilities of robots before they are designed and manufactured. The results of the project therefore have the potential to reduce the cost and improve the efficiency of robot design and building.
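
    The core of a Kohonen network (self-organizing map) is simple enough to sketch. Below is a minimal, hypothetical 1-D map trained on 2-D "robot position" data: on each step the best-matching unit and its lattice neighbors move toward the input, with learning rate and neighborhood radius decaying over time. The function name, parameters, and toy data are illustrative, not taken from the paper.

```python
import math
import random

def train_som(data, n_units=4, epochs=200, lr0=0.5, radius0=2.0, seed=42):
    """Train a 1-D Kohonen self-organizing map on 2-D inputs."""
    rng = random.Random(seed)
    weights = [[rng.random(), rng.random()] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)            # learning rate decays to 0
        radius = radius0 * (1.0 - frac) + 1e-9  # neighborhood shrinks
        x = rng.choice(data)
        # Best-matching unit: the unit whose weight vector is closest.
        bmu = min(range(n_units),
                  key=lambda i: (weights[i][0] - x[0]) ** 2
                              + (weights[i][1] - x[1]) ** 2)
        for i in range(n_units):
            # Gaussian neighborhood on the 1-D lattice.
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
            weights[i][0] += lr * h * (x[0] - weights[i][0])
            weights[i][1] += lr * h * (x[1] - weights[i][1])
    return weights

# Two clusters of positions; the map should spread units across both.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]
weights = train_som(data)
```

    After training, each unit's weight vector acts as a prototype of a region of the input space, which is the sense in which the map "learns" the terrain without supervision.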

  14. Promoting Simulation Globally: Networking with Nursing Colleagues Across Five Continents.

    Science.gov (United States)

    Alfes, Celeste M; Madigan, Elizabeth A

    Simulation education is gaining momentum internationally and may provide the opportunity to enhance clinical education while disseminating evidence-based practice standards for clinical simulation and learning. There is a need to develop a cohesive leadership group that fosters support, networking, and sharing of simulation resources globally. The Frances Payne Bolton School of Nursing at Case Western Reserve University has had the unique opportunity to establish academic exchange programs with schools of nursing across five continents. Although the joint and mutual simulation activities have been extensive, each international collaboration has also provided insight into the innovations developed by global partners.

  15. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    Full Text Available HSimulator is a multithreaded simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for efficient simulation of the models while ensuring the exact simulation of the subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other simulators considered. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
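
    HSimulator itself is a Java tool; as a language-neutral illustration of the exact stochastic strategy that hybrid methods like HRSSA build on, here is a minimal Gillespie SSA loop for a mass-action network. The function name, data layout, and toy reaction are illustrative assumptions, not HSimulator's API.

```python
import random

def gillespie_ssa(x0, reactions, rates, t_end, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of a mass-action network.

    x0:        initial copy numbers, e.g. {"A": 100, "B": 0}
    reactions: list of (reactants, products) pairs of species lists
    rates:     per-reaction stochastic rate constants
    Returns the state at time t_end (or when no reaction can fire).
    """
    rng = random.Random(seed)
    x = dict(x0)
    t = 0.0
    while True:
        # Mass-action propensities: rate * product of reactant counts.
        props = []
        for (reactants, _), c in zip(reactions, rates):
            a = c
            for s in reactants:
                a *= x[s]
            props.append(a)
        a0 = sum(props)
        if a0 == 0:
            return x
        t += rng.expovariate(a0)       # exponential waiting time
        if t >= t_end:
            return x
        # Pick a reaction with probability proportional to its propensity.
        r = rng.random() * a0
        for (reactants, products), a in zip(reactions, props):
            r -= a
            if r <= 0:
                for s in reactants:
                    x[s] -= 1
                for s in products:
                    x[s] += 1
                break

# Irreversible isomerization A -> B with rate constant 1.0.
final = gillespie_ssa({"A": 50, "B": 0}, [(["A"], ["B"])], [1.0], t_end=100.0)
```

    A hybrid scheme such as HRSSA would apply this exact loop only to the slow reactions and integrate the fast ones deterministically, which is where the speedup comes from.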

  16. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activate a passive network is done by introducing

  17. Nucleation and arrest of slow slip earthquakes: mechanisms and nonlinear simulations using realistic fault geometries and heterogeneous medium properties

    Science.gov (United States)

    Alves da Silva Junior, J.; Frank, W.; Campillo, M.; Juanes, R.

    2017-12-01

    Current models for slow slip earthquakes (SSE) assume a simplified fault embedded in a homogeneous half-space. In these models SSE events nucleate at the transition from velocity strengthening (VS) to velocity weakening (VW) down-dip from the trench and propagate towards the base of the seismogenic zone, where high effective normal stress is assumed to arrest slip. Here, we investigate SSE nucleation and arrest using quasi-static finite element simulations, with rate-and-state friction, on a domain with heterogeneous properties and realistic fault geometry. We use the fault geometry of the Guerrero Gap in the Cocos subduction zone, where SSE events occur every 4 years, as a proxy for a subduction zone. Our model is calibrated using surface displacements from GPS observations. We apply boundary conditions according to the plate convergence rate and impose a depth-dependent pore pressure on the fault. Our simulations indicate that the fault geometry and elastic properties of the medium play a key role in the arrest of SSE events at the base of the seismogenic zone. SSE arrest occurs due to aseismic deformations of the domain that result in areas with elevated effective stress. SSE nucleation occurs at the transition from VS to VW and propagates as a crack-like expansion, with increasing nucleation length prior to dynamic instability. Our simulations encompassing multiple seismic cycles indicate SSE interval times between 1 and 10 years and, importantly, a systematic increase of rupture area prior to dynamic instability, followed by a hiatus in SSE occurrence. We hypothesize that these SSE characteristics, if confirmed by GPS observations in different subduction zones, can add to the understanding of the nucleation of large earthquakes in the seismogenic zone.

  18. SELANSI: a toolbox for simulation of stochastic gene regulatory networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2018-03-01

    Gene regulation is inherently stochastic. In many applications concerning Systems and Synthetic Biology, such as the reverse engineering and the de novo design of genetic circuits, stochastic effects (yet potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need for tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding Chemical Master Equation with a partial integro-differential equation that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self- and cross-regulation, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es. © The Author(s) 2017. Published by Oxford University Press.

  19. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems (such as computers, network routers and other network equipment), computer emulations (e.g., virtual machines) and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  20. Simulation Of Wireless Networked Control System Using TRUETIME And MATLAB

    Directory of Open Access Journals (Sweden)

    Nyan Phyo Aung

    2015-08-01

    Full Text Available Wireless networked control systems (WNCS) have attracted increasing research interest over the past decade. A WNCS is composed of a group of distributed sensors and actuators that communicate through wireless links to achieve distributed sensing and actuation tasks. This is particularly relevant to the areas of communication, control and computing, where the successful design of a WNCS brings new challenges to researchers. The primary motivation of this survey paper is to examine the design issues and to provide directions for the successful simulation and implementation of a WNCS. The paper also reviews some simulation tools for such systems.

  1. Simulation of nonlinear random vibrations using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Paez, T.L.; Tucker, S.; O'Gorman, C.

    1997-02-01

    The simulation of mechanical system random vibrations is important in structural dynamics, but it is particularly difficult when the system under consideration is nonlinear. Artificial neural networks provide a useful tool for the modeling of nonlinear systems; however, such modeling may be inefficient or insufficiently accurate when the system under consideration is complex. This paper shows that there are several transformations that can be used to uncouple and simplify the components of motion of a complex nonlinear system, thereby making its modeling and random vibration simulation, via component modeling with artificial neural networks, a much simpler problem. A numerical example is presented.

  2. A Mw 6.3 earthquake scenario in the city of Nice (southeast France): ground motion simulations

    Science.gov (United States)

    Salichon, Jérome; Kohrs-Sansorny, Carine; Bertrand, Etienne; Courboulex, Françoise

    2010-07-01

    The southern Alps-Ligurian basin junction is one of the most seismically active zones of western Europe. A constant microseismicity and moderate-size events (magnitude 3.5 and larger) regularly occur there. We study the case of an offshore Mw 6.3 earthquake located at the place where two moderate-size events (Mw 4.5) occurred recently and where a morphotectonic feature has been detected by a bathymetric survey. We used a stochastic empirical Green's function (EGF) summation method to produce a population of realistic accelerograms at rock and soil sites in the city of Nice. The ground motion simulations are calibrated at a rock site with a set of ground motion prediction equations (GMPEs) in order to estimate a reasonable stress-drop ratio between the February 25th, 2001, Mw 4.5 event, taken as an EGF, and the target earthquake. Our results show that the combination of the GMPE and EGF techniques is an interesting tool for site-specific strong ground motion estimation.

  3. Earthquake Cycle Simulations with Rate-and-State Friction and Linear and Nonlinear Viscoelasticity

    Science.gov (United States)

    Allison, K. L.; Dunham, E. M.

    2016-12-01

    We have implemented a parallel code that simultaneously models both rate-and-state friction on a strike-slip fault and off-fault viscoelastic deformation throughout the earthquake cycle in 2D. Because we allow fault slip to evolve with a rate-and-state friction law and do not impose the depth of the brittle-to-ductile transition, we are able to address: the physical processes limiting the depth of large ruptures (with hazard implications); the degree of strain localization with depth; the relative partitioning of fault slip and viscous deformation in the brittle-to-ductile transition zone; and the relative contributions of afterslip and viscous flow to postseismic surface deformation. The method uses a discretization that accommodates variable off-fault material properties, depth-dependent frictional properties, and linear and nonlinear viscoelastic rheologies. All phases of the earthquake cycle are modeled, allowing the model to spontaneously generate earthquakes, and to capture afterslip and postseismic viscous flow. We compare the effects of a linear Maxwell rheology, often used in geodetic models, with those of a nonlinear power-law rheology, which laboratory data indicate more accurately represents the lower crust and upper mantle. The viscosity of the Maxwell rheology is set by power-law rheological parameters with an assumed geotherm and strain rate, producing a viscosity that decays exponentially with depth and is constant in time. In contrast, the power-law rheology evolves an effective viscosity that is a function of the temperature profile and the stress state, and therefore varies both spatially and temporally. We will also integrate the energy equation for the thermomechanical problem, capturing frictional heat generation on the fault and off-fault viscous shear heating, and allowing these in turn to alter the effective viscosity.

  4. The Italian Project S2 - Task 4:Near-fault earthquake ground motion simulation in the Sulmona alluvial basin

    Science.gov (United States)

    Stupazzini, M.; Smerzini, C.; Cauzzi, C.; Faccioli, E.; Galadini, F.; Gori, S.

    2009-04-01

    Recently the Italian Department of Civil Protection (DPC), in cooperation with Istituto Nazionale di Geofisica e Vulcanologia (INGV), has promoted the 'S2' research project (http://nuovoprogettoesse2.stru.polimi.it/) aimed at the design, testing and application of an open-source code for seismic hazard assessment (SHA). The tool envisaged will likely differ in several important respects from an existing international initiative (OpenSHA, Field et al., 2003). In particular, while "the OpenSHA collaboration model envisions scientists developing their own attenuation relationships and earthquake rupture forecasts, which they will deploy and maintain in their own systems", the main purpose of the S2 project is to provide a flexible computational tool for SHA, primarily suited to the needs of DPC, which are not necessarily scientific needs. Within S2, a crucial issue is to make alternative approaches available to quantify the ground motion, with emphasis on the near-field region. The SHA architecture envisaged will allow for the use of ground motion descriptions other than those yielded by empirical attenuation equations, for instance user-generated motions provided by deterministic source and wave propagation simulations. In this contribution, after a brief presentation of Project S2, we illustrate some preliminary 3D scenario simulations performed in the alluvial basin of Sulmona (Central Italy), as an example of the type of descriptions that can be handled in the future SHA architecture. In detail, we selected some seismogenic sources (from the DISS database), believed to be responsible for a number of destructive historical earthquakes, and derived from them a family of simplified geometrical and mechanical source models spanning a reasonable range of parameters, so that the extent of the main uncertainties can be covered. Purely deterministic simulations were then performed in the low-frequency range.

  5. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamics systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique was tested on different flight/space dynamics models and showed promising results.

  7. Social Network Mixing Patterns In Mergers & Acquisitions - A Simulation Experiment

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2011-01-01

    Full Text Available In the contemporary world of global business and continuously growing competition, organizations tend to use mergers and acquisitions to enforce their position on the market. The future organization's design is a critical success factor in such undertakings. The field of social network analysis can enhance our understanding of these processes, as it lets us reason about the development of networks regardless of their origin. The analysis of mixing patterns is particularly useful, as it provides insight into how nodes in a network connect with each other. We hypothesize that organizational networks with compatible mixing patterns will be integrated more successfully. After conducting a simulation experiment, we suggest an integration model based on the analysis of network assortativity. The model can be a guideline for organizational integration, such as occurs in mergers and acquisitions.
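
    The mixing pattern of a network is commonly summarized by Newman's degree assortativity: the Pearson correlation of the degrees at either end of each edge. A minimal self-contained sketch (the function name and toy network are illustrative, not from the paper):

```python
def degree_assortativity(edges):
    """Pearson correlation of the degrees at the two ends of each edge
    (degree assortativity coefficient for an undirected network)."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Each undirected edge contributes both (du, dv) and (dv, du).
    pairs = [(degree[u], degree[v]) for u, v in edges]
    pairs += [(d2, d1) for d1, d2 in pairs]
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in pairs) / n
    sx = (sum((p[0] - mx) ** 2 for p in pairs) / n) ** 0.5
    sy = (sum((p[1] - my) ** 2 for p in pairs) / n) ** 0.5
    return cov / (sx * sy)

# A star network is maximally disassortative: the hub links only to leaves.
star = [(0, i) for i in range(1, 6)]
r = degree_assortativity(star)
```

    A positive coefficient means hubs link to hubs (assortative mixing), a negative one means hubs link to low-degree nodes; comparing the coefficients of two organizational networks is one way to judge the compatibility the paper hypothesizes about.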

  8. Simulated annealing for tensor network states

    International Nuclear Information System (INIS)

    Iblisdir, S

    2014-01-01

    Markov chains for probability distributions related to matrix product states and one-dimensional Hamiltonians are introduced. With appropriate ‘inverse temperature’ schedules, these chains can be combined into a simulated annealing scheme for ground states of such Hamiltonians. Numerical experiments suggest that a linear, i.e., fast, schedule is possible in non-trivial cases. A natural extension of these chains to two-dimensional settings is next presented and tested. The obtained results compare well with Euclidean evolution. The proposed Markov chains are easy to implement and are inherently sign problem free (even for fermionic degrees of freedom). (paper)
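
    The annealing loop itself is generic, independent of the tensor-network setting of the paper. A minimal sketch of simulated annealing with a geometric cooling schedule; the function names, parameters, and toy energy landscape are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=2.0, t_min=1e-3,
                        alpha=0.95, steps_per_t=50, seed=7):
    """Generic simulated annealing with a geometric cooling schedule.

    Uphill moves are accepted with probability exp(-dE/T), so the chain
    can escape local minima; as T -> 0 it freezes into a low-energy state.
    """
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            y = neighbor(x, rng)
            ey = energy(y)
            if ey <= e or rng.random() < math.exp(-(ey - e) / t):
                x, e = y, ey
                if e < best_e:
                    best_x, best_e = x, e
        t *= alpha  # geometric 'inverse temperature' schedule
    return best_x, best_e

# Toy integer landscape with a local minimum at x=-2 (energy 1)
# and a global minimum at x=5 (energy 0), separated by a barrier.
def energy(x):
    return min((x + 2) ** 2 + 1, (x - 5) ** 2)

best_x, best_e = simulated_annealing(
    energy, lambda x, rng: x + rng.choice([-1, 1]), x0=-2)
```

    In the paper's setting the state would be a matrix product state sampled by the proposed Markov chains and the energy would be the Hamiltonian expectation; the schedule question (linear vs. slower) is about how fast `t` may be lowered while still reaching the ground state.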

  9. Transforming network simulation data to semantic data for network attack planning

    CSIR Research Space (South Africa)

    Chan, Ke Fai Peter

    2017-03-01

    Full Text Available A study was performed using the Common Open Research Emulator (CORE) to generate the necessary network simulation data. The simulation data were analysed, and then transformed into linked data. The result of the transformation is a data file that adheres...

  10. Computer simulation of randomly cross-linked polymer networks

    International Nuclear Information System (INIS)

    Williams, Timothy Philip

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task-parallel implementations of the lattice Monte Carlo Bond Fluctuation model and the Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recently published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneities were observed in the swollen model networks and were analysed by considering constituent substructures of varying size. The network connectivity determined the length scales at which the majority of the substructure unfolding process occurred. Simulated stress-strain curves and diffraction patterns for uniaxially deformed swollen networks were found to be consistent with experimental findings. Analysis of the relaxation dynamics of various network components revealed a dramatic slowdown due to the network connectivity. The cross-link junction spatial fluctuations for networks close to the sol-gel threshold were observed to be at least comparable with the phantom network prediction. The dangling chain ends were found to display the largest characteristic relaxation time. (author)

  11. Exploring geological and socio-demographic factors associated with under-five mortality in the Wenchuan earthquake using neural network model.

    Science.gov (United States)

    Hu, Yi; Wang, Jinfeng; Li, Xiaohong; Ren, Dan; Driskell, Luke; Zhu, Jun

    2012-01-01

    On 12 May 2008, a devastating earthquake occurred in Sichuan Province, China, taking tens of thousands of lives and destroying the homes of millions of people. Among the large number of dead or missing were children, particularly children aged less than five years old, a fact which drew significant media attention. To obtain information that could aid further studies and future preventive measures, a neural network model was proposed to explore geological and socio-demographic factors associated with earthquake-related child mortality. Sensitivity analysis showed that topographic slope (mean 35.76%), geomorphology (mean 24.18%), earthquake intensity (mean 13.68%), and average income (mean 11%) contributed most to child mortality. These findings could provide clues to researchers for further studies and help policy makers decide how and where preventive measures and corresponding policies should be implemented in the reconstruction of communities.

  12. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice in large-scale transmission network analysis, distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in distribution networks, it has become necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied to a benchmark distribution network via simulations.

  13. A program PULSYN01 for wide-band simulation of source radiation from a finite earthquake source/fault

    International Nuclear Information System (INIS)

    Gusev, A.A.

    2001-12-01

    The purpose of the program PULSYN01 is to provide a realistic wideband source-side input for the calculation of earthquake ground motion. The source is represented as a grid of point subsources, and their seismic moment rate time functions are generated by considering each of them as a realization (sample function) of a non-stationary random process. The model is intended for use at receiver-to-fault distances from the far field down to as little as 10-20% of the fault width. Combined with an adequate Green's function synthesizer, PULSYN01 can be used for the assessment of possible ground motion and seismic hazard in many ways, including scenario event simulation, parametric studies, and eventually stochastic hazard calculations.
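
    The abstract's idea of summing point subsources can be illustrated with a much-simplified sketch: a line of subsources, each triggered as a rupture front passes (plus a small random delay), radiating a simple pulse, with the sum normalized to unit seismic moment. This is a deterministic triangular pulse standing in for the paper's non-stationary random-process realizations; every name and parameter value here is an illustrative assumption, not PULSYN01's actual formulation.

```python
import random

def composite_source(n_sub=20, rupture_velocity=3.0, fault_length=40.0,
                     rise_time=1.0, dt=0.05, t_total=30.0, seed=3):
    """Sum moment-rate pulses from a line of point subsources.

    Each subsource starts when the rupture front (moving at
    `rupture_velocity`, km/s) reaches it, plus a small random delay,
    and radiates a triangular moment-rate pulse of duration `rise_time`.
    The summed function is normalized to unit total moment.
    """
    rng = random.Random(seed)
    n_t = int(t_total / dt)
    rate = [0.0] * n_t
    for k in range(n_sub):
        onset = (k * fault_length / n_sub) / rupture_velocity \
                + rng.uniform(0.0, 0.3)
        for i in range(n_t):
            tau = i * dt - onset
            if 0.0 <= tau <= rise_time:
                # Triangular pulse peaking at rise_time / 2.
                rate[i] += 1.0 - abs(2.0 * tau / rise_time - 1.0)
    total = sum(rate) * dt
    return [r / total for r in rate]  # normalize to unit moment

rate = composite_source()
```

    Convolving such a summed moment-rate function with point-source Green's functions for each subsource location is the general scheme by which a source model of this kind feeds a ground-motion synthesizer.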

  14. Numerical simulation for gas-liquid two-phase flow in pipe networks

    International Nuclear Information System (INIS)

    Li Xiaoyan; Kuang Bo; Zhou Guoliang; Xu Jijun

    1998-01-01

    Single-phase flow models cannot directly represent the behaviour of complex pipe networks in which gas-liquid two-phase flow alters the pressure drop and void fraction. Fluid network theory and numerical simulation techniques are therefore applied to simulate and compute two-phase flow in pipe networks. The simulation results show that the flow resistance distribution in a two-phase pipe network is non-linear.

  15. Dynamic simulation of a steam generator by neural networks

    International Nuclear Information System (INIS)

    Masini, R.; Padovani, E.; Ricotti, M.E.; Zio, E.

    1999-01-01

    Numerical simulation by computers of the dynamic evolution of complex systems and components is a fundamental phase of any modern engineering design activity. This is of particular importance for risk-based design projects which require that the system behavior be analyzed under several and often extreme conditions. The traditional methods of simulation typically entail long, iterative processes which lead to large simulation times, often exceeding the transients' real time. Artificial neural networks (ANNs) may be exploited in this context, their advantages residing mainly in the speed of computation, in the capability of generalizing from few examples, in the robustness to noisy and partially incomplete data and in the capability of performing empirical input-output mapping without complete knowledge of the underlying physics. In this paper we present a novel approach to dynamic simulation by ANNs based on a superposition scheme in which a set of networks are individually trained, each one to respond to a different input forcing function. The dynamic simulation of a steam generator is considered as an example to show the potential of this tool and to point out the difficulties and crucial issues which typically arise when attempting to establish an efficient neural network simulator. The network system is structured to feed back, at each time step, a portion of the past evolution of the transient, which allows good reproduction even of non-linear dynamic behaviors. A useful characteristic of the approach is that the modularization of the training substantially reduces its burden and makes the neural simulation tool portable. (orig.)
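
    The superposition scheme rests on summing the responses of models trained on individual forcing functions. A minimal sketch of that idea, using a first-order linear system as a hypothetical stand-in for each trained network (coefficients `a`, `b` and the forcings are invented; a real steam generator is non-linear, which the paper's feedback of past states addresses):

```python
def simulate(a, b, u):
    """First-order linear system y[k+1] = a*y[k] + b*u[k], y[0] = 0."""
    y = [0.0]
    for k in range(len(u)):
        y.append(a * y[-1] + b * u[k])
    return y

a, b, n = 0.9, 0.1, 50
u1 = [1.0] * n                                    # step forcing
u2 = [0.5 if k < 10 else 0.0 for k in range(n)]   # pulse forcing

y1 = simulate(a, b, u1)      # "network" trained on forcing 1 alone
y2 = simulate(a, b, u2)      # "network" trained on forcing 2 alone
combined = simulate(a, b, [p + q for p, q in zip(u1, u2)])
superposed = [p + q for p, q in zip(y1, y2)]      # modular superposition
```

    For this linear system the superposed and directly simulated responses coincide exactly, which is the property the modular training scheme exploits.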

  16. Improving a Computer Networks Course Using the Partov Simulation Engine

    Science.gov (United States)

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  17. Fracture Network Modeling and GoldSim Simulation Support

    OpenAIRE

    杉田 健一郎; Dershowiz, W.

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aspo Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA).

  18. Networks in disasters: Multidisciplinary communication and coordination in response and recovery to the 2010 Haiti Earthquake (Invited)

    Science.gov (United States)

    McAdoo, B. G.; Augenstein, J.; Comfort, L.; Huggins, L.; Krenitsky, N.; Scheinert, S.; Serrant, T.; Siciliano, M.; Stebbins, S.; Sweeney, P.; University Of Pittsburgh Haiti Reconnaissance Team

    2010-12-01

    The 12 January 2010 earthquake in Haiti demonstrates the necessity of understanding information communication between disciplines during disasters. Armed with data from a variety of sources, from geophysics to construction, water and sanitation to education, decision makers can initiate well-informed policies to reduce the risk from future hazards. At the core of this disaster was a natural hazard that occurred in an environmentally compromised country. The earthquake itself was not solely responsible for the magnitude of the disaster: poor construction practices precipitated by extreme poverty, two centuries of post-colonial environmental degradation, and a history of dysfunctional government shoulder much of the responsibility. Future policies must take into account the geophysical reality that future hazards are inevitable and may occur within the very near future, and how various institutions will respond to the stressors. As the global community comes together in reconstruction efforts, it is necessary for the various actors to take into account what vulnerabilities were exposed by the earthquake, most vividly seen during the initial response to the disaster. Responders are forced to prioritize resources designated for building collapse and infrastructure damage, delivery of critical services such as emergency medical care, and delivery of food and water to those in need. Past disasters have shown that communication lapses between the response and recovery phases result in many of the exposed vulnerabilities not being adequately addressed, and the recovery hence fails to bolster compromised systems. The response reflects the basic characteristics of a Complex Adaptive System, where new agents emerge and priorities within existing organizations shift to deal with new information. 
To better understand how information is shared between actors during this critical transition, we are documenting how information is communicated between critical sectors during the

  19. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    Science.gov (United States)

    Chen, S. E.; Yu, E.; Bhaskaran, A.; Chowdhury, F. R.; Meisenhelter, S.; Hutton, K.; Given, D.; Hauksson, E.; Clayton, R. W.

    2011-12-01

    Currently, the SCEDC archives continuous and triggered data from nearly 8400 data channels from 425 SCSN recorded stations, processing and archiving an average of 6.4 TB of continuous waveforms and 12,000 earthquakes each year. The SCEDC provides public access to these earthquake parametric and waveform data through its website www.data.scec.org and through client applications such as STP and DHI. This poster will describe the most significant developments at the SCEDC during 2011.
    New website design:
    - The SCEDC has revamped its website. The changes make it easier for users to search the archive and discover updates and new content. These changes also improve our ability to manage and update the site.
    New data holdings:
    - Post-processing on the El Mayor Cucapah 7.2 sequence continues. To date 11,847 events have been reviewed. Updates are available in the earthquake catalog immediately.
    - A double-difference catalog (Hauksson et al. 2011) spanning 1981 to 6/30/11 will be available for download at www.data.scec.org and available via STP.
    - A focal mechanism catalog determined by Yang et al. 2011 is available for distribution at www.data.scec.org.
    - Waveforms from Southern California NetQuake stations are now being stored in the SCEDC archive and available via STP as event-associated waveforms. Amplitudes from these stations are also being stored in the archive and used by ShakeMap.
    - As part of a NASA/AIST project in collaboration with JPL and SIO, the SCEDC will receive real-time 1 sps streams of GPS displacement solutions from the California Real Time Network (http://sopac.ucsd.edu/projects/realtime; Genrich and Bock, 2006, J. Geophys. Res.). These channels will be archived at the SCEDC as miniSEED waveforms, which can then be distributed to the user community via applications such as STP.
    Improvements in the user tool STP:
    - STP sac output now includes picks from the SCSN.
    New archival methods:
    - The SCEDC is exploring the feasibility of archiving and distributing

  20. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

    The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.

  1. Assessment of earthquake-triggered landslide susceptibility in El Salvador based on an Artificial Neural Network model

    Directory of Open Access Journals (Sweden)

    M. J. García-Rodríguez

    2010-06-01

    This paper presents an approach for assessing earthquake-triggered landslide susceptibility using artificial neural networks (ANNs). The computational method used for the training process is a back-propagation learning algorithm. It is applied to El Salvador, one of the most seismically active regions in Central America, where the last severe destructive earthquakes occurred on 13 January 2001 (Mw 7.7) and 13 February 2001 (Mw 6.6). The first one triggered more than 600 landslides (including the most tragic, the Las Colinas landslide) and killed at least 844 people.

    The ANN is designed and programmed to develop landslide susceptibility analysis techniques at a regional scale. This approach uses an inventory of landslides and different parameters of slope instability: slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness. The information obtained from the ANN is then used by a Geographic Information System (GIS) to map the landslide susceptibility. In a previous work, a Logistic Regression (LR) was analysed with the same parameters considered in the ANN as independent variables and the occurrence or non-occurrence of landslides as dependent variables. As a result, the logistic approach determined the importance of terrain roughness and soil type as key factors within the model. The results of the landslide susceptibility analysis with ANN are checked using landslide location data. These results show a high concordance between the landslide inventory and the high susceptibility estimated zone. Finally, a comparative analysis of the ANN and LR models is made. The advantages and disadvantages of both approaches are discussed using Receiver Operating Characteristic (ROC) curves.

  2. Assessment of earthquake-triggered landslide susceptibility in El Salvador based on an Artificial Neural Network model

    Science.gov (United States)

    García-Rodríguez, M. J.; Malpica, J. A.

    2010-06-01

    This paper presents an approach for assessing earthquake-triggered landslide susceptibility using artificial neural networks (ANNs). The computational method used for the training process is a back-propagation learning algorithm. It is applied to El Salvador, one of the most seismically active regions in Central America, where the last severe destructive earthquakes occurred on 13 January 2001 (Mw 7.7) and 13 February 2001 (Mw 6.6). The first one triggered more than 600 landslides (including the most tragic, Las Colinas landslide) and killed at least 844 people. The ANN is designed and programmed to develop landslide susceptibility analysis techniques at a regional scale. This approach uses an inventory of landslides and different parameters of slope instability: slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness. The information obtained from ANN is then used by a Geographic Information System (GIS) to map the landslide susceptibility. In a previous work, a Logistic Regression (LR) was analysed with the same parameters considered in the ANN as independent variables and the occurrence or non-occurrence of landslides as dependent variables. As a result, the logistic approach determined the importance of terrain roughness and soil type as key factors within the model. The results of the landslide susceptibility analysis with ANN are checked using landslide location data. These results show a high concordance between the landslide inventory and the high susceptibility estimated zone. Finally, a comparative analysis of the ANN and LR models are made. The advantages and disadvantages of both approaches are discussed using Receiver Operating Characteristic (ROC) curves.
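
    The training procedure these two records describe, gradient-based learning of a susceptibility score from slope-instability parameters, can be sketched in its simplest form: a single sigmoid neuron trained by gradient descent (the degenerate case of back-propagation). The two-feature toy inventory below is synthetic and purely illustrative:

```python
import math, random

def train_logistic(data, labels, lr=0.5, epochs=500):
    """Single-neuron network trained by gradient descent:
    p = sigmoid(w . x + b), cross-entropy loss."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - t                      # dLoss/dz for cross-entropy
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Synthetic inventory: [slope_gradient, terrain_roughness] -> landslide (1) or not (0)
random.seed(3)
data, labels = [], []
for _ in range(200):
    s, r = random.random(), random.random()
    data.append([s, r])
    labels.append(1 if s + r > 1.0 else 0)

w, b = train_logistic(data, labels)
predict = lambda x: 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```

    A real back-propagation model adds hidden layers and many more predictor variables, but the gradient update per weight has the same shape; the fitted `predict` maps any grid cell's parameters to a susceptibility score for GIS mapping.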

  3. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of models for individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected

  4. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting individual components to each other. The validity of models for individual components and the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146 and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. The noise imposed on the input parameters of the neural network deteriorates the power estimation capability whereas the reactivity estimation capability is not significantly affected.

  5. Investigation of Ionospheric Anomalies related to moderate Romanian earthquakes occurred during last decade using VLF/LF INFREP and GNSS Global Networks

    Science.gov (United States)

    Moldovan, Iren-Adelina; Oikonomou, Christina; Haralambous, Haris; Nastase, Eduard; Emilian Toader, Victorin; Biagi, Pier Francesco; Colella, Roberto; Toma-Danila, Dragos

    2017-04-01

    Ionospheric TEC (Total Electron Content) variations and Low Frequency (LF) signal amplitude data prior to five moderate earthquakes (Mw≥5) that occurred in Romania, in the Vrancea crustal and subcrustal seismic zones, during the last decade were analyzed using observations from the Global Navigation Satellite System (GNSS) and the European INFREP (International Network for Frontier Research on Earthquake Precursors) networks, respectively, aiming to detect potential ionospheric anomalies related to these events and to describe their characteristics. For this, spectral analysis was applied to the TEC data and the terminator time method to the VLF/LF data. It was found that TEC perturbations appeared a few days (1-7) up to a few hours before the events, lasting around 2-3 hours, with periods of 20 and 3-5 minutes, which could be associated with the impending earthquakes. In addition, in all three events the sunrise terminator times were delayed by approximately 20-40 min a few days prior to and during the earthquake day. Acknowledgments: This work was partially supported by the Partnership in Priority Areas Program - PNII, under MEN-UEFISCDI, DARING Project no. 69/2014 and the Nucleu Program - PN 16-35, Project no. 03 01
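
    The spectral-analysis step, finding oscillation periods of a few tens of minutes in a TEC time series, can be sketched with a plain discrete Fourier transform. The synthetic series below (one-minute samples with an injected 20-minute oscillation) is invented for illustration; real analyses would detrend and window the data first:

```python
import cmath, math

def dft_power(x):
    """Magnitude spectrum of a real series (DC removed), harmonics k = 1..n/2-1."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(1, n // 2)]

# Synthetic TEC series: 180 one-minute samples with a 20-minute oscillation
n = 180
tec = [30.0 + 0.5 * math.sin(2.0 * math.pi * t / 20.0) for t in range(n)]
power = dft_power(tec)
k_peak = power.index(max(power)) + 1   # dominant harmonic (list starts at k=1)
period = n / k_peak                    # recovered period in minutes
```

    The dominant harmonic lands at k = 9, recovering the 20-minute period, which is the kind of signature reported as a pre-seismic TEC perturbation.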

  6. Simulation of Attacks for Security in Wireless Sensor Network.

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.

  7. Simulation of Attacks for Security in Wireless Sensor Network

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-01-01

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710
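
    The simulator's core use case, quantifying the impact of an attack on node power consumption, can be sketched with a toy duty-cycle model. The energy costs and the jamming-style attacker (which forces retransmissions) are invented parameters, not values from the paper:

```python
def simulate_node(rounds, tx_cost=2.0, idle_cost=0.1, attack=None):
    """Energy drained by one sensor node over `rounds` duty cycles.
    `attack` optionally returns extra transmissions forced on the node
    in a given round (e.g. retries caused by jamming)."""
    energy = 0.0
    for r in range(rounds):
        tx = 1                    # one report per round
        if attack:
            tx += attack(r)       # attacker-induced retransmissions
        energy += tx * tx_cost + idle_cost
    return energy

baseline = simulate_node(1000)
jammed = simulate_node(1000, attack=lambda r: 3 if r % 10 == 0 else 0)
overhead = (jammed - baseline) / baseline   # relative extra power drawn
```

    Comparing the two runs gives the developer the attack's energy overhead, here about 29% extra drain from periodic jamming, which is the kind of figure the attack-aware methodology feeds back into the design.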

  8. Sensitivity of broad-band ground-motion simulations to earthquake source and Earth structure variations: an application to the Messina Straits (Italy)

    KAUST Repository

    Imperatori, W.; Mai, Paul Martin

    2012-01-01

    We find that ground-motion variability associated with differences in crustal models is constant and becomes important at intermediate and long periods. On the other hand, source-induced ground-motion variability is negligible at long periods and strong at intermediate-to-short periods. Using our source-modelling approach and the three different 1-D structural models, we investigate shaking levels for the 1908 Mw 7.1 Messina earthquake, adopting a recently proposed model for fault geometry and final slip. Our simulations suggest that peak levels in Messina and Reggio Calabria must have reached 0.6-0.7 g during this earthquake.

  9. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr. It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

  10. Brian: a simulator for spiking neural networks in python.

    Science.gov (United States)

    Goodman, Dan; Brette, Romain

    2008-01-01

    "Brian" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

  11. ESIM_DSN Web-Enabled Distributed Simulation Network

    Science.gov (United States)

    Bedrossian, Nazareth; Novotny, John

    2002-01-01

    In this paper, the eSim^DSN approach to achieving distributed simulation capability using the Internet is presented. With this approach a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke type network topology. It provides the ability to dynamically link simulation subsystem models to different computers, as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim^DSN demonstrator can be accessed at http://www.jsc.draper.com/esim which hosts various examples of Web-enabled simulations.

  12. Stochastic sensitivity analysis and Langevin simulation for neural network learning

    International Nuclear Information System (INIS)

    Koda, Masato

    1997-01-01

    A comprehensive theoretical framework is proposed for the learning of a class of gradient-type neural networks with an additive Gaussian white noise process. The study is based on stochastic sensitivity analysis techniques, and formal expressions are obtained for stochastic learning laws in terms of functional derivative sensitivity coefficients. The present method, based on Langevin simulation techniques, uses only the internal states of the network and ubiquitous noise to compute the learning information inherent in the stochastic correlation between noise signals and the performance functional. In particular, the method does not require the solution of adjoint equations of the back-propagation type. Thus, the present algorithm has the potential for efficiently learning network weights with significantly fewer computations. Application to an unfolded multi-layered network is described, and the results are compared with those obtained by using a back-propagation method
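
    The central idea, learning weights from the correlation between injected noise and the performance functional, with no adjoint (back-propagation) equations, can be sketched with a generic weight-perturbation rule. This is a simplified stand-in for the paper's functional-derivative formulation; the quadratic "performance functional" and all parameters are invented, and a central difference is used to keep the gradient estimate's variance low:

```python
import random

def perturbation_learn(loss, w, sigma=0.1, lr=0.05, steps=3000, seed=7):
    """Stochastic learning without adjoint equations: perturb the weights
    with Gaussian noise and use the correlation between the injected noise
    and the change in the performance functional as a gradient estimate."""
    rng = random.Random(seed)
    for _ in range(steps):
        noise = [rng.gauss(0.0, sigma) for _ in w]
        plus = loss([wi + ni for wi, ni in zip(w, noise)])
        minus = loss([wi - ni for wi, ni in zip(w, noise)])
        g = (plus - minus) / (2.0 * sigma ** 2)   # ~ grad(L) . noise / sigma^2
        w = [wi - lr * g * ni for wi, ni in zip(w, noise)]
    return w

# Toy performance functional with minimum at (1, -2)
loss = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
w = perturbation_learn(loss, [0.0, 0.0])
```

    Only forward evaluations of the functional are needed, which is the practical appeal the abstract points to: no back-propagation-type adjoint solve per update.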

  13. Hybrid broadband Ground Motion simulation based on a dynamic rupture model of the 2011 Mw 9.0 Tohoku earthquake.

    Science.gov (United States)

    Galvez, P.; Somerville, P.; Bayless, J.; Dalguer, L. A.

    2015-12-01

    The rupture process of the 2011 Tohoku earthquake exhibits depth-dependent variations in the frequency content of seismic radiation from the plate interface. This depth-varying rupture property has also been observed in other subduction zones (Lay et al, 2012). During the Tohoku earthquake, the shallow region radiated coherent low frequency seismic waves whereas the deeper region radiated high frequency waves. Several kinematic inversions (Suzuki et al, 2011; Lee et al, 2011; Bletery et al, 2014; Minson et al, 2014) detected seismic waves below 0.1 Hz coming from the shallow depths that produced slip larger than 40-50 meters close to the trench. Using empirical Green's functions, Asano & Iwata (2012), Kurahashi and Irikura (2011) and others detected regions of strong ground motion radiation at frequencies up to 10 Hz located mainly at the bottom of the plate interface. A recent dynamic model that embodies this depth-dependent radiation using physical models has been developed by Galvez et al (2014, 2015). In this model the rupture process is modeled using a linear weakening friction law with slip reactivation on the shallow region of the plate interface (Galvez et al, 2015). This model reproduces the multiple seismic wave fronts recorded on the KiK-net seismic network along the Japanese coast up to 0.1 Hz as well as the GPS displacements. In the deep region, the rupture sequence is consistent with the sequence of the strong ground motion generation areas (SMGAs) that radiate high frequency ground motion at the bottom of the plate interface (Kurahashi and Irikura, 2013). It remains challenging to compute ground motions fully coupled with a dynamic rupture up to 10 Hz for a megathrust event. Therefore, to generate high frequency ground motions, we make use of the stochastic approach of Graves and Pitarka (2010) but add to the source spectrum the slip rate function of the dynamic model. In this hybrid-dynamic approach, the slip rate function is windowed with Gaussian

  14. Crowd-Sourced Global Earthquake Early Warning

    Science.gov (United States)

    Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.

    2014-12-01

    Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
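
    The key noise-reduction step the abstract describes, combining raw GNSS displacements with low-cost acceleration data in a Kalman filter, can be sketched with a 1-D position/velocity filter. The ground-motion profile, noise levels, and tuning parameters below are invented for illustration, not the study's values:

```python
import random

def kalman_fuse(gps, accel, dt=1.0, r_gps=1.0, q=0.05):
    """1-D Kalman filter: the accelerometer drives the prediction step,
    noisy GPS displacement corrects it. State is [position, velocity]."""
    x = v = 0.0
    p11, p12, p22 = 1.0, 0.0, 1.0          # state covariance
    est = []
    for z, a in zip(gps, accel):
        # predict, using the accelerometer as a control input
        x += v * dt + 0.5 * a * dt * dt
        v += a * dt
        p11 += dt * (2.0 * p12 + dt * p22) + q
        p12 += dt * p22
        p22 += q
        # correct with the GPS position measurement
        s = p11 + r_gps
        k1, k2 = p11 / s, p12 / s
        y = z - x
        x += k1 * y
        v += k2 * y
        p11, p12, p22 = (1 - k1) * p11, (1 - k1) * p12, p22 - k2 * p12
        est.append(x)
    return est

# Synthetic ground motion: accelerate then decelerate, ending displaced
random.seed(5)
n, dt = 200, 1.0
true_a = [0.02 if k < n // 2 else -0.02 for k in range(n)]
true_x, pos, vel = [], 0.0, 0.0
for a in true_a:
    pos += vel * dt + 0.5 * a * dt * dt
    vel += a * dt
    true_x.append(pos)
gps = [p + random.gauss(0.0, 1.0) for p in true_x]      # noisy C/A-code GPS
acc = [a + random.gauss(0.0, 0.005) for a in true_a]    # noisy MEMS accel
est = kalman_fuse(gps, acc, dt)
```

    The fused estimate tracks the true displacement far more closely than the raw metre-level GPS noise, which is the mechanism that lowers the detection threshold from ~1 m toward centimetres.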

  15. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    Science.gov (United States)

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency, and their accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost of the ab initio calculations needed for the correction determines the overall efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of 1 or 2 orders of magnitude. It demonstrates that the neural network method combined with the semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
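
    The correction scheme amounts to delta-learning: fit a model to the difference between high-level and cheap energies, then add the predicted difference back onto the cheap profile. A minimal sketch in which a least-squares line stands in for the Behler-Parrinello-style neural network, with invented energy profiles along a hypothetical reaction coordinate:

```python
def fit_line(xs, ys):
    """Least-squares fit y = a*x + b: a stand-in for the neural network
    that learns the (ab initio minus semiempirical) energy difference."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical energies along a reaction coordinate (arbitrary units)
coords = [i / 20.0 for i in range(21)]
e_semi = [5.0 * (c - 0.5) ** 2 for c in coords]                  # cheap level
e_ref = [5.0 * (c - 0.5) ** 2 + 0.8 * c + 0.3 for c in coords]   # target level
a, b = fit_line(coords, [t - s for t, s in zip(e_ref, e_semi)])
e_corr = [e + a * c + b for e, c in zip(e_semi, coords)]         # corrected profile
```

    Because the model only needs to capture the (smoother) energy difference rather than the full potential, far fewer high-level evaluations are needed, which is the source of the quoted 1-2 order-of-magnitude speed-up.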

  16. Along-strike Variations in the Himalayas Illuminated by the Aftershock Sequence of the 2015 Mw 7.8 Gorkha Earthquake Using the NAMASTE Local Seismic Network

    Science.gov (United States)

    Mendoza, M.; Ghosh, A.; Karplus, M. S.; Nabelek, J.; Sapkota, S. N.; Adhikari, L. B.; Klemperer, S. L.; Velasco, A. A.

    2016-12-01

    As a result of the 2015 Mw 7.8 Gorkha earthquake, more than 8,000 people were killed by a combination of infrastructure failure and triggered landslides. This earthquake produced 4 m of peak co-seismic slip as the fault ruptured 130 km east under densely populated cities such as Kathmandu. To understand earthquake dynamics in this part of the Himalayas and help mitigate similar calamities from the next destructive event, it is imperative to study earthquake activity in detail and improve our understanding of the source and structural complexities. In response to the Gorkha event, multiple institutions developed and deployed a 10-month dense seismic network called NAMASTE. It blanketed a 27,650 km² area, mainly covering the rupture area of the Gorkha earthquake, in order to capture the dynamic sequence of aftershock behavior. The network consisted of a mix of 45 broadband, short-period, and strong-motion sensors, with an average spacing of 20 km. From the first 6 months of data, starting approximately 1.5 months after the mainshock, we develop a robust catalog containing over 3,000 precise earthquake locations and local magnitudes that range between 0.3 and 4.9. The catalog has a magnitude of completeness of 1.5 and an overall low b-value of 0.78. Using the HypoDD algorithm, we relocate earthquake hypocenters with high precision and thus illustrate the fault geometry down to depths of 25 km, where we infer the location of the gently dipping Main Frontal Thrust (MFT). Above the MFT, the aftershocks illuminate complex structure produced by relatively steeply dipping faults. Interestingly, we observe a sharp along-strike change in the seismicity pattern: the eastern part of the aftershock area is significantly more active than the western part. The change in seismicity may reflect structural and/or frictional lateral heterogeneity in this part of the Himalayan fault system. Such along-strike variations play an important role in rupture complexities and

  17. The MeSO-net (Metropolitan Seismic Observation network) confronts the Pacific Coast of Tohoku Earthquake, Japan (Mw 9.0)

    Science.gov (United States)

    Kasahara, K.; Nakagawa, S.; Sakai, S.; Nanjo, K.; Panayotopoulos, Y.; Morita, Y.; Tsuruoka, H.; Kurashimo, E.; Obara, K.; Hirata, N.; Aketagawa, T.; Kimura, H.

    2011-12-01

    In April 2007, we launched the special project for earthquake disaster mitigation in the Tokyo Metropolitan area (Fiscal 2007-2011). As a part of this project, construction of the MeSO-net (Metropolitan Seismic Observation network) has been completed, with about 300 stations deployed mainly at elementary and junior-high schools at a spacing of about 5 km, resulting in a highly dense network that covers the metropolitan area. To achieve stable seismic observation with lower surface ground noise, relative to a measurement on the surface, the sensors of all stations were installed in boreholes at a depth of about 20 m. The sensors have a wide dynamic range (135 dB) and a wide frequency band (DC to 80 Hz). Data are digitized at 200 Hz sampling and telemetered to the Earthquake Research Institute, University of Tokyo. The MeSO-net, which can detect and locate most earthquakes with magnitudes above 2.5, provides a unique baseline for scientific and engineering research on the Tokyo metropolitan area, as follows. One of the main contributions is to greatly improve the image of the Philippine Sea plate (PSP) (Nakagawa et al., 2010) and provide an accurate estimation of the plate boundaries between the PSP and the Pacific plate, allowing a clearer understanding of the relation between PSP deformation and M7+ intra-slab earthquake generation. Also, the latest version of the plate model in the metropolitan area, proposed by our project, has attracted various researchers, who compare it with highly accurate fault-mechanism solutions, repeating earthquakes, etc. Moreover, long-period ground motions generated by the 2011 off the Pacific coast of Tohoku earthquake (Mw 9.0) were observed by the MeSO-net and analyzed to obtain Array Back-Projection Imaging of this event (Honda et al., 2011). As a result, the overall pattern of the imaged asperities coincides well with the slip distribution determined based on other waveform inversion

  18. SIMULATION OF WIRELESS SENSOR NETWORK WITH HYBRID TOPOLOGY

    Directory of Open Access Journals (Sweden)

    J. Jaslin Deva Gifty

    2016-03-01

    Full Text Available The IEEE 802.15.4 standard for low-rate Wireless Personal Area Networks (WPAN) has been developed to support lower data rates and low-power applications. Zigbee Wireless Sensor Networks (WSN) operate at the network and application layers on top of IEEE 802.15.4. A Zigbee network can be configured in star, tree or mesh topology, and performance varies from topology to topology: parameters such as network lifetime, energy consumption, throughput, delay in data delivery and sensor-field coverage area depend on the network topology. In this paper, hybrid topologies built from two possible combinations, star-tree and star-mesh, are simulated to verify communication reliability. This approach combines the benefits of both network models. Jitter, delay and throughput are measured for these scenarios. Further, the impact of MAC parameters such as beacon order (BO) and superframe order (SO) on low power consumption and high channel utilization has been analysed for star, tree and mesh topologies in beacon-disabled and beacon-enabled modes under varying CBR traffic loads.
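The beacon order (BO) and superframe order (SO) mentioned above control the duty cycle in IEEE 802.15.4 beacon-enabled mode: the beacon interval spans aBaseSuperframeDuration × 2^BO symbols, the active superframe spans 2^SO of the same unit, and 0 ≤ SO ≤ BO ≤ 14. A small sketch for the 2.4 GHz O-QPSK PHY (16 µs per symbol):

```python
BASE_SUPERFRAME_SYMBOLS = 960  # aBaseSlotDuration (60) * aNumSuperframeSlots (16)
SYMBOL_US = 16                 # 2.4 GHz O-QPSK PHY: 62.5 ksymbol/s

def superframe_timing(bo, so):
    """Beacon interval (BI) and superframe duration (SD) in ms, plus the
    duty cycle, for IEEE 802.15.4 beacon-enabled mode (0 <= SO <= BO <= 14)."""
    assert 0 <= so <= bo <= 14
    bi_ms = BASE_SUPERFRAME_SYMBOLS * (2 ** bo) * SYMBOL_US / 1000.0
    sd_ms = BASE_SUPERFRAME_SYMBOLS * (2 ** so) * SYMBOL_US / 1000.0
    return bi_ms, sd_ms, sd_ms / bi_ms
```

Raising BO relative to SO lengthens the inactive period, which is exactly the low-power-versus-channel-utilization trade-off the paper analyses.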

  19. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used for the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing in terms of amplitude and frequency. An artificial neural network model has been suggested to capture the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model represents the linear stiffness and damping effects, and the artificial neural network algorithm is adopted to take into account the hysteretic responses. A rubber test was performed to capture the bushing characteristics, applying sine excitations with different frequencies and amplitudes. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model is more robust than a simple neural network model under step excitation input. A full-car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to the linear model under several maneuvers
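The hybrid idea — a linear spring/damper carrying the bulk of the response, with a neural network trained only on the hysteretic remainder — can be sketched as follows. The least-squares split below stands in for the paper's identification procedure, and the synthetic data in the usage are illustrative:

```python
import numpy as np

def split_linear_residual(disp, vel, force):
    """Hybrid bushing decomposition: fit the linear stiffness/damping part
    F_lin = k*x + c*v by least squares; what remains is the hysteretic
    residual that a neural network would be trained to reproduce."""
    A = np.column_stack([disp, vel])
    (k, c), *_ = np.linalg.lstsq(A, force, rcond=None)
    residual = force - (k * disp + c * vel)
    return k, c, residual
```

Because the linear part already bounds the response, a divergence of the learned residual term perturbs rather than dominates the total force, which is the robustness benefit claimed for the hybrid model.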

  20. The continuous automatic monitoring network installed in Tuscany (Italy) since late 2002, to study earthquake precursory phenomena

    Science.gov (United States)

    Pierotti, Lisa; Cioni, Roberto

    2010-05-01

    Since late 2002, a continuous automatic monitoring network (CAMN) has been designed, built and installed in Tuscany (Italy) in order to investigate and define the geochemical response of the aquifers to the local seismic activity. The purpose of the investigation is to identify possible earthquake precursors. The CAMN consists of two groups of five measurement stations each. The first group has been installed in the Serchio and Magra grabens (Garfagnana and Lunigiana Valleys, Northern Tuscany), and the second in the area of Mt. Amiata (Southern Tuscany), an extinct volcano. The Garfagnana, Lunigiana and Mt. Amiata regions belong to the inner zone of the Northern Apennine fold-and-thrust belt. This zone has been involved in post-collision extensional tectonics since the Upper Miocene-Pliocene. Such tectonic activity has produced horst and graben structures oriented from N-S to NW-SE that are offset by NE-SW transfer systems. Both Garfagnana (Serchio graben) and Lunigiana (Magra graben) belong to the innermost sector of the belt, where the seismic sources responsible for the strongest earthquakes of the northern Apennines are located (e.g. the M=6.5 earthquake of September 1920). The extensional processes in southern Tuscany have been accompanied by magmatic activity since the Upper Miocene, producing the effusive and intrusive products traditionally attributed to the so-called Tuscan Magmatic Province. Mt. Amiata, whose magmatic activity ceased about 0.3 M.y. ago, belongs to the extensional Tyrrhenian sector, which is characterized by high heat flow and crustal thinning. The whole zone is characterized by widespread but moderate seismicity (the maximum recorded magnitude has been 5.1, with epicentre in Piancastagnaio, 1919). The extensional regime in both the Garfagnana-Lunigiana and Mt. Amiata areas is confirmed by the focal mechanisms of recent earthquakes.
An essential phase of the monitoring activities has been the selection of suitable sites for the installation of

  1. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichirou; Dershowitz, W.

    2005-01-01

    During Heisei-16, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the Mizunami Underground Research Laboratory (MIU), participation in Task 6 of the AEspoe Task Force on Modeling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU support during H-16 involved updating the H-15 FracMan discrete fracture network (DFN) models for the MIU shaft region and developing improved simulation procedures. Updates to the conceptual model included incorporation of 'Step2' (2004) versions of the deterministic structures, and revision of background fractures to be consistent with conductive structure data from the DH-2 borehole. Golder developed improved simulation procedures for these models through the use of hybrid discrete fracture network (DFN), equivalent porous medium (EPM), and nested DFN/EPM approaches. For each of these models, procedures were documented for the entire modeling process, including model implementation, MMP simulation, and shaft grouting simulation. Golder supported JNC participation in Tasks 6AB, 6D and 6E of the AEspoe Task Force on Modeling of Groundwater Flow and Transport during H-16. For Task 6AB, Golder developed a new technique to evaluate the role of grout in performance assessment time-scale transport. For Task 6D, Golder submitted a report of H-15 simulations to SKB. For Task 6E, Golder carried out safety assessment time-scale simulations at the block scale, using the Laplace Transform Galerkin method. During H-16, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of the use of site characterization data in safety assessment. This approach will aid in understanding how site characterization can progressively reduce site characterization uncertainty. (author)

  2. Numerical simulations (2D) on the influence of pre-existing local structures and seismic source characteristics in earthquake-volcano interactions

    Science.gov (United States)

    Farías, Cristian; Galván, Boris; Miller, Stephen A.

    2017-09-01

    Earthquake triggering of hydrothermal and volcanic systems is ubiquitous, but the underlying processes driving these systems are not well-understood. We numerically investigate the influence of seismic wave interaction with volcanic systems simulated as a trapped, high-pressure fluid reservoir connected to a fluid-filled fault system in a 2-D poroelastic medium. Different orientations and earthquake magnitudes are studied to quantify dynamic and static stress, and pore pressure changes induced by a seismic event. Results show that although the response of the system is mainly dominated by characteristics of the radiated seismic waves, local structures can also play an important role on the system dynamics. The fluid reservoir affects the seismic wave front, distorts the static overpressure pattern induced by the earthquake, and concentrates the kinetic energy of the incoming wave on its boundaries. The static volumetric stress pattern inside the fault system is also affected by the local structures. Our results show that local faults play an important role in earthquake-volcanic systems dynamics by concentrating kinetic energy inside and acting as wave-guides that have a breakwater-like behavior. This generates sudden changes in pore pressure, volumetric expansion, and stress gradients. Local structures also influence the regional Coulomb yield function. Our results show that local structures affect the dynamics of volcanic and hydrothermal systems, and should be taken into account when investigating triggering of these systems from nearby or distant earthquakes.

  3. Determination of Design Basis Earthquake ground motion

    International Nuclear Information System (INIS)

    Kato, Muneaki

    1997-01-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, earthquake response spectra and simulated seismic waves. In the appendix of this paper, furthermore, the seismic safety review for NPPs designed before publication of the Examination Guide is summarized, together with the Check Basis Earthquake. (J.P.N.)

  4. Determination of Design Basis Earthquake ground motion

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Muneaki [Japan Atomic Power Co., Tokyo (Japan)

    1997-03-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, earthquake response spectra and simulated seismic waves. In the appendix of this paper, furthermore, the seismic safety review for NPPs designed before publication of the Examination Guide is summarized, together with the Check Basis Earthquake. (J.P.N.)

  5. SIMULATION OF NEGATIVE PRESSURE WAVE PROPAGATION IN WATER PIPE NETWORK

    Directory of Open Access Journals (Sweden)

    Tang Van Lam

    2017-11-01

    Full Text Available Subject: factors such as pipe wall roughness, the mechanical properties of pipe materials and the physical properties of water affect the pressure surge in water supply pipes. These factors make it difficult to analyze the transient problem of pressure evolution using a simple programming language, especially in studies that consider only the magnitude of the positive pressure surge while neglecting the negative pressure phase. Research objectives: determine the magnitude of the negative pressure in the pipes on an experimental model. The propagation distance of the negative pressure wave is simulated for several valve closure scenarios with the help of the HAMMER software and compared with the experimental model to verify the quality of the results. Materials and methods: the academic version of the Bentley HAMMER software is used to simulate pressure surge wave propagation due to valve closure in a water supply pipe network. The method of characteristics is used to solve the governing equations of the transient process of pressure change in the pipeline; this method is implemented in the HAMMER software to calculate the pressure surge in the pipes. Results: the method has been applied to the water pipe network of the experimental model; the results show the area affected by the negative pressure wave from valve closure, and thereby we assess the largest negative pressure that may appear in water supply pipes. Conclusions: the experiment simulates a water pipe network with a consumption node for various valve closure scenarios to determine the possible maximum negative pressure value in the pipes. Determination of these values in a real-life network is relatively costly and time-consuming, but nevertheless necessary for identification of the risk of pipe failure; therefore, this paper proposes using the simulation model in the HAMMER software. Initial calibration of the model combined with the software simulation results and
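The method of characteristics (MOC) used by HAMMER for transient pressure can be sketched on a single frictionless pipe with an upstream reservoir and a downstream valve closed instantly at t=0. The geometry, wave speed, and boundary handling below are illustrative simplifications; real networks add friction terms and junction boundary conditions:

```python
import math

def water_hammer_moc(a=1000.0, L=1000.0, N=10, D=0.1, v0=1.0, h_res=50.0, steps=40):
    """MOC water-hammer sketch: returns the peak head rise (m) at the valve.
    Each loop iteration advances time by dx/a seconds (friction neglected)."""
    g = 9.81
    A = math.pi * D**2 / 4.0
    B = a / (g * A)                   # characteristic impedance of the pipe
    H = [h_res] * (N + 1)             # initial steady head
    Q = [v0 * A] * (N + 1)            # initial steady flow
    max_valve_head = h_res
    for _ in range(steps):
        Hn, Qn = H[:], Q[:]
        # interior nodes: intersect the C+ and C- characteristics
        for i in range(1, N):
            cp = H[i - 1] + B * Q[i - 1]
            cm = H[i + 1] - B * Q[i + 1]
            Hn[i] = 0.5 * (cp + cm)
            Qn[i] = (cp - cm) / (2.0 * B)
        # upstream reservoir: head fixed, flow from the C- characteristic
        cm = H[1] - B * Q[1]
        Hn[0] = h_res
        Qn[0] = (h_res - cm) / B
        # downstream valve closed: flow zero, head from the C+ characteristic
        cp = H[N - 1] + B * Q[N - 1]
        Qn[N] = 0.0
        Hn[N] = cp
        H, Q = Hn, Qn
        max_valve_head = max(max_valve_head, H[N])
    return max_valve_head - h_res
```

For instant closure without friction this reproduces the Joukowsky head rise a·v0/g; after reflection from the reservoir the valve head swings negative, which is the negative-pressure phase the paper focuses on.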

  6. OPNET simulation Signaling System No.7 (SS7) network interfaces

    OpenAIRE

    Ow, Kong Chung.

    2000-01-01

    This thesis presents an OPNET model and simulation of the Signaling System No.7 (SS7) network, which is dubbed the world's largest data communications network. The main focus of the study is to model one of its levels, the Message Transfer Part Level 3, in accordance with the ITU-T recommendation Q.704. An overview of SS7 that includes the evolution and basics of the SS7 architecture is provided to familiarize the reader with the topic. This includes the protocol stack, signaling points, signaling...

  7. Simulating market dynamics: interactions between consumer psychology and social networks.

    Science.gov (United States)

    Janssen, Marco A; Jager, Wander

    2003-01-01

    Markets can show different types of dynamics, from quiet markets dominated by one or a few products, to markets with continual penetration of new and reintroduced products. In a previous article we explored the dynamics of markets from a psychological perspective using a multi-agent simulation model. The main results indicated that the behavioral rules dominating the artificial consumer's decision making determine the resulting market dynamics, such as fashions, lock-in, and unstable renewal. Results also show the importance of psychological variables like social networks, preferences, and the need for identity to explain the dynamics of markets. In this article we extend this work in two directions. First, we focus on a more systematic investigation of the effects of different network structures. The previous article was based on Watts and Strogatz's approach, which describes the small-world and clustering characteristics of networks. More recent research has demonstrated that many large networks display a scale-free power-law distribution for node connectivity. In terms of market dynamics this may imply that a small proportion of consumers have an exceptional influence on the consumptive behavior of others (hubs, or early adopters). We show that market dynamics is a self-organized property depending on the interaction between the agents' decision-making process (heuristics), the product characteristics (degree of satisfaction per unit of consumption, visibility), and the structure of interactions between agents (size of network and hubs in a social network).
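A scale-free network of the kind discussed here can be grown by preferential attachment (the Barabási–Albert mechanism): each new node links to existing nodes with probability proportional to their degree, so a few early nodes become the hubs. A stdlib-only sketch (the graph sizes are illustrative):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a scale-free graph: each new node attaches to m existing nodes
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # start from a small complete core of m+1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # listing every edge endpoint once makes degree-proportional sampling
    # a uniform choice from this list
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for v in chosen:
            edges.append((new, v))
            targets.extend([new, v])
    return edges
```

Running this for a few thousand nodes yields a degree distribution with a handful of highly connected hubs, the structural feature the article uses to model influential consumers.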

  8. Seismicity in the block mountains between Halle and Leipzig, Central Germany: centroid moment tensors, ground motion simulation, and felt intensities of two M ≈ 3 earthquakes in 2015 and 2017

    Science.gov (United States)

    Dahm, Torsten; Heimann, Sebastian; Funke, Sigward; Wendt, Siegfried; Rappsilber, Ivo; Bindi, Dino; Plenefisch, Thomas; Cotton, Fabrice

    2018-05-01

    On April 29, 2017 at 0:56 UTC (2:56 local time), an Mw = 2.8 earthquake struck the metropolitan area between Leipzig and Halle, Germany, near the small town of Markranstädt. The earthquake was felt within 50 km of the epicenter and reached a local intensity of I0 = IV. Already in 2015, and only 15 km northwest of the epicenter, an Mw = 3.2 earthquake had struck the area with a similarly large felt radius and I0 = IV. More than 1.1 million people live in the region, and the unusual occurrence of the two earthquakes drew public attention, because the tectonic activity is unclear and induced earthquakes have occurred in neighboring regions. Historical earthquakes south of Leipzig had estimated magnitudes up to Mw ≈ 5 and coincide with NW-SE striking crustal basement faults. We use different seismological methods to analyze the two recent earthquakes and discuss them in the context of the known tectonic structures and historical seismicity. Novel stochastic full-waveform simulation and inversion approaches are adapted for application to weak, local earthquakes, to analyze mechanisms and ground motions and their relation to observed intensities. We find NW-SE striking normal faulting mechanisms for both earthquakes and centroid depths of 26 and 29 km. The earthquakes are located where faults with large vertical offsets of several hundred meters and Hercynian strike have developed since the Mesozoic. We use a stochastic full-waveform simulation to explain the local peak ground velocities and calibrate the method to simulate intensities. Since the area is densely populated and has sensitive infrastructure, we simulate scenarios assuming that a 12-km-long fault segment between the two recent earthquakes ruptures, and study the impact of rupture parameters on ground motions and expected damage.

  9. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure and the opportunity to take advantage of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under the current economic conditions. This paper aimed to develop a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A Simulated Annealing algorithm is proposed to solve the proposed model. The numerical solution showed that the model and the proposed solution method were practical.
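A simulated annealing approach of this kind can be sketched on a toy warehouse-selection problem: flip one warehouse open or closed per move, always accept improvements, and accept worse configurations with a probability that decays as the temperature cools. The costs, cooling schedule, and move set below are illustrative, not the paper's:

```python
import math
import random

def total_cost(open_mask, fixed, assign):
    """Fixed cost of open warehouses plus each customer served by its
    cheapest open warehouse; assign[j][i] is customer j's cost from warehouse i."""
    open_idx = [i for i in range(len(fixed)) if open_mask >> i & 1]
    if not open_idx:
        return float("inf")
    c = sum(fixed[i] for i in open_idx)
    for row in assign:
        c += min(row[i] for i in open_idx)
    return c

def anneal(fixed, assign, steps=5000, t0=50.0, alpha=0.999, seed=1):
    """Simulated annealing over warehouse open/close decisions."""
    rng = random.Random(seed)
    n = len(fixed)
    mask = (1 << n) - 1                        # start with every warehouse open
    cur = total_cost(mask, fixed, assign)
    best_mask, best = mask, cur
    t = t0
    for _ in range(steps):
        cand = mask ^ (1 << rng.randrange(n))  # open or close one warehouse
        c = total_cost(cand, fixed, assign)
        # Metropolis acceptance: always take improvements, sometimes take worse
        if c < cur or rng.random() < math.exp((cur - c) / t):
            mask, cur = cand, c
            if cur < best:
                best_mask, best = mask, cur
        t *= alpha                             # geometric cooling
    return best_mask, best
```

On small instances the result can be checked against brute-force enumeration of all warehouse subsets; for realistic instance sizes only the annealing search remains tractable.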

  10. Image reconstruction using Monte Carlo simulation and artificial neural networks

    International Nuclear Information System (INIS)

    Emert, F.; Missimner, J.; Blass, W.; Rodriguez, A.

    1997-01-01

    PET data sets are subject to two types of distortions during acquisition: the imperfect response of the scanner and attenuation and scattering in the active distribution. In addition, the reconstruction of voxel images from the line projections composing a data set can introduce artifacts. Monte Carlo simulation provides a means for modeling the distortions and artificial neural networks a method for correcting for them as well as minimizing artifacts. (author) figs., tab., refs

  11. Analyzing, Modeling, and Simulation for Human Dynamics in Social Network

    Directory of Open Access Journals (Sweden)

    Yunpeng Xiao

    2012-01-01

    Full Text Available This paper studies human behavior in the top social network system in China (the Sina Microblog system). By analyzing real-life data at a large scale, we find that the message releasing interval (intermessage time) obeys a power-law distribution both at the individual level and at the group level. Statistical analysis also reveals that human behavior in the social network is mainly driven by four basic elements: social pressure, social identity, social participation, and social relations between individuals. Empirical results present the four elements' impact on human behavior and the relations between these elements. To further understand the mechanism of such dynamic phenomena, a hybrid human dynamic model which combines the "interest" of the individual and the "interaction" among people is introduced, incorporating the four elements simultaneously. To provide a solid evaluation, we simulate both two-agent and multiagent interactions with real-life social network topology. We achieve consistent results between the empirical studies and the simulations. The model can provide a good understanding of human dynamics in social networks.

  12. [Simulation of lung motions using an artificial neural network].

    Science.gov (United States)

    Laurent, R; Henriet, J; Salomon, M; Sauget, M; Nguyen, F; Gschwind, R; Makovicka, L

    2011-04-01

    A way to improve the accuracy of lung radiotherapy for a patient is to gain a better understanding of the patient's lung motion. Indeed, with this knowledge it becomes possible to follow the displacements of the clinical target volume (CTV) induced by breathing. This paper presents a feasibility study of an original method to simulate the positions of points in a patient's lung at all breathing phases. This method, based on an artificial neural network, allows learning the lung motion from real cases and then simulating it for new patients for whom only the beginning and end breathing data are known. The neural network learning set is made up of more than 600 points. These points, distributed over three patients and gathered on a specific lung area, were plotted by an MD. The first results are promising: an average accuracy of 1 mm is obtained for a spatial resolution of 1 × 1 × 2.5 mm³. We have demonstrated that it is possible to simulate lung motion accurately using an artificial neural network. As future work we plan to improve the accuracy of our method with the addition of new patient data and coverage of the whole lungs. Copyright © 2010 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  13. Simulation of lung motions using an artificial neural network

    International Nuclear Information System (INIS)

    Laurent, R.; Henriet, J.; Sauget, M.; Gschwind, R.; Makovicka, L.; Salomon, M.; Nguyen, F.

    2011-01-01

    Purpose. A way to improve the accuracy of lung radiotherapy for a patient is to gain a better understanding of the patient's lung motion. Indeed, with this knowledge it becomes possible to follow the displacements of the clinical target volume (CTV) induced by breathing. This paper presents a feasibility study of an original method to simulate the positions of points in a patient's lung at all breathing phases. Patients and methods. This method, based on an artificial neural network, allows learning the lung motion from real cases and then simulating it for new patients for whom only the beginning and end breathing data are known. The neural network learning set is made up of more than 600 points. These points, distributed over three patients and gathered on a specific lung area, were plotted by an MD. Results. The first results are promising: an average accuracy of 1 mm is obtained for a spatial resolution of 1 × 1 × 2.5 mm³. Conclusion. We have demonstrated that it is possible to simulate lung motion accurately using an artificial neural network. As future work we plan to improve the accuracy of our method with the addition of new patient data and coverage of the whole lungs. (authors)

  14. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Full Text Available Generally, geostatistical simulation methods are used to generate several realizations of physical properties in the sub-surface; these methods are based on variogram analysis and are limited to measuring the correlation between variables at two locations only. In this paper, we propose a simulation of properties based on supervised neural network training on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can be combined easily. To do this, we used a feed-forward neural network with a multi-layer perceptron architecture, and the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of physical properties on the other hand, by changing each time a random value of the input neurons that is kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).

  15. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements of petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using the pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  16. Comparative Analysis of Disruption Tolerant Network Routing Simulations in the One and NS-3

    Science.gov (United States)

    2017-12-01

    Naval Postgraduate School thesis, Monterey, California (research period March 23, 2016 to December 15, 2017). The added levels of simulation increase the processing required by a simulation; ns-3's simulation of other layers of the network stack permits...

  17. An artificial neural network for detection of simulated dental caries

    Energy Technology Data Exchange (ETDEWEB)

    Kositbowornchai, S. [Khon Kaen Univ. (Thailand). Dept. of Oral Diagnosis; Siriteptawee, S.; Plermkamon, S.; Bureerat, S. [Khon Kaen Univ. (Thailand). Dept. of Mechanical Engineering; Chetchotsak, D. [Khon Kaen Univ. (Thailand). Dept. of Industrial Engineering

    2006-08-15

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ), used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphical user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic tests. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make correct interpretations of dental caries. (orig.)
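    To make the Learning Vector Quantization (LVQ) classifier described in this record concrete, here is a minimal LVQ1 sketch. The two-feature toy data, learning rate, epoch count, and prototype placement are illustrative assumptions, not the study's actual images or parameters.

```python
def lvq1_train(samples, labels, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: nudge the nearest prototype toward same-class samples
    and away from different-class samples."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # index of the closest prototype (squared Euclidean distance)
            i = min(range(len(protos)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(x, protos[k])))
            sign = 1.0 if proto_labels[i] == y else -1.0
            protos[i] = [p + sign * lr * (a - p) for a, p in zip(x, protos[i])]
    return protos

def lvq_classify(x, protos, proto_labels):
    i = min(range(len(protos)),
            key=lambda k: sum((a - b) ** 2 for a, b in zip(x, protos[k])))
    return proto_labels[i]

# toy 2-feature data: class 0 near the origin ("sound"), class 1 near (1,1) ("caries")
samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.8)]
labels = [0, 0, 1, 1]
protos = lvq1_train(samples, labels, [(0.0, 0.0), (1.0, 1.0)], [0, 1])
print(lvq_classify((0.15, 0.15), protos, [0, 1]))  # → 0
```

    In the real study the feature vectors would come from the CCD and radiographic images rather than hand-picked points; the training loop itself is unchanged.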

  18. An artificial neural network for detection of simulated dental caries

    International Nuclear Information System (INIS)

    Kositbowornchai, S.; Siriteptawee, S.; Plermkamon, S.; Bureerat, S.; Chetchotsak, D.

    2006-01-01

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ), used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphical user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic tests. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make correct interpretations of dental caries. (orig.)

  19. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Full Text Available Abstract Background Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
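    The Gillespie algorithm that BioNetS uses for its discrete variables can be sketched as follows. This is the standard direct method, not BioNetS code; the toy A → B decay network, rate constant, and random seed are assumptions for illustration.

```python
import random

def gillespie(x0, stoich, rate_fns, t_max, seed=1):
    """Gillespie's direct method: exact stochastic simulation of a
    well-mixed reaction network given stoichiometry and propensities."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    trace = [(t, tuple(x))]
    while t < t_max:
        props = [f(x) for f in rate_fns]       # current propensities
        total = sum(props)
        if total <= 0.0:
            break                              # no reaction can fire
        t += rng.expovariate(total)            # exponential waiting time
        r = rng.random() * total               # choose which reaction fires
        acc, j = 0.0, 0
        for j, a in enumerate(props):
            acc += a
            if r < acc:
                break
        for k, dv in enumerate(stoich[j]):     # apply stoichiometric change
            x[k] += dv
        trace.append((t, tuple(x)))
    return trace

# toy network: A -> B with propensity 0.5 * [A]
trace = gillespie(x0=[100, 0],
                  stoich=[(-1, +1)],
                  rate_fns=[lambda x: 0.5 * x[0]],
                  t_max=50.0)
print(trace[-1])  # final (time, state); species A is essentially exhausted
```

    A hybrid simulator like BioNetS would switch species with large copy numbers from this discrete scheme to chemical Langevin equations.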

  20. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator (SNNS). The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  1. Real-Time-Simulation of IEEE-5-Bus Network on OPAL-RT-OP4510 Simulator

    Science.gov (United States)

    Atul Bhandakkar, Anjali; Mathew, Lini, Dr.

    2018-03-01

    Real-time simulator tools offer high computing power and improved performance, and are widely used for the design and improvement of electrical systems. With the advancement of software tools like MATLAB/SIMULINK, with its Real-Time Workshop (RTW) and Real-Time Windows Target (RTWT), real-time simulators are used extensively in many engineering fields, such as industry, education, and research institutions. OPAL-RT-OP4510 is a real-time simulator used in both industry and academia. In this paper, real-time simulation of the IEEE-5-Bus network is carried out by means of OPAL-RT-OP4510 with a CRO and other hardware. The performance of the network is observed with the introduction of faults at various locations. The waveforms of voltage, current, and active and reactive power are observed in the MATLAB simulation environment and on the CRO. In addition, Load Flow Analysis (LFA) of the IEEE-5-Bus network is computed using the MATLAB/Simulink powergui load flow tool.
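    The load flow analysis mentioned in this record is an iterative solution of the power-balance equations. As a self-contained illustration (a Gauss-Seidel iteration on a hypothetical 2-bus system, not the IEEE-5-Bus case or the powergui tool), one bus update looks like this:

```python
# Gauss-Seidel load flow for a toy 2-bus system: bus 1 is the slack bus,
# bus 2 is a PQ bus drawing S = P + jQ. All values in per unit (assumed data).
def gauss_seidel_2bus(p_load, q_load, y_line, tol=1e-10, max_iter=500):
    V1 = 1.0 + 0j                 # slack-bus voltage, fixed
    V2 = 1.0 + 0j                 # flat start for the load bus
    Y21, Y22 = -y_line, y_line    # admittance-matrix row of bus 2
    S2 = -(p_load + 1j * q_load)  # net injected power (a load is negative)
    for _ in range(max_iter):
        # standard Gauss-Seidel update: V_i = (S_i*/V_i* - sum Y_ij V_j) / Y_ii
        V2_new = (S2.conjugate() / V2.conjugate() - Y21 * V1) / Y22
        if abs(V2_new - V2) < tol:
            return V2_new
        V2 = V2_new
    return V2

# line impedance 0.02 + j0.08 p.u., load 0.5 + j0.2 p.u. (illustrative values)
V2 = gauss_seidel_2bus(p_load=0.5, q_load=0.2, y_line=1 / (0.02 + 0.08j))
print(abs(V2))  # load-bus voltage magnitude, somewhat below 1.0 p.u.
```

    A 5-bus solver repeats this update over all PQ/PV buses until every voltage converges; powergui wraps an equivalent (Newton-Raphson based) computation.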

  2. Hybrid Broadband Ground-Motion Simulation Using Scenario Earthquakes for the Istanbul Area

    KAUST Repository

    Reshi, Owais A.

    2016-01-01

    are critical for determining the behavior of ground motions, especially in the near-field. Comparison of simulated ground-motion intensities with ground-motion prediction equations indicates the need to develop region-specific ground-motion prediction

  3. Strong Motion Network of Medellín and Aburrá Valley: technical advances, seismicity records and micro-earthquake monitoring

    Science.gov (United States)

    Posada, G.; Trujillo, J. C., Sr.; Hoyos, C.; Monsalve, G.

    2017-12-01

    The tectonic setting of Colombia is determined by the interaction of the Nazca, Caribbean and South American plates, together with the Panama-Choco block collision, which makes it a seismically active region. Regional seismic monitoring is carried out by the National Seismological Network of Colombia and the Accelerometer National Network of Colombia. Both networks calculate locations, magnitudes, depths, accelerations, and other seismic parameters. The Medellín - Aburra Valley is located in the Northern segment of the Central Cordillera of Colombia and, according to the Colombian technical seismic norm (NSR-10), is a region of intermediate hazard because of the proximity of seismic sources to the Valley. Seismic monitoring in the Aburra Valley began in 1996 with an accelerometer network that consisted of 38 instruments. Currently, the network consists of 26 stations and is run by the Early Warning System of Medellin and Aburra Valley (SIATA). Technical advances have allowed real-time communication since a year ago, currently with 10 stations; post-earthquake data are processed in near-real time, obtaining quick results in terms of location, acceleration, response spectrum and Fourier analysis; this information is displayed at the SIATA web site. The strong-motion database is composed of 280 earthquakes; this information is the basis for the estimation of seismic hazard and risk for the region. A basic statistical analysis of the main information was carried out, including the total recorded events per station, natural frequency, maximum accelerations, depths and magnitudes, which allowed us to identify the main seismic sources and some seismic site parameters. With the aim of more complete seismic monitoring, and in order to identify seismic sources beneath the Valley, we are in the process of installing 10 low-cost shake seismometers for micro-earthquake monitoring. There is no historical record of earthquakes with a magnitude

  4. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  5. COEL: A Cloud-based Reaction Network Simulator

    Directory of Open Access Journals (Sweden)

    Peter eBanda

    2016-04-01

    Full Text Available Chemical Reaction Networks (CRNs) are a formalism to describe the macroscopic behavior of chemical systems. We introduce COEL, a web- and cloud-based CRN simulation framework that does not require a local installation, runs simulations on a large computational grid, provides reliable database storage, and offers a visually pleasing and intuitive user interface. We present an overview of the underlying software, the technologies, and the main architectural approaches employed. Some of COEL's key features include ODE-based simulations of CRNs and multicompartment reaction networks with rich interaction options, a built-in plotting engine, automatic DNA-strand displacement transformation and visualization, SBML/Octave/Matlab export, and a built-in genetic-algorithm-based optimization toolbox for rate constants. COEL is an open-source project hosted on GitHub (http://dx.doi.org/10.5281/zenodo.46544), which allows interested research groups to deploy it on their own server. Regular users can simply use the web instance at no cost at http://coel-sim.org. The framework is ideally suited for collaborative use in both research and education.
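    The "ODE-based simulations of CRNs" in this record amount to integrating mass-action rate equations. A minimal sketch, using explicit Euler integration and an assumed toy network A + B → C (none of this is COEL code):

```python
def simulate_crn(conc, reactions, t_end, dt=1e-3):
    """Integrate d[X]/dt from mass-action kinetics with explicit Euler.
    Each reaction is (reactant_indices, stoichiometry_change, rate_constant)."""
    c = list(conc)
    for _ in range(int(t_end / dt)):
        dc = [0.0] * len(c)
        for reactants, stoich, k in reactions:
            flux = k
            for i in reactants:
                flux *= c[i]               # mass-action: k * product of reactants
            for i, dv in enumerate(stoich):
                dc[i] += dv * flux
        c = [ci + dt * d for ci, d in zip(c, dc)]
    return c

# A + B -> C with k = 1.0, starting at [A] = [B] = 1.0, [C] = 0
final = simulate_crn([1.0, 1.0, 0.0],
                     [([0, 1], (-1.0, -1.0, +1.0), 1.0)],
                     t_end=10.0)
print(final)  # [A] and [B] nearly consumed, [C] approaching 1.0
```

    With equal initial concentrations this ODE has the closed form [A](t) = 1/(1+t), which makes the sketch easy to check; a production simulator would use a stiff ODE solver rather than fixed-step Euler.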

  6. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA). MIU Underground Rock Laboratory support during H-14 involved discrete fracture network (DFN) modelling in support of the Multiple Modelling Project (MMP) and the Long Term Pumping Test (LPT). Golder developed updated DFN models for the MIU site, reflecting updated analyses of fracture data. Golder also developed scripts to support JNC simulations of flow and transport pathways within the MMP. Golder supported JNC participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-14. Tasks 6A and 6B compared safety assessment (PA) and experimental time scale simulations along a pipe transport pathway. Task 6B2 extended the Task 6B simulations from 1-D to 2-D. For Task 6B2, Golder carried out single-fracture transport simulations on a wide variety of generic heterogeneous 2D fractures using both experimental and safety assessment boundary conditions. The heterogeneous 2D fractures were implemented according to a variety of in-plane heterogeneity patterns. Multiple immobile zones were considered, including stagnant zones, infillings, altered wall rock, and intact rock. During H-14, JNC carried out extensive studies of the disturbed rock zone (DRZ) surrounding repository tunnels and drifts. Golder supported this activity by evaluating the calculation time necessary for simulating a reference heterogeneous DRZ cell network for a range of computational strategies. To support the development of JNC's total system performance assessment (TSPA) strategy, Golder carried out a review of the US DOE Yucca Mountain Project TSPA. This

  7. Coarse-grained simulation of a real-time process control network under peak load

    International Nuclear Information System (INIS)

    George, A.D.; Clapp, N.E. Jr.

    1992-01-01

    This paper presents a simulation study on the real-time process control network proposed for the new ANS reactor system at ORNL. A background discussion is provided on networks, modeling, and simulation, followed by an overview of the ANS process control network, its three peak-load models, and the results of a series of coarse-grained simulation studies carried out on these models using implementations of 802.3, 802.4, and 802.5 standard local area networks

  8. Impacts of Social Network on Therapeutic Community Participation: A Follow-up Survey of Data Gathered after Ya'an Earthquake.

    Science.gov (United States)

    Li, Zhichao; Chen, Yao; Suo, Liming

    2015-01-01

    In recent years, natural disasters and the accompanying health risks have become more frequent, and rehabilitation work has become an important part of government performance. On one hand, social networks play an important role in participants' therapeutic community participation and physical and mental recovery. On the other hand, therapeutic communities with widespread participation can also contribute to community recovery after disaster. This paper describes a field study in an earthquake-stricken area of Ya'an. A set of 3-stage follow-up data was obtained concerning the villagers' participation in therapeutic communities, social network status, demographic background, and other factors. The Hierarchical Linear Model (HLM) method was used to investigate the determinants of social network on therapeutic community participation. First, social networks have significant impacts on the annual changes of therapeutic community participation. Second, there were obvious differences in education between the groups mobilized by self-organizations and by local government; however, both exerted their mobilizing force through acquaintance networks. Third, villagers' local cadre networks could negatively influence the activities of self-organized therapeutic communities, while positively influencing government-organized therapeutic activities. This paper suggests that relevant government departments need to focus more on the reconstruction and cultivation of villagers' social networks and social capital in the process of post-disaster recovery. These findings contribute to better understandings of how social networks influence therapeutic community participation, and what role local government can play in post-disaster recovery and public health improvement after natural disasters.

  9. Numerical tsunami simulations in the western Pacific Ocean and East China Sea from hypothetical M 9 earthquakes along the Nankai trough

    Science.gov (United States)

    Harada, Tomoya; Satake, Kenji; Furumura, Takashi

    2017-04-01

    We carried out tsunami numerical simulations in the western Pacific Ocean and East China Sea in order to examine the behavior of massive tsunamis outside Japan from the hypothetical M 9 tsunami source models along the Nankai Trough proposed by the Cabinet Office of the Japanese government (2012). The distributions of MTHs (maximum tsunami heights for 24 h after the earthquakes) on the east coast of China, the east coast of the Philippine Islands, and the north coast of New Guinea Island show peaks of approximately 1.0-1.7 m, 4.0-7.0 m, and 4.0-5.0 m, respectively. They are significantly higher than those from the 1707 Ho'ei earthquake (M 8.7), the largest earthquake along the Nankai trough in recent Japanese history. Moreover, the MTH distributions vary with the location of the huge slip(s) in the tsunami source models, although the three coasts are far from the Nankai trough. Huge slip(s) in the Nankai segment mainly contributes to the MTHs, while huge slip(s) or splay faulting in the Tokai segment hardly affects the MTHs. The tsunami source model was developed in response to the unexpected occurrence of the 2011 Tohoku earthquake, with 11 models along the Nankai trough, and the simulated MTHs along the Pacific coasts of western Japan from these models exceed 10 m, with a maximum height of 34.4 m. Tsunami propagation was computed by the finite-difference method of the non-linear long-wave equations with the Coriolis force and bottom friction (Satake, 1995) in the area of 115-155° E and 8° S-40° N. Because the water depth of the East China Sea is shallower than 200 m, the tsunami propagation is likely to be affected by ocean-bottom friction. The 30 arc-second gridded bathymetry data provided by the General Bathymetric Chart of the Oceans (GEBCO-2014) were used. To capture the long propagation of the tsunamis, we simulated 24 hours after the earthquakes. This study was supported by the "New disaster mitigation research project on Mega thrust earthquakes around Nankai
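    The finite-difference long-wave approach in this record can be illustrated in one dimension. The sketch below solves the linear shallow-water equations on a staggered grid with a CFL-limited time step; the uniform 200 m depth, 200 km domain, Gaussian initial hump, and reflective ends are all illustrative assumptions (the actual study is 2-D, non-linear, with real GEBCO bathymetry, friction, and the Coriolis force).

```python
import math

def propagate(depth_m=200.0, length_m=200e3, nx=400, t_end_s=600.0):
    """1-D linear long-wave propagation on a staggered finite-difference grid."""
    g = 9.81
    dx = length_m / nx
    c = math.sqrt(g * depth_m)            # long-wave phase speed sqrt(g h)
    dt = 0.5 * dx / c                     # satisfies the CFL stability condition
    # initial sea-surface hump (10 km wide Gaussian at the domain center)
    eta = [math.exp(-((i - nx // 2) * dx / 10e3) ** 2) for i in range(nx)]
    u = [0.0] * (nx + 1)                  # velocities live on staggered faces
    t = 0.0
    while t < t_end_s:
        for i in range(1, nx):            # momentum: du/dt = -g d(eta)/dx
            u[i] -= dt * g * (eta[i] - eta[i - 1]) / dx
        for i in range(nx):               # continuity: d(eta)/dt = -h du/dx
            eta[i] -= dt * depth_m * (u[i + 1] - u[i]) / dx
        t += dt
    return eta

eta = propagate()
print(max(eta))  # the initial hump has split into two outgoing waves of ~half height
```

    At 200 m depth the phase speed is about 44 m/s, so after 600 s the two wave fronts sit roughly 27 km on either side of the source, well inside the domain.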

  10. Dynamic rupture simulations of the 2016 Mw7.8 Kaikōura earthquake: a cascading multi-fault event

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Ampuero, J. P.; Xu, W.; Feng, G.

    2017-12-01

    The Mw7.8 Kaikōura earthquake struck the northern part of New Zealand's South Island roughly one year ago. It ruptured multiple segments of the contractional North Canterbury fault zone and of the Marlborough fault system. Field observations combined with satellite data suggest a rupture path involving partly unmapped faults separated by stepover distances larger than 5 km, the maximum distance usually considered by the latest seismic hazard assessment methods. This might imply distant rupture transfer mechanisms generally not considered in seismic hazard assessment. We present high-resolution 3D dynamic rupture simulations of the Kaikōura earthquake under physically self-consistent initial stress and strength conditions. Our simulations are based on recent finite-fault slip inversions that constrain the fault system geometry and final slip distribution from remote sensing, surface rupture and geodetic data (Xu et al., 2017). We assume a uniform background stress field, without lateral fault stress or strength heterogeneity. We use the open-source software SeisSol (www.seissol.org), which is based on an arbitrary high-order accurate DERivative Discontinuous Galerkin method (ADER-DG). Our method can account for complex fault geometries, high-resolution topography and bathymetry, 3D subsurface structure, off-fault plasticity and modern friction laws. It enables the simulation of seismic wave propagation with high-order accuracy in space and time in complex media. We show that a cascading rupture driven by dynamic triggering can break all fault segments involved in this earthquake without mechanically requiring an underlying thrust fault. Our preferred fault geometry connects most fault segments: it does not feature stepovers larger than 2 km. The best scenario matches the main macroscopic characteristics of the earthquake, including its apparently slow rupture propagation caused by zigzag cascading, the moment magnitude and the overall inferred slip

  11. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William

    2004-01-01

    During Heisei-15, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU Underground Rock Laboratory support during H-15 involved development of new discrete fracture network (DFN) models for the MIU Shoba-sama Site, in the region of shaft development. Golder developed three DFN models for the site using discrete fracture network, equivalent porous medium (EPM), and nested DFN/EPM approaches. Each of these models was compared based upon criteria established for the Multiple Modelling Project (MMP). Golder supported JNC participation in Tasks 6AB, 6D and 6E of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-15. For Task 6AB, Golder implemented an updated microstructural model in GoldSim, and used this updated model to simulate the propagation of uncertainty from experimental to safety assessment time scales, for 5 m scale transport path lengths. Tasks 6D and 6E compared safety assessment (PA) and experimental time scale simulations in a 200 m scale discrete fracture network. For Task 6D, Golder implemented a DFN model using FracMan/PA Works, and determined the sensitivity of solute transport to a range of material property and geometric assumptions. For Task 6E, Golder carried out demonstration FracMan/PA Works transport calculations at a 1 million year time scale, to ensure that task specifications are realistic. The majority of the work for Task 6E will be carried out during H-16. During H-15, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of precipitant concentration. These approaches were based on the GoldSim precipitant data management features, and were

  12. Waveform through the subducted plate under the Tokyo region in Japan observed by an ultra-dense seismic network (MeSO-net) and seismic activity around the mega-thrust earthquake area

    Science.gov (United States)

    Sakai, S.; Kasahara, K.; Nanjo, K.; Nakagawa, S.; Tsuruoka, H.; Morita, Y.; Kato, A.; Iidaka, T.; Hirata, N.; Tanada, T.; Obara, K.; Sekine, S.; Kurashimo, E.

    2009-12-01

    In central Japan, the Philippine Sea plate (PSP) subducts beneath the Tokyo Metropolitan area, the Kanto region, where it causes mega-thrust earthquakes, such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9), which had 105,000 fatalities. An M7 or greater earthquake in this region at present has high potential to produce devastating loss of life and property with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates the next great earthquake will cause 11,000 fatalities and 112 trillion yen (1 trillion US$) economic loss. This great earthquake is evaluated to occur with a probability of 70% in 30 years by the Earthquake Research Committee of Japan. We started the Special Project for Earthquake Disaster Mitigation in Tokyo Metropolitan area (2007-2012). Under this project, construction of the Metropolitan Seismic Observation network (MeSO-net), which will consist of about 400 observation sites, was started [Kasahara et al., 2008; Nakagawa et al., 2008]. We now have 178 observation sites. Waveform correlation is high because observation points are deployed at about 2 km intervals, and later phases can be identified easily even though artificial noise is very large. We also discuss the relation between deformation of the PSP and intra-plate M7+ earthquakes: the PSP is subducting beneath the Honshu arc and also colliding with the Pacific plate. The subduction and collision both contribute to active seismicity in the Kanto region. We are going to present a high-resolution tomographic image showing a low-velocity zone which suggests a possible internal failure of the plate: a source region of M7+ intra-plate earthquakes. Our study will contribute a new assessment of the seismic hazard in the Metropolitan area of Japan. Acknowledgement: This study was supported by the Earthquake Research Institute cooperative research program.

  13. DC Collection Network Simulation for Offshore Wind Farms

    DEFF Research Database (Denmark)

    Vogel, Stephan; Rasmussen, Tonny Wederberg; El-Khatib, Walid Ziad

    2015-01-01

    The possibility to connect offshore wind turbines with a collection network based on Direct Current (DC), instead of Alternating Current (AC), has gained attention in the scientific and industrial environment. DC components have many promising properties that could be beneficial, such as smaller dimensions, less weight, fewer conductors, no reactive-power considerations, and lower overall losses due to the absence of proximity and skin effects. This work describes a study of the simulation of a Medium Voltage DC (MVDC) grid in an offshore wind farm. Suitable converter concepts

  14. Impact of the 2011 Tohoku-oki earthquake on the Tokyo Metropolitan area observed by the Metropolitan Seismic Observation network (MeSO-net)

    Science.gov (United States)

    Hirata, N.; Hayashi, H.; Nakagawa, S.; Sakai, S.; Honda, R.; Kasahara, K.; Obara, K.; Aketagawa, T.; Kimura, H.; Sato, H.; Okaya, D. A.

    2011-12-01

    The March 11, 2011 Tohoku-oki earthquake had a great impact on the Tokyo metropolitan area, in both seismological research and seismic risk management, although Tokyo is located 340 km from the epicenter. The event generated very strong ground motion even in the metropolitan area and resulted in severe liquefaction in many places of the Kanto district. National and local governments have started to discuss countermeasures for possible seismic risks in the area, taking into account what they learned from the Tohoku-oki event, which was much larger than any ever experienced in Japan. Risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern because it caused past mega-thrust earthquakes, such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9). An M7 or greater (M7+) earthquake in this area at present has high potential to produce devastating loss of life and property with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that an M7+ earthquake will cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) economic loss. In order to mitigate disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area was launched in collaboration with scientists, engineers, and social scientists in nationwide institutions. We will discuss the main results obtained in the respective fields, which have been integrated to improve information on the strategy assessment for seismic risk mitigation in the Tokyo metropolitan area; the project has been much improved after the Tohoku event. In order to image the seismic structure beneath the metropolitan Tokyo area we have developed the Metropolitan Seismic Observation network (MeSO-net; Hirata et al., 2009). We have installed 296 seismic stations every few km (Kasahara et al., 2011). We conducted seismic

  15. Neural Networks Simulation of the Transport of Contaminants in Groundwater

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2009-12-01

    Full Text Available The performance assessment of an engineered solution for the disposal of radioactive wastes is based on mathematical models of the disposal system response to predefined accidental scenarios, within a probabilistic approach to account for the involved uncertainties. As the most significant potential pathway for the return of radionuclides to the biosphere is groundwater flow, intensive computational efforts are devoted to simulating the behaviour of the groundwater system surrounding the waste deposit, for different values of its hydrogeological parameters and for different evolution scenarios. In this paper, multilayered neural networks are trained to simulate the transport of contaminants in monodimensional and bidimensional aquifers. The results obtained in two case studies indicate that the approximation errors are within the uncertainties which characterize the input data.

  16. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to fix a complicated non-linear model function, but it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  17. Dynamic Simulation of the 2011 M9.0 Tohoku Earthquake with Geometric Complexity on a Rate- and State-dependent Subduction Plane

    Science.gov (United States)

    Luo, B.; Duan, B.

    2015-12-01

    The Mw 9.0 Tohoku megathrust earthquake on 11 March 2011 was a great surprise to the scientific community due to its unexpected occurrence on the subduction zone of the Japan Trench, where earthquakes of magnitude ~7 to 8 were expected based on historical records. Slip distribution and kinematic slip history inverted from seismic data, GPS and tsunami recordings reveal two major aspects of this big event: a strong asperity near the hypocenter and large slip near the trench. To investigate the physical conditions behind these two aspects, we perform dynamic rupture simulations on a shallow-dipping rate- and state-dependent subduction plane with topographic relief. Although the existence of a subducted seamount just up-dip of the hypocenter is still an open question, high-Vp anomalies [Zhao et al., 2011] and low-Vp/Vs anomalies [Yamamoto et al., 2014] there strongly suggest that some kind of topographic relief exists. We explicitly incorporate a subducted seamount on the subduction surface into our models. Our preliminary results show that the subducted seamount plays a significant role in dynamic rupture propagation due to the alteration of the stress state around it. We find that a subducted seamount can act as a strong barrier to many earthquakes, but its ultimate failure after some earthquake cycles results in giant earthquakes. Its failure gives rise to a large stress drop, resulting in a strong asperity in the slip distribution as revealed by kinematic inversions. Our preliminary results also suggest that the rate- and state-dependent friction law plays an important role in rupture propagation on geometrically complex faults. Although rate-strengthening behavior near the trench impedes rupture propagation, an energetic rupture can break such a barrier and manage to reach the trench, resulting in significant uplift of the seafloor and hence a devastating tsunami.
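    For context, rate- and state-dependent friction of the kind referred to in this record is commonly written in the Dieterich-Ruina form (shown here with the aging law for state evolution; the record does not specify which variant the authors adopt):

```latex
\mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0 \theta}{D_c},
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} = 1 - \frac{V\theta}{D_c}
```

    Here \(\mu_0\) is the reference friction coefficient at slip rate \(V_0\), \(\theta\) is the state variable, and \(D_c\) is the characteristic slip distance. Regions with \(a - b > 0\) are rate-strengthening, like the near-trench behavior described above, while \(a - b < 0\) gives rate-weakening patches capable of nucleating and sustaining rupture.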

  18. Network Flow Simulation of Fluid Transients in Rocket Propulsion Systems

    Science.gov (United States)

    Bandyopadhyay, Alak; Hamill, Brian; Ramachandran, Narayanan; Majumdar, Alok

    2011-01-01

    Fluid transients, also known as water hammer, can have a significant impact on the design and operation of both spacecraft and launch vehicle propulsion systems. These transients often occur at system activation and shutdown. The pressure rise due to the sudden opening and closing of valves in propulsion feed lines can cause serious damage during activation and shutdown of propulsion systems. During activation (valve opening) and shutdown (valve closing), pressure surges must be predicted accurately to ensure the structural integrity of the propulsion system fluid network. In the current work, network flow simulation software (the Generalized Fluid System Simulation Program, GFSSP), based on the finite volume method, has been used to predict the pressure surges in the feed line due to both valve closing and valve opening, using two separate geometrical configurations. The valve-opening pressure surge results are compared with experimental data available in the literature, and the numerical results agree well (within 5%) for a wide range of inlet-to-initial pressure ratios. A Fast Fourier Transform is performed on the pressure oscillations to predict the modal frequencies of the pressure wave. For the shutdown (valve closing) problem, the simulation results are compared with the results of the Method of Characteristics. Most rocket engines experience a longitudinal acceleration, known as "pogo," during the later stage of engine burn. In the shutdown example problem, an accumulator has been included in the feed system to demonstrate its pogo-mitigation effect on the propellant feed system. The GFSSP simulation results compare very well with the Method of Characteristics results.
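    As a companion to the valve-closing comparison above, the core of the Method of Characteristics for a single pipe can be sketched briefly. This is a minimal illustration, not the paper's GFSSP or MOC setup: the pipe length, wave speed, and instantaneous-closure boundary condition are all assumed for the example. For a frictionless pipe marched at Courant number one, the scheme reproduces the classical Joukowsky head rise a*V0/g at the valve.

```python
import math

def moc_valve_closure(L=100.0, a=1000.0, V0=1.0, H_res=50.0,
                      n=20, t_end=0.5, g=9.81):
    """Method of Characteristics for a frictionless pipe with instantaneous
    closure of a downstream valve fed by a constant-head reservoir.
    Returns the peak head rise at the valve above the steady head."""
    dx = L / n
    dt = dx / a                # Courant number = 1: characteristics hit nodes
    B = a / g                  # characteristic impedance (head per velocity)
    H = [H_res] * (n + 1)      # steady-state head (no friction)
    V = [V0] * (n + 1)         # steady-state velocity
    t, peak = 0.0, 0.0
    while t < t_end:
        Hn, Vn = H[:], V[:]
        for i in range(1, n):  # interior nodes: intersect C+ and C- lines
            Cp = H[i - 1] + B * V[i - 1]
            Cm = H[i + 1] - B * V[i + 1]
            Hn[i] = 0.5 * (Cp + Cm)
            Vn[i] = (Cp - Cm) / (2 * B)
        Hn[0] = H_res                       # upstream reservoir: fixed head
        Vn[0] = (H_res - (H[1] - B * V[1])) / B
        Vn[n] = 0.0                         # closed valve: zero velocity
        Hn[n] = H[n - 1] + B * V[n - 1]     # C+ characteristic at the valve
        H, V = Hn, Vn
        t += dt
        peak = max(peak, H[n] - H_res)
    return peak
```

    With the defaults, the peak rise equals a*V0/g, roughly 102 m of head, which is the textbook Joukowsky surge for sudden closure.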

  19. Simulating Earthquake Rupture and Off-Fault Fracture Response: Application to the Safety Assessment of the Swedish Nuclear Waste Repository

    KAUST Repository

    Falth, B.; Hokmark, H.; Lund, B.; Mai, Paul Martin; Roberts, R.; Munier, R.

    2014-01-01

    To assess the long-term safety of a deep repository of spent nuclear fuel, upper bound estimates of seismically induced secondary fracture shear displacements are needed. For this purpose, we analyze a model including an earthquake fault, which

  20. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    Science.gov (United States)

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, each problem is solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no linearizing assumptions are made regarding the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or with experimental data published in the scientific literature to show the reliability of the models.
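    The electrical-analogy idea behind the network method can be illustrated on the simplest possible case: 1-D transient heat conduction mapped onto an RC ladder, where resistors play the role of thermal resistance and capacitors the role of heat capacity. The sketch below is illustrative only (node count, R, C, boundary voltages, and the explicit Euler integrator are all assumptions; an actual network model would be run in a circuit simulator rather than hand-integrated):

```python
def rc_ladder_transient(n=10, R=1.0, C=1.0, V_left=1.0, V_right=0.0,
                        dt=0.01, steps=20000):
    """Transient of an RC ladder with n internal nodes between two
    fixed-voltage ends: the electrical analogue of 1-D heat conduction.
    Explicit Euler update of the node voltages (stable for dt <= R*C/2)."""
    V = [0.0] * (n + 2)
    V[0], V[n + 1] = V_left, V_right      # clamped boundary "temperatures"
    for _ in range(steps):
        Vn = V[:]
        for i in range(1, n + 1):
            I_in = (V[i - 1] - V[i]) / R   # current from the left node
            I_out = (V[i] - V[i + 1]) / R  # current to the right node
            Vn[i] = V[i] + dt * (I_in - I_out) / C
        V = Vn
    return V
```

    After many time constants the node voltages settle onto the linear steady-state profile, exactly as the temperature would in a rod with fixed end temperatures.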

  1. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    Science.gov (United States)

    2018-01-01

    Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, each problem is solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no linearizing assumptions are made regarding the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or with experimental data published in the scientific literature to show the reliability of the models. PMID:29518121

  2. Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory

    2008-01-01

    We propose a universal approach for the analysis and fast simulation of stiff stochastic biochemical kinetics networks, which rests on the elimination of fast chemical species without loss of information about the mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for the cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation that can be used in coarse-grained numerical simulation schemes with small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions and show that the coarse-grained and microscopic simulations are in agreement, while the coarse-grained simulations are three orders of magnitude faster.
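    For context, the microscopic baseline that such coarse-graining accelerates is the exact stochastic simulation (Gillespie) algorithm. The sketch below runs it on the smallest possible network, a single birth-death species (the rates are illustrative assumptions, not taken from the paper); the time-averaged copy number approaches the deterministic value k/gamma.

```python
import random

def gillespie_birth_death(k=10.0, gamma=1.0, t_end=2000.0, seed=0):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death process:
    X -> X+1 at rate k, X -> X-1 at rate gamma*X.
    Returns the time-averaged copy number, which approaches k/gamma."""
    rng = random.Random(seed)
    t, x, acc = 0.0, 0, 0.0
    while t < t_end:
        a1, a2 = k, gamma * x          # reaction propensities
        a0 = a1 + a2
        tau = rng.expovariate(a0)      # exponential waiting time
        acc += x * min(tau, t_end - t) # time-weighted accumulation of x
        t += tau
        if rng.random() * a0 < a1:     # pick which reaction fired
            x += 1
        else:
            x -= 1
    return acc / t_end
```

    Even this one-species example makes the stiffness problem visible: the step size tau shrinks as propensities grow, which is exactly what motivates eliminating the fast species analytically.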

  3. Simulating Real-Time Aspects of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Christian Nastasi

    2010-01-01

    Full Text Available Wireless Sensor Network (WSN) technology has mainly been used in applications with low-frequency sampling and little computational complexity. Recently, new classes of WSN-based applications with different characteristics are being considered, including process control, industrial automation and visual surveillance. Such new applications usually involve relatively heavy computations and also present real-time requirements, such as bounded end-to-end delay and guaranteed Quality of Service. It then becomes necessary to employ proper resource management policies, not only for communication resources but also jointly for computing resources, in the design and development of such WSN-based applications. In this context, simulation can play a critical role, together with analytical models, in validating a system design against the required Quality of Service parameters. In this paper, we present RTNS, a publicly available free simulation tool which includes operating system aspects in wireless distributed applications. RTNS extends the well-known NS-2 simulator with models of the CPU, the real-time operating system and the application tasks, to take into account delays due to computation in addition to communication. We demonstrate the benefits of RTNS by presenting our simulation study of a complex WSN-based multi-view vision system for real-time event detection.

  4. Steady-state modeling and simulation of pipeline networks for compressible fluids

    Directory of Open Access Journals (Sweden)

    A.L.H. Costa

    1998-12-01

    Full Text Available This paper presents a model and an algorithm for the simulation of pipeline networks carrying compressible fluids. The model can predict pressures, flow rates, temperatures and gas compositions at any point of the network. Any network configuration can be simulated; the existence of cycles is not an obstacle. Numerical results from simulated data on a proposed network are shown for illustration. The potential of the simulator is explored through the analysis of a pressure relief network, using a stochastic procedure for the evaluation of system performance.
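    To make the looped-network point concrete, here is a minimal steady-state solve of a single-loop gas network using a Hardy Cross-style loop correction on a squared-pressure (Weymouth-type) pipe law. The three-node topology, resistances, and demands are illustrative assumptions, not the paper's network or algorithm:

```python
import math

def hardy_cross_triangle(p1=60.0, K12=1.0, K13=1.0, K23=1.0,
                         D2=5.0, D3=3.0, iters=100):
    """Single-loop network: source node 1 (fixed pressure), demand nodes
    2 and 3, pipes 1-2, 1-3, 2-3. Pipe law: p_i^2 - p_j^2 = K * Q|Q|.
    Start from any flow pattern satisfying continuity, then iterate the
    Hardy Cross loop correction until the loop head loss closes."""
    # initial continuity-satisfying guess: each demand served directly
    Q12, Q23, Q13 = D2, 0.0, D3        # Q23 is flow 2 -> 3 (loop direction)
    for _ in range(iters):
        # signed head-loss sum around the loop 1 -> 2 -> 3 -> 1
        h = (K12 * Q12 * abs(Q12) + K23 * Q23 * abs(Q23)
             - K13 * Q13 * abs(Q13))
        dh = 2 * (K12 * abs(Q12) + K23 * abs(Q23) + K13 * abs(Q13))
        dQ = -h / dh                   # loop correction (Newton step)
        Q12 += dQ
        Q23 += dQ
        Q13 -= dQ
    # recover node pressures from the pipe law
    p2 = math.sqrt(p1 ** 2 - K12 * Q12 * abs(Q12))
    p3 = math.sqrt(p1 ** 2 - K13 * Q13 * abs(Q13))
    return Q12, Q23, Q13, p2, p3
```

    The loop correction leaves the node mass balances invariant while driving the loop head-loss residual to zero, which is why the cycle poses no difficulty.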

  5. Evaluation of the Pseudostatic Analyses of Earth Dams Using FE Simulation and Observed Earthquake-Induced Deformations: Case Studies of Upper San Fernando and Kitayama Dams

    Directory of Open Access Journals (Sweden)

    Tohid Akhlaghi

    2014-01-01

    Full Text Available The accuracy of the pseudostatic approach is governed by how well the simple pseudostatic inertial forces represent the complex dynamic inertial forces that actually exist in an earthquake. In this study, the Upper San Fernando and Kitayama earth dams, which were designed using the pseudostatic approach and damaged during the 1971 San Fernando and 1995 Kobe earthquakes, were investigated and analyzed. Finite element models of the dams were prepared based on the detailed available data and the results of in situ and laboratory material tests. Dynamic analyses were conducted to simulate the earthquake-induced deformations of the dams using the Plaxis finite element code. The pseudostatic seismic coefficients used in the design and analysis of the dams were then compared with the seismic coefficients obtained from dynamic analyses of the simulated models as well as with other available pseudostatic correlations. Based on these comparisons, the accuracy and reliability of the pseudostatic seismic coefficients are evaluated and discussed.

  6. Physics-Based Simulations of Natural Hazards

    Science.gov (United States)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear, multi-scale character of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scale: from tectonic plates and their evolution over millions of years, down to rock fracturing at the sub-centimeter scale over milliseconds to minutes during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of such relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs, allowing us to measure otherwise unobservable statistics. In this dissertation I discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. This section then focuses on a resulting collaboration using Virtual Quake for a detailed
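    The frequency-magnitude relation mentioned above is the Gutenberg-Richter law, log10 N(>=M) = a - bM. A synthetic-catalog sketch (the catalog here is randomly generated for illustration, not Virtual Quake output) shows how the b-value is recovered with the Aki maximum-likelihood estimator b = log10(e) / (mean(M) - Mmin):

```python
import math
import random

def synthetic_catalog(n=10000, m_min=2.0, b=1.0, seed=42):
    """Draw magnitudes from a Gutenberg-Richter distribution: above m_min,
    M - m_min is exponentially distributed with rate b*ln(10)."""
    rng = random.Random(seed)
    beta = b * math.log(10)
    return [m_min + rng.expovariate(beta) for _ in range(n)]

def aki_b_value(mags, m_min):
    """Aki maximum-likelihood b-value estimate from a complete catalog."""
    mean_excess = sum(m - m_min for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess
```

    With only a few hundred years of real data the estimator's scatter is large, which is precisely why long simulated catalogs are valuable for hazard statistics.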

  7. Performance of Irikura's Recipe Rupture Model Generator in Earthquake Ground Motion Simulations as Implemented in the Graves and Pitarka Hybrid Approach.

    Energy Technology Data Exchange (ETDEWEB)

    Pitarka, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-22

    We analyzed the performance of the Irikura and Miyake (2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure, using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (2016) (GP2016). The level of simulated ground motions for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground-motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level between the two approaches occurs in the period band 1-3 s, where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s band that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study.

  8. Contribution of the Surface and Down-Hole Seismic Networks to the Location of Earthquakes at the Soultz-sous-Forêts Geothermal Site (France)

    Science.gov (United States)

    Kinnaert, X.; Gaucher, E.; Kohl, T.; Achauer, U.

    2018-03-01

    Seismicity induced in geo-reservoirs can be a valuable observation to image fractured reservoirs, to characterize hydrological properties, or to mitigate seismic hazard. However, this requires accurate location of the seismicity, which is nowadays an important seismological task in reservoir engineering. The earthquake location (determination of the hypocentres) depends on the model used to represent the medium in which the seismic waves propagate and on the seismic monitoring network. In this work, location uncertainties and location inaccuracies are modeled to investigate the impact of several parameters on the determination of the hypocentres: the picking uncertainty, the numerical precision of picked arrival times, a velocity perturbation and the seismic network configuration. The method is applied to the geothermal site of Soultz-sous-Forêts, which is located in the Upper Rhine Graben (France) and which was subject to detailed scientific investigations. We focus on a massive water injection performed in the year 2000 to enhance the productivity of the well GPK2 in the granitic basement, at approximately 5 km depth, and which induced more than 7000 earthquakes recorded by down-hole and surface seismic networks. We compare the location errors obtained from the joint or the separate use of the down-hole and surface networks. Besides the quantification of location uncertainties caused by picking uncertainties, the impact of the numerical precision of the picked arrival times as provided in a reference catalogue is investigated. The velocity model is also modified to mimic possible effects of a massive water injection and to evaluate its impact on earthquake hypocentres. It is shown that the use of the down-hole network in addition to the surface network provides smaller location uncertainties but can also lead to larger inaccuracies. Hence, location uncertainties would not be well representative of the location errors and interpretation of the seismicity
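    The dependence of hypocentre quality on network geometry and picking error can be made tangible with a toy location exercise. The sketch below is purely illustrative (homogeneous velocity, 2-D geometry, and invented station and event positions, nothing from the Soultz-sous-Forêts study): it locates an event by grid search after removing the unknown origin time by demeaning the residuals.

```python
import math

def travel_time(src, sta, v=5.0):
    """Straight-ray travel time in a homogeneous medium (km and km/s)."""
    return math.dist(src, sta) / v

def grid_search_locate(stations, picks, v=5.0, extent=10.0, step=0.5):
    """Locate an event by minimizing the RMS misfit between observed and
    predicted arrival times over a 2-D grid. The unknown origin time is
    removed by demeaning the residuals at each trial point."""
    best, best_rms = None, float("inf")
    n = int(extent / step) + 1
    for i in range(n):
        for j in range(n):
            src = (i * step, j * step)
            pred = [travel_time(src, s, v) for s in stations]
            res = [p - q for p, q in zip(picks, pred)]
            mean = sum(res) / len(res)
            rms = math.sqrt(sum((r - mean) ** 2 for r in res) / len(res))
            if rms < best_rms:
                best, best_rms = src, rms
    return best, best_rms
```

    Perturbing the picks or removing stations from one side of the grid shows directly how uncertainty and systematic inaccuracy grow, which is the effect quantified at full scale in the study above.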

  9. Wireless Power Transfer Protocols in Sensor Networks: Experiments and Simulations

    Directory of Open Access Journals (Sweden)

    Sotiris Nikoletseas

    2017-04-01

    Full Text Available Rapid technological advances in the domain of Wireless Power Transfer pave the way for novel methods of power management in systems of wireless devices, and recent research has already started considering algorithmic solutions to the emerging problems. In this paper, we investigate the problem of efficient and balanced Wireless Power Transfer in Wireless Sensor Networks. We employ wireless chargers that replenish the energy of network nodes, and we propose two protocols that configure the activity of the chargers. One protocol performs wireless charging focused on charging efficiency, while the other aims at a proper balance of the chargers' residual energy. We conduct detailed experiments using real devices and validate the experimental results via larger-scale simulations. In both the experimental evaluation and the evaluation through detailed simulations, both protocols achieve their main goals: the Charging Oriented protocol achieves good charging efficiency throughout the experiment, while the Energy Balancing protocol achieves a uniform distribution of energy within the chargers.
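    The trade-off between the two goals can be caricatured in a few lines. The numbers and selection rules below are illustrative stand-ins, not the paper's protocols: the efficiency-oriented rule always uses the assumed most-efficient charger, while the balancing rule always discharges the charger with the most residual energy.

```python
def run_rounds(residuals, pick, rounds=8, dose=5.0):
    """Deliver `dose` energy units per round from the charger chosen by the
    `pick` policy; return the chargers' residual energies."""
    res = list(residuals)
    for _ in range(rounds):
        res[pick(res)] -= dose
    return res

def spread(res):
    """Population variance of the residual energies."""
    m = sum(res) / len(res)
    return sum((r - m) ** 2 for r in res) / len(res)

start = [100.0, 80.0, 60.0, 40.0]
# efficiency-oriented: always the (assumed) most efficient charger, index 0
oriented = run_rounds(start, lambda r: 0)
# energy balancing: always the charger with the largest residual energy
balanced = run_rounds(start, lambda r: r.index(max(r)))
```

    Both policies deliver the same total energy, but the balancing policy leaves the chargers' residuals noticeably more uniform, mirroring the qualitative finding above.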

  10. Simulation of a Dispersive Tsunami due to the 2016 El Salvador-Nicaragua Outer-Rise Earthquake (Mw 6.9)

    Science.gov (United States)

    Tanioka, Yuichiro; Ramirez, Amilcar Geovanny Cabrera; Yamanaka, Yusuke

    2018-01-01

    The 2016 El Salvador-Nicaragua outer-rise earthquake (Mw 6.9) generated a small tsunami observed at the ocean bottom pressure sensor DART 32411 in the Pacific Ocean off Central America. The observed dispersive tsunami is well simulated using the linear Boussinesq equations. From the dispersive character of the tsunami waveform, the fault length and width of the outer-rise event are estimated to be 30 and 15 km, respectively. The estimated seismic moment of 3.16 × 10^19 Nm is the same as the estimate in the Global CMT catalog. The dispersive character of the tsunami in the deep ocean caused by the 2016 outer-rise El Salvador-Nicaragua earthquake could thus constrain the fault size and the slip amount, or the seismic moment, of the event.
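    The dispersive character referred to above comes from the linear water-wave dispersion relation, omega^2 = g*k*tanh(k*h), which the linear Boussinesq equations approximate. A quick check (the depth and wavelengths here are assumed round numbers, not values from the paper) shows why wave components on the scale of a ~30 km fault, crossing ocean a few kilometres deep, lag measurably behind the long-wave front:

```python
import math

def phase_speed(wavelength, depth, g=9.81):
    """Phase speed from the full linear dispersion relation
    omega^2 = g * k * tanh(k * h)."""
    k = 2 * math.pi / wavelength
    return math.sqrt(g * math.tanh(k * depth) / k)

def long_wave_speed(depth, g=9.81):
    """Non-dispersive shallow-water (long-wave) speed sqrt(g*h)."""
    return math.sqrt(g * depth)
```

    At 4000 m depth the long-wave speed is about 198 m/s, while a 30 km component travels roughly 10% slower; it is this wavelength-dependent delay in the recorded waveform that constrains the source dimensions.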

  11. Simulation of a Dispersive Tsunami due to the 2016 El Salvador-Nicaragua Outer-Rise Earthquake (Mw 6.9)

    Science.gov (United States)

    Tanioka, Yuichiro; Ramirez, Amilcar Geovanny Cabrera; Yamanaka, Yusuke

    2018-04-01

    The 2016 El Salvador-Nicaragua outer-rise earthquake (Mw 6.9) generated a small tsunami observed at the ocean bottom pressure sensor DART 32411 in the Pacific Ocean off Central America. The observed dispersive tsunami is well simulated using the linear Boussinesq equations. From the dispersive character of the tsunami waveform, the fault length and width of the outer-rise event are estimated to be 30 and 15 km, respectively. The estimated seismic moment of 3.16 × 10^19 Nm is the same as the estimate in the Global CMT catalog. The dispersive character of the tsunami in the deep ocean caused by the 2016 outer-rise El Salvador-Nicaragua earthquake could thus constrain the fault size and the slip amount, or the seismic moment, of the event.

  12. Simulating Quantitative Cellular Responses Using Asynchronous Threshold Boolean Network Ensembles

    Directory of Open Access Journals (Sweden)

    Shah Imran

    2011-07-01

    Full Text Available Background: With increasing knowledge about the potential mechanisms underlying cellular functions, it is becoming feasible to predict the response of biological systems to genetic and environmental perturbations. Due to the lack of homogeneity in living tissues, it is difficult to estimate the physiological effect of chemicals, including potential toxicity. Here we investigate a biologically motivated model for estimating tissue-level responses by aggregating the behavior of a cell population. We assume that the molecular state of individual cells is independently governed by discrete non-deterministic signaling mechanisms. This results in noisy but highly reproducible aggregate-level responses that are consistent with experimental data. Results: We developed an asynchronous threshold Boolean network simulation algorithm to model signal transduction in a single cell, and then used an ensemble of these models to estimate the aggregate response across a cell population. Using published data, we derived a putative crosstalk network involving growth factors and cytokines (Epidermal Growth Factor, Insulin, Insulin-like Growth Factor Type 1, and Tumor Necrosis Factor α) to describe early signaling events in cell proliferation signal transduction. The reproducibility of the modeling technique across ensembles of Boolean networks representing cell populations is investigated. Furthermore, we compare our simulation results to experimental observations of hepatocytes reported in the literature. Conclusion: A systematic analysis of the results following differential stimulation of this model by growth factors and cytokines suggests that (a) using Boolean network ensembles with asynchronous updating provides biologically plausible noisy individual cellular responses with reproducible mean behavior for large cell populations, and (b) with sufficient data our model can estimate the response to different concentrations of extracellular ligands. Our
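    A minimal version of asynchronous threshold Boolean updating with ensemble aggregation can be sketched as follows. The three-node cascade and its weights are invented for illustration (the crosstalk network in the paper is far richer): node 0 is a clamped ligand input, each randomly chosen node switches on when the weighted sum of its inputs is positive, and the ensemble response is the fraction of cells whose output node ends up active.

```python
import random

def simulate_cell(W, state, steps, rng):
    """Asynchronous threshold Boolean update for one cell: pick a random
    node, set it to 1 if the weighted sum of its inputs exceeds 0, else 0.
    Node 0 is a clamped input (e.g., a constant ligand signal)."""
    n = len(state)
    for _ in range(steps):
        i = rng.randrange(1, n)                       # node 0 stays clamped
        s = sum(W[i][j] * state[j] for j in range(n))
        state[i] = 1 if s > 0 else 0
    return state

def ensemble_response(W, n_cells=200, steps=50, seed=1):
    """Aggregate response of a cell population: fraction of cells whose
    output node (last index) is active after asynchronous updating."""
    rng = random.Random(seed)
    active = 0
    for _ in range(n_cells):
        state = [1] + [0] * (len(W) - 1)              # ligand on, rest off
        active += simulate_cell(W, state, steps, rng)[-1]
    return active / n_cells
```

    Because each cell visits nodes in its own random order, individual trajectories are noisy, yet the population fraction is highly reproducible, which is the aggregation effect the study exploits.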

  13. New approach for simulating groundwater flow in discrete fracture network

    Science.gov (United States)

    Fang, H.; Zhu, J.

    2017-12-01

    In this study, we develop a new approach to calculate groundwater flowrates and hydraulic head distributions in a two-dimensional discrete fracture network (DFN) where laminar and turbulent flows co-exist in individual fractures. The cubic law is used to calculate the hydraulic head distribution and flow behavior in fractures where flow is laminar, while Forchheimer's law is used to quantify turbulent flow behavior. The Reynolds number is used to distinguish the flow characteristics in individual fractures. The resulting combination of linear and non-linear equations is solved iteratively to determine the flowrates in all fractures and the hydraulic heads at all intersections. We examine the potential errors in both flowrate and hydraulic head introduced by a uniform-flow assumption. Applying the cubic law in all fractures regardless of actual flow conditions overestimates the flowrate when turbulent flow may exist, while applying Forchheimer's law indiscriminately underestimates the flowrate when laminar flow exists in the network. The contrast between the apertures of large and small fractures in the DFN has a significant impact on the potential errors of using only the cubic law or only Forchheimer's law. Both laws produce similar hydraulic head distributions; the main difference between the two approaches lies in the predicted flowrates. Fracture irregularity does not significantly affect the potential errors from using only the cubic law or only Forchheimer's law if the network configuration remains similar. The relative density of fractures does not significantly affect the relative performance of the cubic law and Forchheimer's law.
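    The laminar-turbulent split described above can be illustrated for a single fracture. The coefficients below are illustrative assumptions (in particular the inertial factor beta is not from the paper); the point is that the two laws agree at low pressure gradients and that the Forchheimer flowrate falls below the cubic-law prediction as inertia becomes important.

```python
import math

def cubic_law_q(b, dp_dx, mu=1e-3):
    """Laminar flowrate per unit fracture width (cubic law),
    aperture b [m], pressure gradient dp_dx [Pa/m], viscosity mu [Pa s]."""
    return b ** 3 * dp_dx / (12 * mu)

def forchheimer_q(b, dp_dx, mu=1e-3, rho=1000.0, beta=1e5):
    """Flowrate from a Forchheimer-type law dp/dx = A*q + B*q^2, with
    A the cubic-law (viscous) coefficient and B an inertial term built
    from an illustrative resistance factor beta. Positive root of the
    quadratic in q."""
    A = 12 * mu / b ** 3
    B = beta * rho / b ** 2
    return (-A + math.sqrt(A * A + 4 * B * dp_dx)) / (2 * B)

def reynolds(q, rho=1000.0, mu=1e-3):
    """Reynolds number per unit width, Re = rho*q/mu, used to decide
    which law applies in a given fracture."""
    return rho * q / mu
```

    In a network solver, each fracture would be assigned one law or the other from its Reynolds number, and the resulting mixed linear/non-linear system iterated to convergence, as in the study above.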

  14. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network

  15. GPS Monitoring of Surface Change During and Following the Fortuitous Occurrence of the Mw = 7.3 Landers Earthquake in our Network

    Science.gov (United States)

    Miller, M. Meghan

    1998-01-01

    Accomplishments: (1) Continues GPS monitoring of surface change during and following the fortuitous occurrence of the Mw = 7.3 Landers earthquake in our network, in order to characterize earthquake dynamics and the accelerated activity of related faults as far as hundreds of kilometers along strike. (2) Integrates the geodetic constraints into consistent kinematic descriptions of the deformation field that can in turn be used to characterize the processes that drive geodynamics, including seismic cycle dynamics. In 1991, we installed and occupied a high-precision GPS geodetic network to measure transform-related deformation that is partitioned from the Pacific-North America plate boundary northeastward through the Mojave Desert, via the Eastern California shear zone, to the Walker Lane. The onset of the Mw = 7.3 June 28, 1992, Landers, California, earthquake sequence within this network posed unique opportunities for continued monitoring of regional surface deformation related to the culmination of a major seismic cycle, characterization of the dynamic behavior of continental lithosphere during the seismic sequence, and post-seismic transient deformation. During the last year, we have reprocessed all three previous epochs for which JPL fiducial-free point positioning products are available and are queued for the remaining needed products, completed two field campaigns monitoring approximately 20 sites (October 1995 and September 1996), begun modeling by developing a finite element mesh based on network station locations, and developed manuscripts dealing with both the Landers-related transient deformation at the latitude of Lone Pine and the velocity field of the whole experiment. We are currently deploying a 1997 observation campaign (June 1997). We use GPS geodetic studies to characterize deformation in the Mojave Desert region and related structural domains to the north, together with geophysical modeling of lithospheric behavior. The modeling is constrained by our

  16. The Puerto Rico Seismic Network Broadcast System: A user friendly GUI to broadcast earthquake messages, to generate shakemaps and to update catalogues

    Science.gov (United States)

    Velez, J.; Huerfano, V.; von Hillebrandt, C.

    2007-12-01

    The Puerto Rico Seismic Network (PRSN) has historically provided locations and magnitudes for earthquakes in the Puerto Rico and Virgin Islands (PRVI) region. PRSN is the reporting authority for the region bounded by latitudes 17.0N to 20.0N and longitudes 63.5W to 69.0W. The main objective of the PRSN is to record, process, analyze, provide information on, and research local, regional and teleseismic earthquakes, supplying high-quality data and information to respond to the needs of the emergency management, academic and research communities, and the general public. The PRSN runs Earthworm software (Johnson et al., 1995) to acquire and write waveforms to disk for permanent archival. Automatic locations and alerts are generated for events in Puerto Rico, the Intra-America Seas, and the Atlantic by the EarlyBird system (Whitmore and Sokolowski, 2002), which monitors PRSN stations as well as some 40 additional stations run by networks operating in North, Central and South America and at other sites in the Caribbean. PRDANIS (Puerto Rico Data Analysis and Information System) software, developed by PRSN, supports manual locations and analyst review of automatic locations of events within the PRSN area of responsibility (AOR), using all the broadband, strong-motion and short-period waveforms. Rapidly available information regarding the geographic distribution of ground shaking, in relation to the population and infrastructure at risk, can assist emergency response communities in efficient and optimized allocation of resources following a large earthquake. The ShakeMap system developed by the USGS provides near real-time maps of instrumental ground motions and shaking intensity and has proven effective in rapid assessment of the extent of shaking and potential damage after significant earthquakes (Wald, 2004). In Northern and Southern California, the Pacific Northwest, and the states of Utah and Nevada, ShakeMaps are used for emergency planning and response, loss

  17. A simulation of Earthquake Loss Estimation in Southeastern Korea using HAZUS and the local site classification Map

    Science.gov (United States)

    Kang, S.; Kim, K.

    2013-12-01

    Regionally varying seismic hazards can be estimated using an earthquake loss estimation system (e.g., HAZUS-MH). Estimates for actual earthquakes help federal and local authorities develop rapid, effective recovery measures; estimates for scenario earthquakes help in designing a comprehensive earthquake hazard mitigation plan. Local site characteristics influence the ground motion. Although direct measurements are desirable for constructing a site-amplification map, such data are expensive and time-consuming to collect. We therefore derived a site classification map of the southern Korean Peninsula using geologic and geomorphologic data, which are readily available for the entire region. Class B sites (mainly rock) are predominant in the area, although localized areas of softer soils are found along major rivers and seashores. The site classification map was compared with independent site classification studies to confirm that it effectively represents local site amplification during an earthquake. We then estimated the losses due to a magnitude 6.7 scenario earthquake in Gyeongju, southeastern Korea, with and without the site classification map, and observed significant differences. The loss estimated without the site classification map decreased smoothly with increasing epicentral distance, while the loss estimated with the site classification map varied from region to region, reflecting both epicentral distance and local site effects. The major cause of the large loss expected in Gyeongju is the short epicentral distance. Pohang Nam-Gu is located farther from the earthquake source region; nonetheless, its loss estimates are as large as those in Gyeongju, which is attributed to the site effect of the soft soil found widely in that area.

  18. Building Infrastructure for Preservation and Publication of Earthquake Engineering Research Data

    Directory of Open Access Journals (Sweden)

    Stanislav Pejša

    2014-10-01

    Full Text Available The objective of this paper is to showcase the progress of the earthquake engineering community during a decade-long effort supported by the National Science Foundation in the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES). During the four years that NEES network operations have been headquartered at Purdue University, the NEEScomm management team has facilitated an unprecedented cultural change in the way research is performed in earthquake engineering. NEES has not only played a major role in advancing the cyberinfrastructure required for transformative engineering research; NEES research outcomes are also making an impact by contributing to safer structures throughout the USA and abroad. This paper reflects on some of the developments and initiatives that helped instil change in the ways that the earthquake engineering and tsunami community shares and reuses data and collaborates in general.

  19. Far field tsunami simulations of the 1755 Lisbon earthquake: Implications for tsunami hazard to the U.S. East Coast and the Caribbean

    Science.gov (United States)

    Barkan, R.; ten Brink, Uri S.; Lin, J.

    2009-01-01

    The great Lisbon earthquake of November 1st, 1755, with an estimated moment magnitude of 8.5-9.0, was the most destructive earthquake in European history. The associated tsunami run-up was reported to have reached 5-15 m along the Portuguese and Moroccan coasts, and the run-up was significant at the Azores and Madeira Island. Run-up reports from a trans-oceanic tsunami were documented in the Caribbean, Brazil and Newfoundland (Canada). No reports were documented along the U.S. East Coast. Many attempts have been made to characterize the 1755 Lisbon earthquake source using geophysical surveys and modeling the near-field earthquake intensity and tsunami effects. Studying far-field effects, as presented in this paper, is advantageous in establishing constraints on source location and strike orientation because trans-oceanic tsunamis are less influenced by near-source bathymetry and are unaffected by triggered submarine landslides at the source. Source location, fault orientation and bathymetry are the main elements governing transatlantic tsunami propagation to sites along the U.S. East Coast, much more than distance from the source and continental shelf width. Results of our far- and near-field tsunami simulations based on relative amplitude comparison limit the earthquake source area to a region located south of the Gorringe Bank in the center of the Horseshoe Plain. This is in contrast with previously suggested sources such as the Marquês de Pombal Fault and Gulf of Cádiz Fault, which are farther east of the Horseshoe Plain. The earthquake was likely a thrust event on a fault striking ~345° and dipping to the ENE, as opposed to the suggested earthquake source of the Gorringe Bank Fault, which trends NE-SW. Gorringe Bank, the Madeira-Tore Rise (MTR), and the Azores appear to have acted as topographic scatterers for tsunami energy, shielding most of the U.S. East Coast from the 1755 Lisbon tsunami. Additional simulations to assess tsunami hazard to the U.S. East

  20. Linking fault permeability, fluid flow, and earthquake triggering in a hydrothermally active tectonic setting: Numerical Simulations of the hydrodynamics in the Tjörnes Fracture Zone, Iceland.

    Science.gov (United States)

    Lupi, M.; Geiger, S.; Graham, C.; Claesson, L.; Richter, B.

    2007-12-01

    A good insight into the transient fluid flow evolution within a hydrothermal system is of primary importance for understanding several geologic processes, for example the hydrodynamic triggering of earthquakes or the formation of mineral deposits. The strong permeability contrast between different crustal layers, as well as the high geothermal gradient of these areas, strongly affects the flow behaviour. In addition, the sudden and transient occurrence of joints, faults and magmatic intrusions is likely to change the hydrothermal flow paths within a very short time. The Tjörnes Fracture Zone (TFZ), north of Iceland, is such a hydrothermal area, where a high geothermal gradient, magmatic bodies, faults, and the strong contrast between sediments and fractured lava layers govern the large-scale fluid flow. The TFZ offsets the Kolbeinsey Ridge and the Northern Rift Zone. It is characterized by km-scale faults that link sub-seafloor sediments and lava layers with deeper crystalline rocks. These structures focus fluid flow and allow for mixing between cold seawater and deep hydrothermal fluids. Seismic activity in the TFZ is strong: earthquakes up to magnitude 7 have been recorded over the past years. Hydrogeochemical changes before, during and after a magnitude 5.8 earthquake suggest that the evolving stress state before the earthquake leads to (remote) permeability variations, which alter the fluid flow paths. This is in agreement with recent numerical fluid flow simulations which demonstrate that fluid flow in magmatic-hydrothermal systems is often convective and very sensitive to small variations in permeability. In order to understand the transient fluid flow behaviour in this complex geological environment, we have conducted numerical simulations of heat and mass transport in two geologically realistic cross-sectional models of the TFZ. The geologic models are discretised using finite element and finite volume methods. They hence have

  1. Comparison of Structurally Controlled Landslide Hazard Simulation to the Co-seismic Landslides Caused by the M 7.2 2013 Bohol Earthquake.

    Science.gov (United States)

    Galang, J. A. M. B.; Eco, R. C.; Lagmay, A. M. A.

    2014-12-01

    The M_w 7.2 October 15, 2013 Bohol earthquake was one of the most destructive earthquakes to hit the Philippines in the 21st century. The epicenter was located in Sagbayan municipality, central Bohol, and the event was generated by a previously unmapped reverse fault called the "Inabanga Fault". The earthquake resulted in 209 fatalities and over 57 million USD worth of damage. The earthquake generated co-seismic landslides, most of which were related to fault structures. Unlike rainfall-induced landslides, co-seismic landslides are triggered without warning, so preparations for this type of landslide rely heavily on the identification of fracture-related slope instability. To mitigate the impacts of co-seismic landslide hazards, morpho-structural orientations of discontinuity sets were mapped using remote sensing techniques with the aid of a Digital Terrain Model (DTM) obtained in 2012. The DTM used is an IFSAR-derived image with a 5-meter pixel resolution and approximately 0.5-meter vertical accuracy. Coltop 3D software was then used to identify similar structures, including measurement of their dip and dip directions. The chosen discontinuity sets were then keyed into Matterocking software to identify potential rock slide zones due to planar or wedge discontinuities. After identifying the structurally controlled unstable slopes, the rock mass propagation extent of the possible rock slides was simulated using Conefall. Separately, a manual landslide inventory was compiled using post-earthquake satellite images and LIDAR, which identified at least 873 landslides. Of the 873 landslides in the inventory, 786 (90%) intersect the simulated structurally controlled landslide hazard areas of Bohol. The results show the potential of this method to identify co-seismic landslide hazard areas for disaster mitigation. Along with computer methods to simulate shallow landslides, and debris flow
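    Conefall-style runout estimation rests on the energy-line (cone) method: a cell is considered reachable by a rock mass if the straight line from the source cell dips more steeply than a chosen travel angle. A minimal 1-D sketch, with a synthetic profile and an assumed 20° travel angle rather than the study's parameters:

```python
import numpy as np

def cone_runout(z, dx, src, phi_deg):
    """Energy-line ('cone') method: a cell is reachable if the line from
    the source cell dips more steeply than the travel angle phi."""
    tan_phi = np.tan(np.radians(phi_deg))
    x = np.arange(len(z)) * dx
    dist = np.abs(x - x[src])
    drop = z[src] - z
    reachable = np.zeros(len(z), dtype=bool)
    away = dist > 0
    reachable[away] = drop[away] / dist[away] >= tan_phi
    return reachable

# Synthetic profile: a ~31-degree slope dropping 120 m over 200 m,
# then a flat valley floor; travel angle 20 degrees (illustrative).
dx = 10.0
z = np.concatenate([np.linspace(120.0, 0.0, 21), np.zeros(20)])
reach = cone_runout(z, dx, src=0, phi_deg=20.0)
runout_m = reach.nonzero()[0].max() * dx - 200.0
print(runout_m)   # metres of runout past the slope toe
```

    In 2-D the same test is applied over a DTM from every identified source cell, which is what makes the method fast enough to map hazard over a whole province.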

  2. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2017-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious, error-prone task. We present TopoGen, a general-purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits the design of arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  4. Tsunami simulations of mega-thrust earthquakes in the Nankai–Tonankai Trough (Japan) based on stochastic rupture scenarios

    KAUST Repository

    Goda, Katsuichiro

    2017-02-23

    In this study, earthquake rupture models for future mega-thrust earthquakes in the Nankai–Tonankai subduction zone are developed by incorporating the main characteristics of inverted source models of the 2011 Tohoku earthquake. These scenario ruptures also account for key features of the national tsunami source model for the Nankai–Tonankai earthquake by the Central Disaster Management Council of the Japanese Government. The source models capture a wide range of realistic slip distributions and kinematic rupture processes, reflecting the current best understanding of what may happen due to a future mega-earthquake in the Nankai–Tonankai Trough, and therefore are useful for conducting probabilistic tsunami hazard and risk analysis. A large suite of scenario rupture models is then used to investigate the variability of tsunami effects in coastal areas, such as offshore tsunami wave heights and onshore inundation depths, due to realistic variations in source characteristics. Such investigations are particularly valuable for tsunami hazard mapping and evacuation planning in municipalities along the Nankai–Tonankai coast.
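    Stochastic rupture scenarios of this kind are commonly built by drawing heterogeneous slip fields from a prescribed wavenumber spectrum. A minimal sketch of spectral synthesis with an anisotropic von Kármán-type spectrum; grid size, correlation lengths and Hurst exponent are arbitrary illustration values, not those of the Nankai–Tonankai source models:

```python
import numpy as np

def von_karman_slip(nx, nz, dx, ax, az, hurst, seed=0):
    """Random slip field with an anisotropic von Karman-type power
    spectrum (correlation lengths ax, az in km; Hurst exponent hurst)."""
    rng = np.random.default_rng(seed)
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    kz = 2 * np.pi * np.fft.fftfreq(nz, dx)
    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    k2 = (KX * ax) ** 2 + (KZ * az) ** 2
    psd = (1.0 + k2) ** (-(hurst + 1.0))      # von Karman spectral decay
    noise = rng.standard_normal((nx, nz))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
    field -= field.min()                      # slip must be non-negative
    return field / field.mean()               # normalize to unit mean slip

# 256 x 128 km fault plane at 2 km spacing, elongated along-strike.
slip = von_karman_slip(128, 64, 2.0, ax=40.0, az=20.0, hurst=0.8)
print(slip.shape, round(float(slip.mean()), 3))
```

    Scaling the normalized field to a target seismic moment, and repeating with many random seeds, yields the kind of scenario suite used here to probe the variability of offshore wave heights and onshore inundation.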

  5. Fault structure in the Nepal Himalaya as illuminated by aftershocks of the 2015 Mw 7.8 Gorkha earthquake recorded by the local NAMASTE network

    Science.gov (United States)

    Ghosh, A.; Mendoza, M.; LI, B.; Karplus, M. S.; Nabelek, J.; Sapkota, S. N.; Adhikari, L. B.; Klemperer, S. L.; Velasco, A. A.

    2017-12-01

    The geometry of the Main Himalayan Thrust (MHT), which accommodates the majority of the plate motion between the Indian and Eurasian plates, has been debated for a long time. Different models have been proposed, some of them significantly different from others. Obtaining a well-constrained geometry of the MHT is challenging, mainly because of the lack of high-quality data and the inherent low resolution and non-uniqueness of the models. We used a dense local seismic network, NAMASTE, to record and analyze the prolific aftershock sequence following the 2015 Mw 7.8 Gorkha earthquake, and to determine the geometry of the MHT constrained by precisely located aftershocks. We detected and located more than 15,000 aftershocks of the Gorkha earthquake using Hypoinverse and then relatively relocated them using the HypoDD algorithm. We selected about 7,000 particularly well-constrained earthquakes to analyze the geometry of the megathrust. They illuminate the fault structure in this part of the Himalaya in unprecedented detail. The MHT shows two subhorizontal planes connected by a duplex structure, which is characterized by multiple steeply dipping planes. In addition, we used four large-aperture continental-scale seismic arrays at teleseismic distances to backproject high-frequency seismic radiation, and we combined all arrays to significantly increase the resolution and detectability. We imaged the rupture propagation of the mainshock, showing complexity near the end of the rupture that might have helped arrest the rupture to the east. Furthermore, we continuously scanned teleseismic data for two weeks starting immediately after the mainshock to detect and locate aftershock activity using only the arrays. The spatial pattern of the aftershocks was similar to that of the existing global catalog based on conventional seismic networks and techniques. However, we detected more than twice as many aftershocks using the array technique compared to the global catalog, including many
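    Array back-projection of the kind used for the mainshock can be illustrated with a shift-and-stack toy problem: waveforms are advanced by the travel time predicted from each candidate source, and the candidate at which they stack most coherently is taken as the emitter. A self-contained synthetic sketch, assuming a homogeneous 5 km/s medium and a hypothetical station layout:

```python
import numpy as np

def backproject(waveforms, st_xy, grid_xy, t, v=5.0):
    """Shift-and-stack back-projection: undo each station's predicted
    travel-time delay and stack; coherent energy marks the source."""
    dt = t[1] - t[0]
    brightness = np.zeros(len(grid_xy))
    for i, g in enumerate(grid_xy):
        stack = np.zeros_like(t)
        for w, st in zip(waveforms, st_xy):
            shift = int(round(np.linalg.norm(g - st) / v / dt))
            stack[:len(t) - shift] += w[shift:]
        brightness[i] = np.abs(stack).max()
    return brightness

# Synthetic test: a Gaussian pulse radiated at t = 1 s from (10, 20) km
# in a homogeneous 5 km/s medium, recorded at four stations.
t = np.arange(0.0, 40.0, 0.01)
src = np.array([10.0, 20.0])
stations = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
waves = [np.exp(-((t - 1.0 - np.linalg.norm(src - st) / 5.0) ** 2)
                / (2 * 0.05 ** 2)) for st in stations]
grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)],
                dtype=float)
best = grid[np.argmax(backproject(waves, stations, grid, t))]
print(best)   # brightest candidate, i.e. the recovered source location
```

    Real teleseismic back-projection works the same way but with 1-D Earth travel times, band-passed array data, and sliding time windows to track the rupture front.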

  6. Imaging Simulations for the Korean VLBI Network (KVN)

    Directory of Open Access Journals (Sweden)

    Tae-Hyun Jung

    2005-03-01

    Full Text Available The Korean VLBI Network (KVN) will open a new field of research in astronomy, geodesy and earth science using three new 21 m radio telescopes, expanding our ability to look at the Universe in the millimeter regime. The imaging capability of radio interferometry depends strongly on the antenna configuration, source size, declination and the shape of the target. In this paper, imaging simulations are carried out with the KVN system configuration. Five test images were used: a point source, multiple point sources, a uniform sphere with two different sizes relative to the synthesized beam of the KVN, and a Very Large Array (VLA) image of Cygnus A. The declination for the full-time simulation was set to +60 degrees and the observation time range was -6 to +6 hours around transit. Simulations were done at 22 GHz, one of the KVN observing frequencies. All simulations and data reductions were run with the Astronomical Image Processing System (AIPS) software package. As the KVN array has a resolution of about 6 mas (milliarcseconds) at 22 GHz, when the model source is approximately the beam size or smaller, the ratio of peak intensity over RMS is about 10000:1 and 5000:1. When the model source is larger than the beam size, this ratio falls to a very low range of about 115:1 and 34:1, due to the lack of short baselines and the small number of antennas. We compared the coordinates of the model images with those of the cleaned images; the result shows near-perfect correspondence except in the case of the 12 mas uniform sphere. Therefore, the main astronomical targets for the KVN will be compact sources, for which the KVN will have excellent astrometric performance.
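    The dependence of imaging quality on antenna configuration comes down to uv coverage: as the Earth rotates, each baseline traces an ellipse in the uv plane. A sketch of Earth-rotation synthesis for a three-baseline array at δ = +60° and 22 GHz, using hypothetical baseline vectors (the real KVN station geometry differs):

```python
import numpy as np

C_LIGHT = 299792458.0
lam = C_LIGHT / 22e9            # 22 GHz, one of the KVN bands
dec = np.radians(60.0)          # source declination
H = np.radians(np.linspace(-90.0, 90.0, 241))   # hour angle -6h..+6h

# Hypothetical equatorial-frame baseline vectors in metres, standing in
# for the three KVN baselines (the real station geometry differs).
baselines = np.array([[300e3, 100e3, 50e3],
                      [-150e3, 250e3, 80e3],
                      [450e3, -150e3, -30e3]])

uv_tracks = []
for bx, by, bz in baselines:
    u = (bx * np.sin(H) + by * np.cos(H)) / lam
    v = (-bx * np.cos(H) * np.sin(dec) + by * np.sin(H) * np.sin(dec)
         + bz * np.cos(dec)) / lam
    uv_tracks.append((u, v))    # each baseline sweeps an elliptical arc

max_uv = max(np.hypot(u, v).max() for u, v in uv_tracks)
print(f"finest fringe spacing ~ {206265e3 / max_uv:.1f} mas")
```

    With only three antennas the uv plane is sparsely sampled, which is exactly why extended sources (poorly constrained by the missing short spacings) image so much worse than compact ones in these simulations.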

  7. OMG Earthquake! Can Twitter improve earthquake response?

    Science.gov (United States)

    Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.

    2009-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 to 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
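    The detection idea, a tweet rate rising far above a less-than-one-per-hour background within a minute of the shaking, can be sketched as a simple per-minute burst detector. The threshold and factor below are illustrative, not the USGS implementation:

```python
from collections import Counter

def detect_spike(timestamps, background_per_min=1 / 60, factor=100):
    """Return the first minute whose tweet count exceeds the background
    rate by `factor`, or None (a crude burst detector).
    `timestamps` are tweet times in seconds."""
    per_min = Counter(int(t // 60) for t in timestamps)
    for minute in sorted(per_min):
        if per_min[minute] > background_per_min * factor:
            return minute
    return None

# Background chatter (~1 tweet/hour), then a 150-tweet burst in minute 3.
quiet = [120.0, 2900.0]
burst = [180.0 + 0.4 * i for i in range(150)]
print(detect_spike(quiet + burst))   # → 3
```

    Plotting the geocoded tweets from the flagged minute would then give the rough felt-area map described above.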

  8. Computer simulation of the Blumlein pulse forming network

    International Nuclear Information System (INIS)

    Edwards, C.B.

    1981-03-01

    A computer simulation of the Blumlein pulse-forming network is described. The model is able to treat the cases of time-varying loads, non-zero conductor resistance, and switch closure effects as exhibited by real systems employing non-ohmic loads, such as field-emission vacuum diodes, in which the impedance is strongly time- and voltage-dependent. The application of the code to various experimental arrangements is discussed, with particular reference to the prediction of the behaviour of the output circuit of 'ELF', the electron beam generator in operation at the Rutherford Laboratory. The output from the code is compared directly with experimentally obtained voltage waveforms applied to the 'ELF' diode. (author)
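    Such pulse-forming-network simulations can be sketched as a lumped LC-ladder model. The example below, in normalized units, simulates a single charged transmission line discharging into a matched resistive load; this is a building block of the Blumlein, not the ELF circuit itself. An ideal matched line delivers a flat pulse of amplitude V0/2 and duration 2N√(LC):

```python
import numpy as np

# Lumped LC-ladder model of a single charged transmission line
# discharging through a closing switch into a matched resistive load
# (normalized units). A matched line delivers a flat pulse of amplitude
# V0/2 lasting 2*N*sqrt(L*C); a Blumlein stacks two such lines to
# recover the full charge voltage V0 across the load.
N = 100                       # number of LC sections
L = C = 1.0                   # per-section inductance and capacitance
R = np.sqrt(L / C)            # matched load (= line impedance Z0)
V0 = 1.0                      # initial charge voltage
dt = 0.05 * np.sqrt(L * C)    # time step well below the section transit

V = np.full(N, V0)            # capacitor voltages
I = np.zeros(N)               # inductor currents; I[-1] feeds the load
ts, vload = [], []
for step in range(int(3 * N * np.sqrt(L * C) / dt)):
    I[:-1] += dt / L * (V[:-1] - V[1:])
    I[-1] += dt / L * (V[-1] - R * I[-1])
    V[0] += dt / C * (-I[0])          # far end is open-circuit
    V[1:] += dt / C * (I[:-1] - I[1:])
    ts.append(step * dt)
    vload.append(R * I[-1])

ts, vload = np.array(ts), np.array(vload)
pulse = vload[(ts > 20) & (ts < 180)]   # inside the expected 200-unit pulse
print(round(float(pulse.mean()), 2))    # flat top near V0/2
```

    A time- or voltage-dependent load is handled by replacing the constant R with a function of (t, V), which is essentially what the full code does for the non-ohmic ELF diode.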

  9. Strong-motion observations of the M 7.8 Gorkha, Nepal, earthquake sequence and development of the N-shake strong-motion network

    Science.gov (United States)

    Dixit, Amod; Ringler, Adam; Sumy, Danielle F.; Cochran, Elizabeth S.; Hough, Susan E.; Martin, Stacey; Gibbons, Steven; Luetgert, James H.; Galetzka, John; Shrestha, Surya; Rajaure, Sudhir; McNamara, Daniel E.

    2015-01-01

    We present and describe strong-motion data observations from the 2015 M 7.8 Gorkha, Nepal, earthquake sequence collected using existing and new Quake-Catcher Network (QCN) and U.S. Geological Survey NetQuakes sensors located in the Kathmandu Valley. A comparison of QCN data with waveforms recorded by a conventional strong-motion (NetQuakes) instrument validates the QCN data. We present preliminary analysis of spectral accelerations, and peak ground acceleration and velocity for earthquakes up to M 7.3 from the QCN stations, as well as preliminary analysis of the mainshock recording from the NetQuakes station. We show that mainshock peak accelerations were lower than expected and conclude the Kathmandu Valley experienced a pervasively nonlinear response during the mainshock. Phase picks from the QCN and NetQuakes data are also used to improve aftershock locations. This study confirms the utility of QCN instruments to contribute to ground-motion investigations and aftershock response in regions where conventional instrumentation and open-access seismic data are limited. Initial pilot installations of QCN instruments in 2014 are now being expanded to create the Nepal–Shaking Hazard Assessment for Kathmandu and its Environment (N-SHAKE) network.

  10. Model and simulation of Krause model in dynamic open network

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

    Modeling the evolution of opinions is an effective way to reveal the formation of group consensus. This study is based on the modeling paradigm of the Hegselmann-Krause (HK) model. This paper analyzes the evolution of multi-agent opinions in dynamic open networks with member mobility. The simulation results show that when the number of agents is constant, the interval over which initial opinions are distributed affects the number of final opinions: the wider the distribution of opinions, the more opinion clusters are eventually formed. The trust threshold has a decisive effect on the number of clusters, with a negative correlation between the trust threshold and the number of opinion clusters. The higher the connectivity of the initial activity group, the more easily subjective opinions achieve rapid convergence during opinion evolution. A more open network is more conducive to unity of opinion; increasing or reducing the number of agents does not affect group consensus, but is not conducive to stability.
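    The HK bounded-confidence update at the heart of this modeling paradigm is compact: each agent moves to the mean opinion of all agents within its trust threshold ε. A minimal closed-network sketch (no member mobility), illustrating the negative relation between ε and the number of opinion clusters:

```python
import numpy as np

def hk_step(x, eps):
    """One synchronous Hegselmann-Krause update: each agent moves to
    the mean opinion of all agents within its trust threshold eps."""
    near = np.abs(x[:, None] - x[None, :]) <= eps
    return near @ x / near.sum(axis=1)

def hk_run(x, eps, tol=1e-9, max_iter=1000):
    """Iterate until opinions stop changing (HK converges in finite time)."""
    for _ in range(max_iter):
        x_new = hk_step(x, eps)
        if np.abs(x_new - x).max() < tol:
            break
        x = x_new
    return x_new

def n_clusters(x, gap=1e-6):
    """Count distinct opinion values in the converged profile."""
    xs = np.sort(x)
    return 1 + int((np.diff(xs) > gap).sum())

x0 = np.linspace(0.0, 1.0, 50)          # evenly spread initial opinions
few = n_clusters(hk_run(x0, eps=0.5))   # wide trust: opinions merge
many = n_clusters(hk_run(x0, eps=0.05)) # narrow trust: fragmentation
print(few, many)
```

    The dynamic open-network variant adds agent arrivals, departures and link changes between updates; the averaging rule itself is unchanged.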

  11. Water and soil loss from landslide deposits as a function of gravel content in the Wenchuan earthquake area, China, revealed by artificial rainfall simulations.

    Science.gov (United States)

    Gan, Fengling; He, Binghui; Wang, Tao

    2018-01-01

    A large number of landslides were triggered by the Mw 7.9 Wenchuan earthquake, which occurred on 12th May 2008. Landslides impacted extensive areas along the Mingjiang River and its tributaries. In the landslide deposits, soil and gravel fragments generally co-exist, and their proportions may influence the hydrological and erosion processes on the steep slopes of the deposit surface. Understanding the effects of mixtures of soil and gravel in landslide deposits on erosion processes is relevant for ecological reconstruction and water and soil conservation in the Wenchuan earthquake area. Based on field surveys, indoor artificial rainfall simulation experiments with three rainfall intensities (1.0, 1.5 and 2.0 mm·min-1) and three proportions of gravel (50%, 66.7% and 80%) were conducted to measure how the proportion of gravel affected soil erosion and sediment yield in landslide sediments and deposits. Where the proportion of gravel was 80%, no surface runoff was produced during the 90-minute experiment under any rainfall intensity. For the 66.7% proportion, no runoff was generated at the lowest rainfall intensity (1.0 mm·min-1). As a result of these interactions, the average sediment yield ranked as 50% > 66.7% > 80% gravel content. In addition, there was a positive correlation between runoff generation and sediment yield, with the sediment yield lagging behind the runoff generation. Together, the results demonstrate an important role of gravel in moderating the mobilization of landslide sediment produced by large earthquakes, and lay the foundation for erosion models which can provide scientific guidance for the control of landslide sediment in the Wenchuan earthquake zone, China.

  12. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    Science.gov (United States)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are presented. Long joint investigations by researchers of the Institute of Computational Mathematics and Mathematical Geophysics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms have been developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the source of a tsunami and/or earthquake and includes the possibility of solving both the direct and the inverse problem. It becomes possible to apply advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
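    The SVD analysis mentioned above leads naturally to a truncated-SVD quasi-solution: singular components below a noise-dependent cutoff are discarded before inversion, trading a small bias for a large gain in stability. A generic sketch on a deliberately ill-conditioned (Hilbert) system, unrelated to the actual TSS operators:

```python
import numpy as np

def tsvd_solve(A, b, tol=1e-6):
    """Truncated-SVD quasi-solution: discard singular components below
    tol * s_max before inverting, instead of a naive direct solve."""
    U, s, Vt = np.linalg.svd(A)
    keep = s > tol * s[0]
    return Vt[keep].T @ (U[:, keep].T @ b / s[keep])

# A deliberately ill-conditioned operator (Hilbert matrix) standing in
# for a discretized ill-posed inverse problem.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # tiny data noise

x_naive = np.linalg.solve(A, b)   # noise is amplified by ~cond(A)
x_tsvd = tsvd_solve(A, b)
err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
print(err_naive, err_tsvd)        # the quasi-solution is far more stable
```

    The decay rate of the singular values also quantifies the degree of ill-posedness, which is the diagnostic role SVD analysis plays in the approach described above.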

  13. 3D Ground-Motion Simulations for Magnitude 9 Earthquakes on the Cascadia Megathrust: Sedimentary Basin Amplification, Rupture Directivity, and Ground-Motion Variability

    Science.gov (United States)

    Frankel, A. D.; Wirth, E. A.; Marafi, N.; Vidale, J. E.; Stephenson, W. J.

    2017-12-01

    We have produced broadband (0-10 Hz) synthetic seismograms for Mw 9 earthquakes on the Cascadia subduction zone by combining synthetics from 3D finite-difference simulations at low frequencies (≤ 1 Hz) and stochastic synthetics at high frequencies (≥ 1 Hz). These synthetic ground motions are being used to evaluate building response, liquefaction, and landslides, as part of the M9 Project of the University of Washington, in collaboration with the U.S. Geological Survey. The kinematic rupture model is composed of high stress drop sub-events with Mw 8, similar to those observed in the Mw 9.0 Tohoku, Japan and Mw 8.8 Maule, Chile earthquakes, superimposed on large background slip with lower slip velocities. The 3D velocity model is based on active and passive-source seismic tomography studies, seismic refraction and reflection surveys, and geologic constraints. The Seattle basin portion of the model has been validated by simulating ground motions from local earthquakes. We have completed 50 3D simulations of Mw 9 earthquakes using a variety of hypocenters, slip distributions, sub-event locations, down-dip limits of rupture, and other parameters. For sites not in deep sedimentary basins, the response spectra of the synthetics for 0.1-6.0 s are similar, on average, to the values from the BC Hydro ground motion prediction equations (GMPE). For periods of 7-10 s, the synthetic response spectra exceed these GMPE, partially due to the shallow dip of the plate interface. We find large amplification factors of 2-5 for response spectra at periods of 1-10 s for locations in the Seattle and Tacoma basins, relative to sites outside the basins. This amplification depends on the direction of incoming waves and rupture directivity. The basin amplification is caused by surface waves generated at basin edges from incoming S-waves, as well as amplification and focusing of S-waves and surface waves by the 3D basin structure. The inter-event standard deviation of response spectral
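    The broadband combination step, with deterministic synthetics below the crossover frequency and stochastic synthetics above it, can be sketched with complementary frequency-domain filters. Here brick-wall filters at 1 Hz are applied to toy signals; production hybrid codes typically use smoother matched filters around the crossover:

```python
import numpy as np

def hybrid_broadband(det, sto, dt, fc=1.0):
    """Combine a deterministic low-frequency synthetic (det) with a
    stochastic high-frequency synthetic (sto) at crossover fc, using
    complementary brick-wall filters in the frequency domain."""
    f = np.fft.rfftfreq(len(det), dt)
    D, S = np.fft.rfft(det), np.fft.rfft(sto)
    return np.fft.irfft(np.where(f < fc, D, S), len(det))

dt = 0.01
t = np.arange(0.0, 40.0, dt)
det = np.sin(2 * np.pi * 0.2 * t)                    # long-period signal
rng = np.random.default_rng(0)
sto = rng.standard_normal(len(t)) * np.exp(-t / 10)  # decaying "coda"
bb = hybrid_broadband(det, sto, dt, fc=1.0)

# Below fc the hybrid spectrum follows det; at and above fc, sto.
F = np.fft.rfftfreq(len(t), dt)
B, D, S = (np.abs(np.fft.rfft(x)) for x in (bb, det, sto))
ok_low = np.allclose(B[F < 1.0], D[F < 1.0])
ok_high = np.allclose(B[F >= 1.0], S[F >= 1.0])
print(ok_low, ok_high)
```

    In the M9 workflow the deterministic input comes from the 3-D finite-difference simulation and the stochastic input from a point-source or subevent stochastic method, scaled to be spectrally consistent at the crossover.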

  14. Simulation technologies in networking and communications selecting the best tool for the test

    CERN Document Server

    Pathan, Al-Sakib Khan; Khan, Shafiullah

    2014-01-01

    Simulation is a widely used mechanism for validating the theoretical models of networking and communication systems. Although the claims made based on simulations are considered to be reliable, how reliable they really are is best determined with real-world implementation trials. Simulation Technologies in Networking and Communications: Selecting the Best Tool for the Test addresses the spectrum of issues regarding the different mechanisms related to simulation technologies in networking and communications fields. Focusing on the practice of simulation testing instead of the theory, it presents

  15. OpenFlow Switching Performance using Network Simulator - 3

    OpenAIRE

    Sriram Prashanth, Naguru

    2016-01-01

    Context. In today's innovative networking world, switches and protocols are expanding rapidly to cope with growing customer requirements. To meet demands for higher bandwidth and lower latency, new network paths are introduced. To reduce the load in present switching networks, new, innovative switching must be developed. These results can be achieved by Software Defined Network or Trad...

  16. Impacts of Social Network on Therapeutic Community Participation: A Follow-up Survey of Data Gathered after Ya’an Earthquake

    Science.gov (United States)

    LI, Zhichao; CHEN, Yao; SUO, Liming

    2015-01-01

    Abstract Background In recent years, natural disasters and the accompanying health risks have become more frequent, and rehabilitation work has become an important part of government performance. On one hand, social networks play an important role in participants’ therapeutic community participation and physical and mental recovery. On the other hand, therapeutic communities with widespread participation can also contribute to community recovery after disaster. Methods This paper describes a field study in an earthquake-stricken area of Ya’an. A set of 3-stage follow-up data was obtained concerning the villagers’ participation in the therapeutic community, social network status, demographic background, and other factors. The hierarchical linear model (HLM) method was used to investigate the effects of social networks on therapeutic community participation. Results First, social networks significantly impact the annual changes in therapeutic community participation. Second, there were obvious differences in education between the groups mobilized by self-organizations and by local government; however, both exerted their mobilization force through acquaintance networks. Third, villagers’ local cadre networks negatively influenced the activities of self-organized therapeutic communities, while positively influencing government-organized therapeutic activities. Conclusion This paper suggests that relevant government departments need to focus more on the reconstruction and cultivation of villagers’ social networks and social capital in the process of post-disaster recovery. These findings contribute to a better understanding of how social networks influence therapeutic community participation, and what role local government can play in post-disaster recovery and public health improvement after natural disasters. PMID:26060778

  17. Computational Approach for Improving Three-Dimensional Sub-Surface Earth Structure for Regional Earthquake Hazard Simulations in the San Francisco Bay Area

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-25

    In our Exascale Computing Project (ECP) we seek to simulate earthquake ground motions at much higher frequency than is currently possible. Previous simulations in the SFBA were limited to 0.5-1 Hz or lower (Aagaard et al. 2008, 2010), while we have recently simulated the response up to 5 Hz. In order to improve confidence in simulated ground motions, we must accurately represent the three-dimensional (3D) sub-surface material properties that govern seismic wave propagation over a broad region. We are currently focusing on the San Francisco Bay Area (SFBA) with a Cartesian domain of size 120 x 80 x 35 km, but this area will be expanded to cover a larger domain. Currently, the United States Geological Survey (USGS) has a 3D model of the SFBA for seismic simulations. However, this model suffers from two serious shortcomings relative to our application: 1) it does not fit most of the available low-frequency (< 1 Hz) seismic waveforms from moderate (magnitude M 3.5-5.0) earthquakes; and 2) it is represented with much lower resolution than necessary for the high-frequency simulations (> 5 Hz) we seek to perform. The current model will serve as a starting model for full-waveform tomography based on 3D sensitivity kernels. This report serves as the deliverable for our ECP FY2017 Quarter 4 milestone to FY 2018, “Computational approach to developing model updates”. We summarize the current state of 3D seismic simulations in the SFBA and demonstrate the performance of the USGS 3D model for a few selected paths. We show the available open-source waveform data sets for model updates, based on moderate earthquakes recorded in the region. We present a plan for improving the 3D model utilizing the available data and further development of our SW4 application. We project how the model could be improved and present options for further improvements focused on the shallow geotechnical layers using dense passive recordings of ambient and human-induced noise.

  18. The Virtual Brain: a simulator of primate brain network dynamics.

    Science.gov (United States)

    Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2013-01-01

    We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals, including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components, as well as potential neuroscience applications.
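TVB's neural-mass models are far richer than can be shown here, but the basic pattern of simulating coupled node dynamics over a connectivity matrix can be sketched with simple phase oscillators (a generic Kuramoto-style toy; the function name, weights and parameters below are illustrative assumptions, not TVB's actual API):

```python
import math
import random

def simulate_network(weights, steps=2000, dt=1e-3, k=1.0, seed=0):
    """Toy whole-brain-style simulation: Kuramoto phase oscillators
    coupled through a connectivity matrix. Returns the order
    parameter (1.0 = fully synchronised network)."""
    rng = random.Random(seed)
    n = len(weights)
    omega = [rng.uniform(8.0, 12.0) for _ in range(n)]        # node frequencies (rad/s)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # phase velocity = own frequency + coupling through the connectome
        dtheta = [
            omega[i] + k * sum(
                weights[i][j] * math.sin(theta[j] - theta[i])
                for j in range(n)
            )
            for i in range(n)
        ]
        theta = [(t + dt * d) % (2.0 * math.pi) for t, d in zip(theta, dtheta)]
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)
```

With strong coupling the nodes phase-lock and the order parameter approaches 1; with zero coupling the phases drift independently.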

  20. Data Delivery Latency Improvements And First Steps Towards The Distributed Computing Of The Caltech/USGS Southern California Seismic Network Earthquake Early Warning System

    Science.gov (United States)

    Stubailo, I.; Watkins, M.; Devora, A.; Bhadha, R. J.; Hauksson, E.; Thomas, V. I.

    2016-12-01

    The USGS/Caltech Southern California Seismic Network (SCSN) is a modern digital ground-motion seismic network. It develops and maintains Earthquake Early Warning (EEW) data collection and delivery systems in southern California, as well as real-time EEW algorithms. Recently, Behr et al. (SRL, 2016) analyzed data from several regional seismic networks deployed around the globe and showed that the SCSN was the network with the smallest data communication delays, or latency. Since then, we have further reduced the telemetry delays for many of the 330 current sites. The latency has been reduced on average from 2-6 seconds to 0.4 seconds by tuning the datalogger parameters and/or deploying software upgrades. Recognizing latency as one of the crucial parameters in EEW, we have started archiving the per-packet latencies in miniSEED format for all participating sites, in a similar way to what is traditionally done for the seismic waveform data. The archived latency values enable us to understand and document long-term changes in the performance of the telemetry links. We can also retroactively investigate how latent the waveform data were during a specific event or time period. In addition, the near-real-time latency values are useful for monitoring and displaying real-time station latency, in particular to compare different telemetry technologies. A future step to reduce the latency is to deploy the algorithms on the dataloggers at the seismic stations and transmit either the final solutions or intermediate parameters to a central processing center. To implement this approach, we are developing a stand-alone version of the OnSite algorithm to run on the dataloggers in the field. This will increase the resiliency of the SCSN to potential telemetry restrictions in the immediate aftermath of a large earthquake, either by allowing local alarming by a single station, or by permitting transmission of lightweight parametric information rather than continuous waveform data.
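The per-packet latency bookkeeping described here amounts to differencing the wall-clock arrival time against the timestamp of the newest sample in each packet, then summarizing those values for archiving and display. A minimal sketch with hypothetical names (this is not SCSN's actual code, and real packets would come from the datalogger telemetry stream):

```python
import time

class LatencyTracker:
    """Track per-packet telemetry latency: wall-clock arrival time
    minus the timestamp of the newest sample in the packet."""

    def __init__(self):
        self.samples = []

    def record(self, packet_end_time, receive_time=None):
        """Record one packet; times are epoch seconds."""
        if receive_time is None:
            receive_time = time.time()
        latency = receive_time - packet_end_time
        self.samples.append(latency)
        return latency

    def stats(self):
        """Summary figures suitable for long-term archiving
        and real-time station-latency displays."""
        n = len(self.samples)
        return {
            "count": n,
            "mean": sum(self.samples) / n,
            "max": max(self.samples),
        }
```

Archiving such summaries alongside the waveform data is what makes it possible to ask, retroactively, how latent the data were during a particular event.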

  1. Future planning: default network activity couples with frontoparietal control network and reward-processing regions during process and outcome simulations.

    Science.gov (United States)

    Gerlach, Kathy D; Spreng, R Nathan; Madore, Kevin P; Schacter, Daniel L

    2014-12-01

    We spend much of our daily lives imagining how we can reach future goals and what will happen when we attain them. Despite the prevalence of such goal-directed simulations, neuroimaging studies on planning have mainly focused on executive processes in the frontal lobe. This experiment examined the neural basis of process simulations, during which participants imagined themselves going through steps toward attaining a goal, and outcome simulations, during which participants imagined events they associated with achieving a goal. In the scanner, participants engaged in these simulation tasks and an odd/even control task. We hypothesized that process simulations would recruit default and frontoparietal control network regions, and that outcome simulations, which allow us to anticipate the affective consequences of achieving goals, would recruit default and reward-processing regions. Our analysis of brain activity that covaried with process and outcome simulations confirmed these hypotheses. A functional connectivity analysis with posterior cingulate, dorsolateral prefrontal cortex and anterior inferior parietal lobule seeds showed that their activity was correlated during process simulations and associated with a distributed network of default and frontoparietal control network regions. During outcome simulations, medial prefrontal cortex and amygdala seeds covaried together and formed a functional network with default and reward-processing regions.

  2. The computer simulation of the resonant network for the B-factory model power supply

    International Nuclear Information System (INIS)

    Zhou, W.; Endo, K.

    1993-07-01

    A high-repetition model power supply and its resonant magnet network are simulated on a computer in order to check and improve the design of the power supply for the B-factory booster. We focus on the transient behavior of the power supply and the resonant magnet network. The results of the simulation are given. (author)
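A transient analysis of this kind boils down to integrating the circuit's state equations through time. As a minimal sketch, the step response of a single series RLC mesh can stand in for one loop of a resonant magnet network (all component values below are illustrative assumptions, not the B-factory design):

```python
def rlc_transient(v_src, r, l, c, dt=1e-6, t_end=0.02):
    """Semi-implicit Euler integration of a series RLC circuit's
    step response; returns the capacitor voltage trace."""
    i, v_c = 0.0, 0.0
    trace = []
    steps = round(t_end / dt)
    for _ in range(steps):
        di = (v_src - r * i - v_c) / l   # KVL around the loop
        i += dt * di                      # update inductor current first
        v_c += dt * i / c                 # then charge the capacitor
        trace.append(v_c)
    return trace
```

For a lightly damped mesh (e.g. 100 V source, 0.5 Ω, 10 mH, 100 µF) the capacitor voltage rings up past the source voltage before settling, which is exactly the kind of transient behavior a power-supply design check must bound.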

  3. ns-2 extension to simulate localization system in wireless sensor networks

    CSIR Research Space (South Africa)

    Abu-Mahfouz, Adnan M

    2011-09-01

    The ns-2 network simulator is one of the tools most widely used by researchers to investigate the characteristics of wireless sensor networks. Academic papers focus on results and rarely include details of how ns-2 simulations are implemented...

  4. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  5. Modelisation et simulation d'un PON (Passive Optical Network) base ...

    African Journals Online (AJOL)

    English Title: Modeling and simulation of a PON (Passive Optical Network) based on hybrid WDM/TDM technology. English Abstract: This work is part of a design effort for a model combining WDM and TDM multiplexing in an optical network of the PON (Passive Optical Network) type, in order to satisfy the high bit ...

  6. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the performance degradation of the Cluster II battery system. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the sun and the earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the future best charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five silver-cadmium batteries onboard Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg-Marquardt algorithm.

  7. First results from the new K2-network in Romania: Source- and site-parameters of the April 28, 1999 intermediate depth Vrancea earthquake

    International Nuclear Information System (INIS)

    Bonjer, K.-P.; Oncescu, L.; Rizescu, M.; Enescu, D.; Radulian, M.; Ionescu, C.; Moldoveanu, T.; Lungu, D.; Stempniewski, L.

    2002-01-01

    In the past five years the Collaborative Research Center 461 'Strong Earthquakes' of Karlsruhe University and the National Institute for Earth Physics, Bucharest-Magurele, have jointly installed a network of 36 free-field stations in Romania. The stations are equipped with Kinemetrics K2 dataloggers, three-component episensors and a GPS timing system. Most stations have velocity transducers in addition. The network is centered around the Vrancea focal zone and covers an area with a diameter of up to 500 km. Nine stations of the net are deployed in the Romanian capital Bucharest in nearly free-field conditions. Furthermore, at the Building Research Institute (INCERC) a test building and a borehole are instrumented with K2 systems. So far the top floor of a typical 10-story building has been instrumented as well. The Vrancea earthquake of April 28, 1999 was recorded by 28 stations of the new strong-motion network. Although the moment magnitude was Mw = 5.3, no damage occurred, due to the great focal depth of 159 km. The fault-plane solution shows a nearly pure thrust mechanism (strike = 171°, dip = 53°, rake = 106°), which is typical for most of the Vrancea intermediate-depth earthquakes. The strike of the B-axis is within the range of those of the background seismicity but rotated counterclockwise by about 50° in comparison to those of the big events. Due to the relatively dense station distribution, the lateral variation of the pattern of the peak ground motion could be well constrained. The PGA is very low (less than 5 cm/s²) in Transylvania and in the mountainous areas of the Carpathians, as well as in the eastern part of the Dobrogea/coastal range of the Black Sea, whereas values of around 40 cm/s² are found in a strip about 80 km wide, located in the outer part of the Carpathian arc and extending from Bucharest about 200 km towards the NE. Details of the distribution in Bucharest will be discussed. (authors)

  8. Modelling Altitude Information in Two-Dimensional Traffic Networks for Electric Mobility Simulation

    Directory of Open Access Journals (Sweden)

    Diogo Santos

    2016-06-01

    Elevation data is important for electric vehicle simulation. However, traffic simulators are often two-dimensional and do not offer the capability of modelling urban networks taking elevation into account. Specifically, SUMO (Simulation of Urban Mobility), a popular microscopic traffic simulator, relies on networks previously modelled with elevation data in order to provide this information during simulations. This work tackles the problem of adding elevation data to urban network models, particularly for the case of the Porto urban network in Portugal. With this goal in mind, a comparison between different altitude information retrieval approaches is made, and a simple tool to annotate network models with altitude data is proposed. The work starts by describing the methodological approach followed during research and development, then describes and analyses its main findings. This description includes an in-depth explanation of the proposed tool. Lastly, this work reviews some work related to the subject.
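The core of such an annotation tool is a per-node elevation lookup against some altitude source. A minimal sketch using nearest-neighbour lookup over a set of sampled (x, y, z) points as a stand-in for querying a real digital elevation model (the function and data layout are illustrative assumptions, not the paper's tool):

```python
def annotate_elevations(nodes, elevation_samples):
    """Attach a z value to each 2-D network node by nearest-neighbour
    lookup in a list of (x, y, z) elevation samples."""
    annotated = {}
    for name, (x, y) in nodes.items():
        # pick the sample with the smallest squared planar distance
        sx, sy, sz = min(
            elevation_samples,
            key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2,
        )
        annotated[name] = (x, y, sz)
    return annotated
```

A production tool would interpolate between DEM cells rather than snapping to the nearest sample, but the node-by-node annotation pattern is the same.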

  9. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, we seek an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; moreover, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL⁻²), the same computational complexity as in an exact method but with a smaller constant. We provide numerical examples to show our results.
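The exact-path building block of the hybrid scheme, the stochastic simulation algorithm (SSA), can be sketched as follows (a generic Gillespie implementation; the reaction encoding is an illustrative assumption, not the authors' code):

```python
import random

def ssa(x0, reactions, rates, t_max, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of a reaction network.

    x0        -- initial copy numbers, e.g. {"A": 100}
    reactions -- list of (propensity_fn, state_change) pairs
    rates     -- rate constant per reaction channel
    """
    rng = random.Random(seed)
    x = dict(x0)
    t = 0.0
    while True:
        props = [rate * prop(x) for (prop, _), rate in zip(reactions, rates)]
        total = sum(props)
        if total == 0.0:
            return x                     # no reaction can fire
        t += rng.expovariate(total)      # exponential waiting time
        if t > t_max:
            return x                     # stop at the observation time T
        # choose a channel with probability proportional to its propensity
        r = rng.uniform(0.0, total)
        for p, (_, change) in zip(props, reactions):
            r -= p
            if r <= 0.0:
                for species, delta in change.items():
                    x[species] += delta
                break
```

For a simple decay A → ∅ with rate 0.5, `ssa({"A": 1000}, [(lambda s: s["A"], {"A": -1})], rates=[0.5], t_max=1.0)` returns a state whose mean over many seeds is about 1000·e^(−0.5); the tau-leap method trades this exactness for larger, approximate time steps, and the multilevel strategy couples the two.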

  10. Analog earthquakes

    International Nuclear Information System (INIS)

    Hofmann, R.B.

    1995-01-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

  11. Earthquakes Sources Parameter Estimation of 20080917 and 20081114 Near Semangko Fault, Sumatra Using Three Components of Local Waveform Recorded by IA Network Station

    Directory of Open Access Journals (Sweden)

    Madlazim

    2012-04-01

    The 17/09/2008 22:04:80 UTC and 14/11/2008 00:27:31.70 earthquakes near the Semangko fault were analyzed to identify the fault planes. The two events were relocated to gain physical insight into the hypocenter uncertainty. The data used to determine the source parameters of both earthquakes were three-component local waveforms recorded by Geofon broadband IA network stations: MDSI, LWLI, BLSI and RBSI for the event of 17/09/2008, and MDSI, LWLI, BLSI and KSI for the event of 14/11/2008. The distance from the epicenter to every station was less than 5°. The moment tensor solutions of the two events were analyzed simultaneously with the determination of the centroid position. The simultaneous analysis, covering the hypocenter positions, centroid positions and nodal planes of the two events, indicated the Semangko fault planes. Considering that the Semangko fault zone is an area of high seismicity, the identification of the seismic fault is important for seismic hazard investigation in the region.

  12. Analysis simulation of tectonic earthquake impact to the lifetime of radioactive waste container and equivalent dose rate predication in Yucca Mountain geologic repository, Nevada test site, USA

    International Nuclear Information System (INIS)

    Ko, I.S.; Imardjoko, Y.U.; Karnawati, Dwikorita

    2003-01-01

    The US policy of not recycling spent nuclear fuel brings with it the consequence of providing a nuclear waste repository site; Yucca Mountain in Nevada, USA, is considered the proper one. High-level radioactive waste is to be placed into containers which will then be buried in three-hundred-meter-deep underground tunnels. Tectonic earthquakes are the main factor causing container damage. GoldSim version 6.04.007 simulates the mechanism of container damage due to a great devastating impact load, the collapse of the tunnels. The radionuclide inventories included are U-234, C-14, Tc-99, I-129, Se-79, Pa-231, Np-237, Pu-242, and Pu-239. The simulation was carried out over a 100,000-year time span. The research goals are: 1) estimating the tunnels' stand-up time, and 2) predicting the equivalent dose rate contributed by the included radionuclides to humans through intake of radioactively polluted drinking water. (author)

  13. Environmental regulation in a network of simulated microbial ecosystems.

    Science.gov (United States)

    Williams, Hywel T P; Lenton, Timothy M

    2008-07-29

    The Earth possesses a number of regulatory feedback mechanisms involving life. In the absence of a population of competing biospheres, it has proved hard to find a robust evolutionary mechanism that would generate environmental regulation. It has been suggested that regulation must require altruistic environmental alterations by organisms and, therefore, would be evolutionarily unstable. This need not be the case if organisms alter the environment as a selectively neutral by-product of their metabolism, as in the majority of biogeochemical reactions, but a question then arises: Why should the combined by-product effects of the biota have a stabilizing, rather than destabilizing, influence on the environment? Under certain conditions, selection acting above the level of the individual can be an effective adaptive force. Here we present an evolutionary simulation model in which environmental regulation involving higher-level selection robustly emerges in a network of interconnected microbial ecosystems. Spatial structure creates conditions for a limited form of higher-level selection to act on the collective environment-altering properties of local communities. Local communities that improve their environmental conditions achieve larger populations and are better colonizers of available space, whereas local communities that degrade their environment shrink and become susceptible to invasion. The spread of environment-improving communities alters the global environment toward the optimal conditions for growth and tends to regulate against external perturbations. This work suggests a mechanism for environmental regulation that is consistent with evolutionary theory.

  14. BWR-plant simulator and its neural network companion with programming under MATLAB environment

    International Nuclear Information System (INIS)

    Ghenniwa, Fatma Suleiman

    2008-01-01

    Stand-alone nuclear power plant simulators, as well as building-block-based nuclear power simulators, are available from different companies throughout the world. In this work, a review of both types of simulators has been carried out, along with a survey of possible authoring tools for simulator development. It was decided, in this research, to develop a prototype simulator based on component building blocks. Furthermore, the authoring tool (MATLAB software) was selected for programming; it has all the basic tools required for simulator development, similar to those offered by specialized simulator packages such as MMS, APROS and others. Simulations of individual components, as well as of integrated components for power plant simulation, have been demonstrated. A preliminary neural-network reactor model, as part of a prepared neural-network module library, has been used to demonstrate module order shuffling during simulation. The developed component library can be refined and extended for further development. (author)

  15. Developing an Agent-Based Simulation System for Post-Earthquake Operations in Uncertainty Conditions: A Proposed Method for Collaboration among Agents

    Directory of Open Access Journals (Sweden)

    Navid Hooshangi

    2018-01-01

    Agent-based modeling is a promising approach for developing simulation tools for natural hazards in different areas, such as urban search and rescue (USAR) operations. The present study aimed to develop a dynamic agent-based simulation model of post-earthquake USAR operations using a geospatial information system and multi-agent systems (GIS and MAS, respectively). We also propose an approach for dynamic task allocation and for establishing collaboration among agents based on the contract net protocol (CNP) and the interval-based Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), which considers uncertainty in natural hazard information during agents’ decision-making. The decision-making weights were calculated by the analytic hierarchy process (AHP). In order to implement the system, the earthquake environment was simulated, and building damage and the number of injuries were calculated for Tehran’s District 3: 23%, 37%, 24% and 16% of buildings were in the slight, moderate, extensive and complete vulnerability classes, respectively, and the number of injured persons was calculated to be 17,238. Numerical results in 27 scenarios showed that the proposed method is more accurate than the CNP method in terms of USAR operational time (at least a 13% decrease) and the number of human fatalities (at least a 9% decrease). In the interval uncertainty analysis of our proposed simulated system, the lower and upper bounds of the uncertain responses are evaluated. The overall results showed that considering uncertainty in task allocation can be highly advantageous in a disaster environment. Such systems can be used to manage and prepare for natural hazards.
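The ranking step at the heart of such task allocation can be illustrated with a plain (non-interval) TOPSIS scoring function. The rescue-team example values below are invented, and the paper's interval-based variant and its CNP integration are not shown:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  -- rows: alternatives, cols: criteria
    weights -- criterion weights (e.g. from AHP), summing to 1
    benefit -- per criterion, True if higher values are better
    """
    ncols = len(weights)
    # vector-normalise each column, then apply the criterion weight
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, anti)    # distance to the anti-ideal
        scores.append(d_neg / (d_pos + d_neg))
    return scores                        # closer to 1 = closer to ideal
```

For three hypothetical rescue teams scored on travel distance (cost) and capability (benefit), e.g. `topsis([[2.0, 8.0], [5.0, 9.0], [1.0, 4.0]], [0.5, 0.5], [False, True])`, the nearby, capable team ranks first; the interval-based variant in the paper replaces each crisp entry with an interval to carry uncertainty through the ranking.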

  16. A Hybrid Communications Network Simulation-Independent Toolkit

    National Research Council Canada - National Science Library

    Dines, David M

    2008-01-01

    Evolving a grand design of the enabling network will require a flexible evaluation platform to try and select the right combination of network strategies and protocols in the realms of topology control and routing...

  17. BioNessie(G) - a grid-enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X; Jiang, J; Ajayi, O; Gu, X; Gilbert, D; Sinnott, R

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scal...

  18. Modeling and Simulation of Handover Scheme in Integrated EPON-WiMAX Networks

    DEFF Research Database (Denmark)

    Yan, Ying; Dittmann, Lars

    2011-01-01

    In this paper, we tackle the seamless handover problem in integrated optical wireless networks. Our model applies to the converged network of EPON and WiMAX, and a mobility-aware signaling protocol is proposed. The proposed handover scheme, the Integrated Mobility Management Scheme (IMMS), is realized by enhancing the traditional MPCP signaling protocol, which cooperatively collects mobility information from the front-end wireless network and makes centralized bandwidth allocation decisions in the backhaul optical network. The integrated network architecture and the joint handover scheme are simulated using OPNET Modeler. Results validate the protocol, i.e., the integrated handover scheme achieves better network performance.

  19. Experimental Evaluation of Simulation Abstractions for Wireless Sensor Network MAC Protocols

    NARCIS (Netherlands)

    Halkes, G.P.; Langendoen, K.G.

    2010-01-01

    The evaluation of MAC protocols for Wireless Sensor Networks (WSNs) is often performed through simulation. These simulations necessarily abstract away from reality in many ways. However, the impact of these abstractions on the results of the simulations has received only limited attention. Moreover,

  20. How Crime Spreads Through Imitation in Social Networks: A Simulation Model

    Science.gov (United States)

    Punzo, Valentina

    In this chapter an agent-based model for investigating how crime spreads through social networks is presented. Some theoretical issues related to the sociological explanation of crime are tested through simulation. The agent-based simulation allows us to investigate the relative impact of some mechanisms of social influence on crime, within a set of controlled simulated experiments.
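The imitation mechanism can be illustrated with a simple threshold-contagion sketch on a social network, in which an agent adopts criminal behaviour once the offending share of its neighbours exceeds a threshold (a generic contagion model with invented parameters, not the chapter's exact specification):

```python
import random

def imitation_dynamics(adjacency, initial_offenders, rounds=50,
                       threshold=0.5, seed=0):
    """Threshold imitation on a social network.

    adjacency         -- list of neighbour lists, one per agent
    initial_offenders -- set of agents who start as offenders
    """
    rng = random.Random(seed)
    state = set(initial_offenders)
    n = len(adjacency)
    for _ in range(rounds):
        order = list(range(n))
        rng.shuffle(order)               # randomise update order each round
        changed = False
        for agent in order:
            neigh = adjacency[agent]
            if not neigh or agent in state:
                continue
            share = sum(1 for v in neigh if v in state) / len(neigh)
            if share > threshold:        # imitate the neighbourhood majority
                state.add(agent)
                changed = True
        if not changed:
            break                        # fixed point reached
    return state
```

Running controlled experiments then amounts to varying the network topology, the threshold, and the seeding of initial offenders and comparing the resulting diffusion.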

  1. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular, spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST and NEURON, as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
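    The trade-off the abstract describes (fine temporal resolution vs. long simulated time) can be made concrete with a toy leaky integrate-and-fire network stepped at 0.1 ms, timing wall-clock against simulated biological time. This is an illustrative sketch, not the Auryn/NEST benchmark; all parameter values are assumptions.

```python
import random, time

def simulate_lif(n=200, steps=1000, dt=1e-4, p_conn=0.1, w=0.5e-3):
    # Leaky integrate-and-fire network, forward-Euler at 0.1 ms resolution
    # (the temporal grain needed to resolve typical STDP windows).
    tau, v_rest, v_thresh, v_reset = 20e-3, 0.0, 15e-3, 0.0
    rng = random.Random(42)
    # random sparse connectivity: each neuron projects to ~p_conn*n targets
    targets = [[j for j in range(n) if j != i and rng.random() < p_conn]
               for i in range(n)]
    v = [rng.uniform(0.0, v_thresh) for _ in range(n)]
    i_ext = 16e-3 / tau          # constant drive pushing v toward 16 mV
    spikes = 0
    for _ in range(steps):
        fired = [i for i in range(n) if v[i] >= v_thresh]
        spikes += len(fired)
        for i in fired:
            v[i] = v_reset
        inputs = [0.0] * n       # deliver synaptic kicks from this step's spikes
        for i in fired:
            for j in targets[i]:
                inputs[j] += w
        for i in range(n):
            v[i] += dt * ((v_rest - v[i]) / tau + i_ext) + inputs[i]
    return spikes

start = time.perf_counter()
n_spikes = simulate_lif()
wall = time.perf_counter() - start
print(f"simulated 0.1 s of biological time in {wall:.3f} s wall clock, {n_spikes} spikes")
```

    Even this tiny pure-Python network illustrates the paper's point: the cost per simulated second is dominated by the number of 0.1 ms steps, so hours of plasticity at this resolution require simulators far faster than real time.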

  2. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group, in collaboration with the University of Buenos Aires, in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real-time, achieving a rejection factor on the order of 40000x, and has real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a “TCP Incast”-type network pathology which TCP cannot handle efficiently. A credits system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios in the real network. Network Simulation with Hybrid Flows: speedups and accuracy, combined • For intensive network traffic, discrete event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid Flow simul...
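    The credits mechanism mentioned in the abstract (bounding the number of outstanding queries to avoid incast bursts) can be sketched as a simple gate. The class name and structure below are illustrative, not the ATLAS implementation.

```python
import collections

class CreditGate:
    """Credit-based request limiter: a requester may have at most `credits`
    queries in flight; further requests queue until a response returns a
    credit. (Hypothetical sketch of the scheme the poster describes.)"""
    def __init__(self, credits):
        self.free = credits
        self.pending = collections.deque()
        self.in_flight = 0

    def request(self, query):
        if self.free > 0:
            self.free -= 1
            self.in_flight += 1
            return query          # sent immediately
        self.pending.append(query)
        return None               # held back: avoids an incast burst

    def response(self):
        # a reply arrives: return the credit, then release one queued query
        self.in_flight -= 1
        self.free += 1
        if self.pending:
            return self.request(self.pending.popleft())
        return None

gate = CreditGate(credits=2)
sent = [gate.request(q) for q in ("q1", "q2", "q3", "q4")]
print(sent)      # ['q1', 'q2', None, None] -- only 2 queries in flight at once
released = gate.response()
print(released)  # 'q3'
```

    Capping in-flight queries this way trades a little added queueing delay at the requester for bounded fan-in at the bottleneck switch, which is exactly the incast pathology the credits system targets.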

  3. NCC simulation model. Phase 2: Simulating the operations of the Network Control Center and NCC message manual

    Science.gov (United States)

    Benjamin, Norman M.; Gill, Tepper; Charles, Mary

    1994-01-01

    The network control center (NCC) provides scheduling, monitoring, and control of services to the NASA space network. The space network provides tracking and data acquisition services to many low-earth-orbiting spacecraft. This report describes the second phase in the development of simulation models for the NCC. Phase one concentrated on the computer systems and interconnecting network. Phase two focuses on the implementation of the network message dialogs and the resources controlled by the NCC. Performance measures were developed along with selected indicators of the NCC's operational effectiveness. The NCC performance indicators were defined in terms of the following: (1) transfer rate, (2) network delay, (3) channel establishment time, (4) line turn-around time, (5) availability, (6) reliability, (7) accuracy, (8) maintainability, and (9) security. An NCC internal and external message manual is appended to this report.

  4. Sensitivity of broad-band ground-motion simulations to earthquake source and Earth structure variations: an application to the Messina Straits (Italy)

    KAUST Repository

    Imperatori, W.

    2012-03-01

    In this paper, we investigate ground-motion variability due to different faulting approximations and crustal-model parametrizations in the Messina Straits area (Southern Italy). Considering three 1-D velocity models proposed for this region and a total of 72 different source realizations, we compute broad-band (0-10 Hz) synthetics for Mw 7.0 events using a fault plane geometry recently proposed. We explore source complexity in terms of classic kinematic (constant rise-time and rupture speed) and pseudo-dynamic models (variable rise-time and rupture speed). Heterogeneous slip distributions are generated using a Von Karman autocorrelation function. Rise-time variability is related to slip, whereas rupture speed variations are connected to static stress drop. Boxcar, triangle and modified Yoffe are the adopted source time functions. We find that ground-motion variability associated with differences in crustal models is constant and becomes important at intermediate and long periods. On the other hand, source-induced ground-motion variability is negligible at long periods and strong at intermediate-short periods. Using our source-modelling approach and the three different 1-D structural models, we investigate shaking levels for the 1908 Mw 7.1 Messina earthquake, adopting a recently proposed model for fault geometry and final slip. Our simulations suggest that peak levels in Messina and Reggio Calabria must have reached 0.6-0.7 g during this earthquake.

  5. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2017-05-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  6. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  7. Earthquake Early Warning: Real-time Testing of an On-site Method Using Waveform Data from the Southern California Seismic Network

    Science.gov (United States)

    Solanki, K.; Hauksson, E.; Kanamori, H.; Wu, Y.; Heaton, T.; Boese, M.

    2007-12-01

    We have implemented an on-site early warning algorithm using the infrastructure of the Caltech/USGS Southern California Seismic Network (SCSN). We are evaluating the real-time performance of the software system and the algorithm for rapid assessment of earthquakes. In addition, we are interested in understanding what parts of the SCSN need to be improved to make early warning practical. Our EEW processing system is composed of many independent programs that process waveforms in real-time. The codes were generated by using a software framework. The Pd (maximum displacement amplitude of the P wave during the first 3 sec) and Tau-c (a period parameter during the first 3 sec) values determined during the EEW processing are being forwarded to the California Integrated Seismic Network (CISN) web page for independent evaluation of the results. The on-site algorithm measures the amplitude of the P-wave (Pd) and the frequency content of the P-wave during the first three seconds (Tau-c). The Pd and the Tau-c values make it possible to discriminate between a variety of events such as large distant events, nearby small events, and potentially damaging nearby events. The Pd can be used to infer the expected maximum ground shaking. The method relies on data from a single station although it will become more reliable if readings from several stations are associated. To eliminate false triggers from stations with high background noise levels, we have created a per-station Pd threshold configuration for the Pd/Tau-c algorithm. To determine appropriate values for the Pd threshold we calculate Pd thresholds for stations based on the information from the EEW logs. We have operated our EEW test system for about a year and recorded numerous earthquakes in the magnitude range from M3 to M5. Two recent examples are a M4.5 earthquake near Chatsworth and a M4.7 earthquake near Elsinore.
In both cases, the Pd and Tau-c parameters were determined successfully within 10 to 20 sec of the arrival of the
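    The two single-station parameters the abstract describes are straightforward to compute from a displacement record: Pd is the peak absolute displacement in the first 3 s, and Tau-c is commonly defined (e.g. Kanamori, 2005) as 2π·sqrt(∫u²dt / ∫u̇²dt) over the same window. The sketch below uses a synthetic sinusoid for which Tau-c should come out near 1/f; the sampling rate and amplitudes are assumptions.

```python
import math

def pd_tauc(u, dt):
    """Pd and tau_c from the first 3 s of a displacement record u (m):
      Pd    = max |u(t)|, 0 <= t <= 3 s
      tau_c = 2*pi * sqrt( int u^2 dt / int u'^2 dt )
    """
    n = int(3.0 / dt)
    u = u[:n]
    # velocity by central differences
    du = [(u[i + 1] - u[i - 1]) / (2 * dt) for i in range(1, len(u) - 1)]
    num = sum(x * x for x in u) * dt      # integral of displacement squared
    den = sum(x * x for x in du) * dt     # integral of velocity squared
    pd = max(abs(x) for x in u)
    tau_c = 2 * math.pi * math.sqrt(num / den)
    return pd, tau_c

# synthetic check: u(t) = A*sin(2*pi*f*t) gives Pd ~= A and tau_c ~= 1/f
dt, f, A = 0.005, 2.0, 1e-3
u = [A * math.sin(2 * math.pi * f * i * dt) for i in range(int(3.0 / dt) + 1)]
pd, tau_c = pd_tauc(u, dt)
print(pd, tau_c)  # ~1e-3 m, ~0.5 s
```

    A real pipeline would first band-pass and double-integrate the raw accelerogram, but the discriminating idea is the same: large Pd with large Tau-c flags a nearby, large event.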

  8. Hazard-to-Risk: High-Performance Computing Simulations of Large Earthquake Ground Motions and Building Damage in the Near-Fault Region

    Science.gov (United States)

    Miah, M.; Rodgers, A. J.; McCallen, D.; Petersson, N. A.; Pitarka, A.

    2017-12-01

    We are running high-performance computing (HPC) simulations of ground motions for large (magnitude M=6.5-7.0) earthquakes in the near-fault region, along with the response of steel moment frame buildings throughout the near-fault domain. For ground motions, we are using SW4, a fourth-order summation-by-parts finite difference time-domain code running on 10,000-100,000's of cores. Earthquake ruptures are generated using the Graves and Pitarka (2017) method. We validated ground-motion intensity measurements against Ground Motion Prediction Equations. We considered two events (M=6.5 and 7.0) for vertical strike-slip ruptures with three-dimensional (3D) basin structures, including stochastic heterogeneity. We have also considered M7.0 scenarios for a Hayward Fault rupture, which affects the San Francisco Bay Area and northern California, using both 1D and 3D earth structure. Dynamic, inelastic response of canonical buildings is computed with NEVADA, a nonlinear, finite-deformation finite element code. Canonical buildings include 3-, 9-, 20- and 40-story steel moment frame buildings. Damage potential is tracked by the peak inter-story drift (PID) ratio, which measures the maximum displacement between adjacent floors of the building and is strongly correlated with damage. PID ratios greater than 1.0 generally indicate nonlinear response and permanent deformation of the structure. We also track roof displacement to identify permanent deformation. PID (damage) for a given earthquake scenario (M, slip distribution, hypocenter) is spatially mapped throughout the SW4 domain with 1-2 km resolution. Results show that in the near-fault region building damage is correlated with peak ground velocity (PGV), while farther away (> 20 km) it is better correlated with peak ground acceleration (PGA). We also show how simulated ground motions have peaks in the response spectra that shift to longer periods for larger magnitude events and for locations of forward directivity, as has been reported by

  9. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  10. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues.

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-06

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks' statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network steadily changes, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.
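    The random-vs-selective attack experiment the abstract describes can be reproduced in miniature: grow a scale-free-like graph by preferential attachment, remove an equal number of nodes either at random or by highest degree, and compare the surviving largest connected component. This is an illustrative sketch, not the paper's network or indices; sizes and seeds are assumptions.

```python
import random
from collections import deque

def ba_graph(n=300, m=3, seed=1):
    # Preferential attachment: each new node links to m existing nodes
    # chosen with probability proportional to degree (scale-free-like).
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    stubs = []                      # node repeated once per incident edge
    for i in range(m + 1):          # small seed clique
        for j in range(i):
            adj[i].add(j); adj[j].add(i); stubs += [i, j]
    for v in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(stubs))
        for u in chosen:
            adj[v].add(u); adj[u].add(v); stubs += [u, v]
    return adj

def lcc_fraction(adj, removed):
    # fraction of all nodes in the largest surviving connected component
    alive = set(adj) - removed
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        q, comp = deque([s]), 0
        seen.add(s)
        while q:
            x = q.popleft(); comp += 1
            for y in adj[x]:
                if y in alive and y not in seen:
                    seen.add(y); q.append(y)
        best = max(best, comp)
    return best / len(adj)

adj = ba_graph()
k = 30                                             # remove 10% of the nodes
rng = random.Random(7)
random_removed = set(rng.sample(sorted(adj), k))   # random attack
hubs = set(sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k])  # selective
f_rand = lcc_fraction(adj, random_removed)
f_targ = lcc_fraction(adj, hubs)
print(f"LCC after random attack: {f_rand:.2f}, after targeted attack: {f_targ:.2f}")
```

    The qualitative outcome matches the paper's finding: scale-free networks degrade gracefully under random attack but fragment much faster when hubs are targeted.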

  11. CoSimulating Communication Networks and Electrical System for Performance Evaluation in Smart Grid

    Directory of Open Access Journals (Sweden)

    Hwantae Kim

    2018-01-01

    In the smart grid research domain, simulation study is the first choice, since the analytic complexity is too high and constructing a testbed is very expensive. However, since the communication infrastructure and the power grid are tightly coupled with each other in the smart grid, a well-defined combination of simulation tools for the two systems is required. Therefore, in this paper, we propose a cosimulation framework called OOCoSim, which consists of OPNET (a network simulation tool) and OpenDSS (a power system simulation tool). By employing these simulation tools, an organic and dynamic cosimulation can be realized, since both simulators operate on the same computing platform and provide external interfaces through which the simulation can be managed dynamically. In this paper, we provide the OOCoSim design principles, including a synchronization scheme, and detailed descriptions of its implementation. To demonstrate the effectiveness of OOCoSim, we define a smart grid application model and conduct a simulation study to see the impact of the defined application and the underlying network system on the distribution system. The simulation results show that the proposed OOCoSim can successfully simulate the integrated scenario of the power and network systems and produce accurate effects of the networked control in the smart grid.
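    The synchronization scheme such a cosimulation needs can be illustrated with a minimal lockstep loop: both simulators advance by the same sync interval and exchange outputs between intervals. The two step functions below are illustrative stand-ins for OPNET and OpenDSS, not their real APIs; all dynamics are assumed.

```python
def cosimulate(power_step, network_step, steps, sync_dt):
    """Lockstep cosimulation sketch: each side advances sync_dt using the
    other's last published output, then states are exchanged."""
    t = 0.0
    grid_state, net_state = 1.0, 0.0
    log = []
    for _ in range(steps):
        grid_state = power_step(grid_state, net_state, sync_dt)
        net_state = network_step(net_state, grid_state, sync_dt)
        t += sync_dt
        log.append((t, grid_state, net_state))
    return log

# toy dynamics: the "network" relays a control signal that damps the "grid";
# the coupled system settles near grid_state = sqrt(0.5)
power = lambda g, n, dt: g + dt * (0.5 - n * g)   # drifts unless controlled
network = lambda n, g, dt: n + dt * (g - n)       # delayed measurement of g
log = cosimulate(power, network, steps=200, sync_dt=0.1)
print(log[-1])
```

    The sync interval is the key design knob: a shorter interval tracks the coupling more faithfully but forces both simulators to pause and exchange state more often.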

  12. Tsunami simulation of 2011 Tohoku-Oki Earthquake. Evaluation of difference in tsunami wave pressure acting around Fukushima Daiichi Nuclear Power Station and Fukushima Daini Nuclear Power Station among different tsunami source models

    International Nuclear Information System (INIS)

    Fujihara, Satoru; Hashimoto, Norihiko; Korenaga, Mariko; Tamiya, Takahiro

    2016-01-01

    Since the 2011 Tohoku-Oki Earthquake, evaluations based on a tsunami simulation approach have had a very important role in promoting tsunami disaster prevention measures in the case of mega-thrust earthquakes. When considering tsunami disaster prevention measures based on the knowledge obtained from tsunami simulations, it is important to carefully examine the type of tsunami source model. In current tsunami simulations, there are various ways to set the tsunami source model, and a considerable difference in tsunami behavior can be expected among the tsunami source models. In this study, we carry out a tsunami simulation of the 2011 Tohoku-Oki Earthquake around Fukushima Daiichi (I) Nuclear Power Plant and Fukushima Daini (II) Nuclear Power Plant in Fukushima Prefecture, Japan, using several tsunami source models, and evaluate the difference in the tsunami behavior in the tsunami inundation process. The results show that, for an incoming tsunami inundating an inland region, there are considerable relative differences in the maximum tsunami height and wave pressure among the source models. This suggests that tsunami disaster prevention measures for mega-thrust earthquakes could rest on misleading information, depending on the tsunami source model used. (author)

  13. Simulating large-scale spiking neuronal networks with NEST

    OpenAIRE

    Schücker, Jannis; Eppler, Jochen Martin

    2014-01-01

    The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...

  14. Enterprise Networks for Competences Exchange: A Simulation Model

    Science.gov (United States)

    Remondino, Marco; Pironti, Marco; Pisano, Paola

    A business process is a set of logically related tasks performed to achieve a defined business outcome, and is related to improving organizational processes. Process innovation can happen at various levels: incrementally, through redesign of existing processes, or through entirely new processes. The knowledge behind process innovation can be shared, acquired, changed and increased by the enterprises inside a network. An enterprise can decide to exploit innovative processes it owns, thus potentially gaining competitive advantage, but risking, in turn, that other players could reach the same technological levels. Or it could decide to share them, in exchange for other competencies or money. These activities could be the basis for network formation and/or impact the topology of an existing network. In this work an agent-based model (E3) is introduced, aiming to explore how a process innovation can facilitate network formation, affect its topology, induce new players to enter the market and spread onto the network by being shared or developed by new players.

  15. Evaluation and Simulation of Common Video Conference Traffics in Communication Networks

    Directory of Open Access Journals (Sweden)

    Farhad faghani

    2014-01-01

    Multimedia traffics are the basic traffics in data communication networks; video conferences in particular are the most demanding traffics in large networks (wired, wireless, …). Traffic modeling can help us to evaluate real networks, so in data communication networks which provide multimedia services QoS is very important. In this research we develop an exact traffic model design and simulation to overcome QoS challenges. We also predict bandwidth with a Kalman filter in Ethernet networks.
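    The Kalman-filter bandwidth prediction the abstract mentions reduces, in its simplest form, to a scalar filter with a random-walk state model and noisy rate measurements. The sketch below is a generic scalar Kalman filter, not the paper's model; the noise variances and sample values are assumptions.

```python
def kalman_1d(measurements, q=1.0, r=4.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter: random-walk state (bandwidth), noisy samples.
    q: process noise variance, r: measurement noise variance (assumed)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: random-walk state
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = [10, 12, 9, 11, 10, 30, 11, 10, 12, 9]  # Mb/s samples with one spike
est = kalman_1d(noisy)
print([round(e, 1) for e in est])
```

    The filter tracks the steady rate while attenuating the transient spike, which is the behavior a bandwidth predictor wants before provisioning capacity for a video-conference flow.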

  16. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-01

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks’ statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network steadily changes, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies. PMID:29316614

  17. Computer Networks E-learning Based on Interactive Simulations and SCORM

    Directory of Open Access Journals (Sweden)

    Francisco Andrés Candelas

    2011-05-01

    This paper introduces a new set of compact interactive simulations developed for the constructive learning of computer networks concepts. These simulations, which compose a virtual laboratory implemented as portable Java applets, have been created by combining EJS (Easy Java Simulations) with the KivaNS API. Furthermore, in this work, the skills and motivation level acquired by the students are evaluated and measured when these simulations are combined with Moodle and SCORM (Sharable Content Object Reference Model) documents. This study has been developed to improve and stimulate autonomous constructive learning, in addition to providing timetable flexibility for a Computer Networks subject.

  18. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid-based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
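    The utilization effect of loss synchronization can be seen in a minimal fluid-style AIMD model: when every flow halves its window at a loss event (fully synchronized), the aggregate sawtooth leaves the link idle much of the time; when only one flow backs off (desynchronized), the aggregate stays near capacity. This is a toy sketch, not the paper's model; all rates and counts are assumptions.

```python
import random

def aimd_utilization(n_flows=10, capacity=1000.0, rtts=200, sync=True, seed=3):
    """Fluid-style AIMD: each RTT every window grows by one packet; when the
    aggregate exceeds the bottleneck capacity a loss event occurs.
    sync=True halves every flow; sync=False halves one random flow."""
    rng = random.Random(seed)
    w = [capacity / (2 * n_flows)] * n_flows
    carried = 0.0
    for _ in range(rtts):
        for i in range(n_flows):
            w[i] += 1.0                      # additive increase
        total = sum(w)
        if total > capacity:                 # loss event at the bottleneck
            if sync:
                w = [x / 2 for x in w]       # synchronized: all flows halve
            else:
                w[rng.randrange(n_flows)] /= 2   # desynchronized: one halves
            total = sum(w)
        carried += min(total, capacity)
    return carried / (rtts * capacity)       # average link utilization

u_sync = aimd_utilization(sync=True)
u_desync = aimd_utilization(sync=False)
print(f"utilization synchronized={u_sync:.3f} desynchronized={u_desync:.3f}")
```

    Even this crude model reproduces the tradeoff the paper studies: desynchronized losses keep utilization high, while full synchronization sacrifices utilization but treats flows identically.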

  19. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Directory of Open Access Journals (Sweden)

    Wen Zhou

    As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  20. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Science.gov (United States)

    Zhou, Wen; Koptyug, Nikita; Ye, Shutao; Jia, Yifan; Lu, Xiaolong

    2016-01-01

    As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  1. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    Science.gov (United States)

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
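    The core trick of the temporal Gillespie algorithm can be sketched directly: draw an Exp(1) target in integrated-rate units, then advance through the snapshot sequence spending total_rate x dt per snapshot until the target is exhausted, at which point one event fires. The SIS implementation below is an illustrative sketch in the spirit of the paper, not its published code; network, rates and seeds are assumptions.

```python
import math, random

def temporal_gillespie_sis(snapshots, dt, beta=0.8, mu=0.1, seed=11, i0={0}):
    """SIS on a time-varying contact network given as a list of edge-list
    snapshots, each lasting dt. Waiting times are drawn once in
    integrated-rate units and consumed across snapshots."""
    rng = random.Random(seed)
    infected = set(i0)
    target = -math.log(rng.random())          # Exp(1) waiting "time"
    for edges in snapshots:
        budget = dt                           # time left in this snapshot
        while True:
            # S-I edges active in this snapshot, plus recoveries
            si = [(u, v) for u, v in edges if (u in infected) != (v in infected)]
            rates = [beta] * len(si) + [mu] * len(infected)
            total = sum(rates)
            if total == 0 or total * budget < target:
                target -= total * budget      # carry remainder to next snapshot
                break
            budget -= target / total          # event fires inside this snapshot
            target = -math.log(rng.random())
            r, acc = rng.random() * total, 0.0
            for k, rate in enumerate(rates):  # pick event proportional to rate
                acc += rate
                if r <= acc:
                    break
            if k < len(si):                   # infection along the k-th S-I edge
                u, v = si[k]
                infected.add(v if u in infected else u)
            else:                             # recovery
                infected.discard(sorted(infected)[k - len(si)])
    return infected

# toy temporal network: a ring whose edges blink on and off over time
n = 20
ring = [(i, (i + 1) % n) for i in range(n)]
rng = random.Random(5)
snaps = [[e for e in ring if rng.random() < 0.7] for _ in range(200)]
final = temporal_gillespie_sis(snaps, dt=0.5)
print(len(final), "infected at end")
```

    Because the exponential target is drawn once and consumed across snapshots, no candidate events are proposed and rejected, which is where the speedup over rejection sampling comes from.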

  2. Impact of stoichiometry representation on simulation of genotype-phenotype relationships in metabolic networks

    DEFF Research Database (Denmark)

    Brochado, Ana Rita; Andrejev, Sergej; Maranas, Costas D.

    2012-01-01

    ...the formulation of the desired objective functions, by casting objective functions using metabolite turnovers rather than fluxes. By simulating perturbed metabolic networks, we demonstrate that the use of stoichiometry-representation-independent algorithms is fundamental for unambiguously linking modeling results...

  3. FERN - a Java framework for stochastic simulation and evaluation of reaction networks.

    Science.gov (United States)

    Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf

    2008-08-29

    Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either a) do not provide the most efficient simulation algorithms and are difficult to extend, b) cannot be easily integrated into other applications or c) do not allow the user to monitor and intervene during the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real-time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new
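    The exact algorithm family FERN provides is anchored by Gillespie's direct method: draw the time to the next reaction from an exponential with the total propensity, then pick which reaction fired proportionally to its propensity. The sketch below applies it to a toy birth-death network (not one of FERN's examples; rates are assumptions), whose stationary mean copy number is k1/k2.

```python
import math, random

def gillespie_ssa(x0, stoich, propensity, t_end, seed=2):
    """Gillespie's direct method for a well-mixed reaction network.
    stoich[j] is the state change of reaction j; propensity(x) returns
    the per-reaction rates in the current state x."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    trajectory = [(t, list(x))]
    while t < t_end:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0:                           # no reaction can fire
            break
        t += -math.log(rng.random()) / a0     # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):            # choose reaction ~ propensity
            acc += aj
            if r <= acc:
                break
        for i, dx in enumerate(stoich[j]):    # apply its stoichiometry
            x[i] += dx
        trajectory.append((t, list(x)))
    return trajectory

# toy network: 0 -> M (rate k1), M -> 0 (rate k2*M); stationary mean k1/k2
k1, k2 = 10.0, 1.0
traj = gillespie_ssa(
    x0=[0],
    stoich=[[+1], [-1]],
    propensity=lambda x: [k1, k2 * x[0]],
    t_end=200.0,
)
# time-averaged copy number over the trajectory
avg = sum((t2 - t1) * x[0] for (t1, x), (t2, _) in zip(traj, traj[1:])) / traj[-1][0]
print(f"time-averaged mRNA copy number: {avg:.1f} (theory: {k1/k2:.1f})")
```

    A framework like FERN wraps exactly this loop behind a network-representation layer and an observer system, so the same model can be run with exact or approximate (e.g. tau-leaping) kernels without changing the model description.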

  4. Earthquake Catalogue of the Caucasus

    Science.gov (United States)

    Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.

    2016-12-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms˜7.0, Io=9); Lechkhumi-Svaneti earthquake of 1350 (Ms˜7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms˜6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms˜6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms˜6.3, Io =8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; Racha earthquake of 1991 (Ms=7.0), is the largest event ever recorded in the region; Barisakho earthquake of 1992 (M=6.5); Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia (˜25 stations), Azerbaijan (˜35 stations), Armenia (˜14 stations)). The data from the last 10 years of observation provides an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus. The catalogue consists of more then 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records. We improved locations of the events and recalculate Moment magnitudes in order to obtain unified magnitude

  5. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.
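
    The quantities the model is evaluated on, characteristic path length and clustering coefficient, can be reproduced on a toy scale. Below is a sketch of the original Watts-Strogatz construction and its clustering coefficient (the paper's degree-distribution extension is not implemented here; parameters are illustrative):

```python
import random

def watts_strogatz(n, k, p, seed=1):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    (k even); every edge is rewired to a random endpoint with prob. p."""
    rng = random.Random(seed)
    ring = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges = set()
    for (u, v) in ring:
        if rng.random() < p:           # rewire the far endpoint
            w = rng.randrange(n)
            while w == u or (u, w) in edges or (w, u) in edges:
                w = rng.randrange(n)
            edges.add((u, w))
        else:
            edges.add((u, v))
    return edges

def clustering(edges, n):
    """Average clustering coefficient from an undirected edge set."""
    adj = {i: set() for i in range(n)}
    for (u, v) in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0.0
    for i in range(n):
        nb = list(adj[i])
        if len(nb) < 2:
            continue
        links = sum(1 for a in range(len(nb)) for b in range(a + 1, len(nb))
                    if nb[b] in adj[nb[a]])
        total += 2.0 * links / (len(nb) * (len(nb) - 1))
    return total / n

g_regular = watts_strogatz(200, 6, 0.0)   # pure ring lattice
g_random = watts_strogatz(200, 6, 1.0)    # fully rewired
```

    For the pure ring lattice with k = 6 the clustering coefficient is exactly 3(k-2)/(4(k-1)) = 0.6, while full rewiring drives it toward k/n; the small-world regime lives between these extremes.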

  6. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.

  7. Development of neural network simulating power distribution of a BWR fuel bundle

    International Nuclear Information System (INIS)

    Tanabe, A.; Yamamoto, T.; Shinfuku, K.; Nakamae, T.

    1992-01-01

    A neural network model is developed to simulate a precise nuclear-physics analysis code for quick scoping survey calculations. The relation between enrichment and local power distribution of BWR fuel bundles was learned using a two-layer neural network (ENET). A new model introduces a burnable neutron absorber (gadolinia), added to several fuel rods to decrease the initial reactivity of a fresh bundle. A second-stage, three-layer neural network (GNET) is added on top of the first-stage network ENET; GNET learns the difference in local power distribution caused by gadolinia. Using this method, it becomes possible to survey the gradients of the sigmoid functions and the back-propagation constants in reasonable time. Using 99 learning patterns at zero burnup, a good error-convergence curve is obtained after many trials. This neural network model is able to simulate unlearned cases nearly as well as the learned cases. The computing time of this neural network model is about 100 times shorter than that of the precise analysis model. (author)

  8. An introduction to network modeling and simulation for the practicing engineer

    CERN Document Server

    Burbank, Jack; Ward, Jon

    2011-01-01

    This book provides the practicing engineer with a concise listing of commercial and open-source modeling and simulation tools currently available, including examples of applying those tools to specific Modeling and Simulation problems. Instead of focusing on the underlying theory of Modeling and Simulation and the fundamental building blocks for custom simulations, this book compares platforms used in practice and gives rules enabling the practicing engineer to utilize available Modeling and Simulation tools. The book also offers insights regarding common pitfalls in network Modeling and Simulation and practical methods for working engineers.

  9. Discrimination of Cylinders with Different Wall Thicknesses using Neural Networks and Simulated Dolphin Sonar Signals

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan

    1999-01-01

    This paper describes a method integrating neural networks into a system for recognizing underwater objects. The system is based on a combination of simulated dolphin sonar signals, simulated auditory filters and artificial neural networks. The system is tested on a cylinder wall-thickness difference experiment and demonstrates high accuracy for small wall-thickness differences. Results from the experiment are compared with results obtained by a false killer whale (Pseudorca crassidens).

  10. On the agreement between small-world-like OFC model and real earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Douglas S.R., E-mail: douglas.ferreira@ifrj.edu.br [Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro, Paracambi, RJ (Brazil); Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Papa, Andrés R.R., E-mail: papa@on.br [Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Instituto de Física, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, RJ (Brazil); Menezes, Ronaldo, E-mail: rmenezes@cs.fit.edu [BioComplex Laboratory, Computer Sciences, Florida Institute of Technology, Melbourne (United States)

    2015-03-20

    In this article we implemented simulations of the OFC model for earthquakes for two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we have also characterized the influence that the probability p has on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each one. In our results, distributions arise that belong to a family of non-traditional distribution functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth is in a self-organized critical state and furthermore point towards temporal and spatial correlations between earthquakes in different places. - Highlights: • OFC model simulations for regular and small-world topologies. • For the small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a self-organized critical state for the Earth's crust. • Point towards temporal and spatial correlations between earthquakes in far-apart places.
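
    For readers unfamiliar with the OFC (Olami-Feder-Christensen) model, a minimal sketch of its regular-lattice variant follows (open boundaries, no small-world rewiring; the lattice size, dissipation parameter alpha and event count are illustrative choices, not the paper's):

```python
import random

def ofc_step(f, L, alpha=0.2, fth=1.0):
    """Drive the lattice until one site reaches threshold, then relax it.
    Returns the earthquake (avalanche) size in number of topplings."""
    fmax = max(max(row) for row in f)
    df = fth - fmax                       # uniform slow drive
    for i in range(L):
        for j in range(L):
            f[i][j] += df
    size = 0
    unstable = [(i, j) for i in range(L) for j in range(L) if f[i][j] >= fth]
    while unstable:
        nxt = []
        for (i, j) in unstable:
            s = f[i][j]
            f[i][j] = 0.0                 # the site relaxes...
            size += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < L and 0 <= nj < L:   # open borders dissipate
                    f[ni][nj] += alpha * s        # ...loading its neighbours
                    if f[ni][nj] >= fth and (ni, nj) not in nxt:
                        nxt.append((ni, nj))
        unstable = nxt
    return size

rng = random.Random(0)
L = 16
stress = [[rng.random() for _ in range(L)] for _ in range(L)]
sizes = [ofc_step(stress, L) for _ in range(2000)]
```

    With alpha < 0.25 the dynamics are non-conservative; the paper's small-world variant replaces some of the nearest-neighbour bonds above with randomly rewired ones.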

  11. On the agreement between small-world-like OFC model and real earthquakes

    International Nuclear Information System (INIS)

    Ferreira, Douglas S.R.; Papa, Andrés R.R.; Menezes, Ronaldo

    2015-01-01

    In this article we implemented simulations of the OFC model for earthquakes for two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we have also characterized the influence that the probability p has on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each one. In our results, distributions arise that belong to a family of non-traditional distribution functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth is in a self-organized critical state and furthermore point towards temporal and spatial correlations between earthquakes in different places. - Highlights: • OFC model simulations for regular and small-world topologies. • For the small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a self-organized critical state for the Earth's crust. • Point towards temporal and spatial correlations between earthquakes in far-apart places.

  12. Simulation of noise-assisted transport via optical cavity networks

    International Nuclear Information System (INIS)

    Caruso, Filippo; Plenio, Martin B.; Spagnolo, Nicolo; Vitelli, Chiara; Sciarrino, Fabio

    2011-01-01

    Recently, the presence of noise has been found to play a key role in assisting the transport of energy and information in complex quantum networks and even in biomolecular systems. Here we propose an experimentally realizable optical network scheme for the demonstration of the basic mechanisms underlying noise-assisted transport. The proposed system consists of a network of coupled quantum-optical cavities, injected with a single photon, whose transmission efficiency can be measured. When dephasing is introduced in the photon path, the system exhibits a characteristic enhancement of the transport efficiency that can be observed with presently available technology.

  13. Smart Grid: Network simulator for smart grid test-bed

    International Nuclear Information System (INIS)

    Lai, L C; Ong, H S; Che, Y X; Do, N Q; Ong, X J

    2013-01-01

    As the smart grid becomes more popular, a smaller-scale smart grid test-bed has been set up at UNITEN to investigate its performance and to identify future enhancements of the smart grid in Malaysia. The fundamental requirement in this project is to design a network with low delay, no packet drops and a high data rate. Each type of traffic has its own characteristics and suits different types of network and requirements; however, the nature of traffic in a smart grid is not well understood. This paper presents a comparison between different types of traffic to find the most suitable one for optimal network performance.

  14. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  15. Importance Sampling Simulation of Population Overflow in Two-node Tandem Networks

    NARCIS (Netherlands)

    Nicola, V.F.; Zaburnenko, T.S.; Baier, C; Chiola, G.; Smirni, E.

    2005-01-01

    In this paper we consider the application of importance sampling in simulations of Markovian tandem networks in order to estimate the probability of rare events, such as network population overflow. We propose a heuristic methodology to obtain a good approximation to the 'optimal' state-dependent
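
    The truncated passage concerns a state-dependent change of measure for tandem queues. As a generic illustration of the underlying idea (my own example, not the authors' heuristic), importance sampling with an exponentially tilted proposal can estimate a tail probability that naive simulation would almost never hit:

```python
import math
import random

def normal_tail_is(b, n=200_000, seed=1):
    """Estimate p = P(Z > b) for Z ~ N(0,1) by importance sampling:
    draw from N(b, 1) so the rare region is hit about half the time,
    and reweight each hit by the likelihood ratio phi(z)/phi(z - b)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(b, 1.0)
        if z > b:
            total += math.exp(b * b / 2.0 - b * z)   # likelihood ratio
    return total / n

p_hat = normal_tail_is(4.0)   # exact value is about 3.167e-5
```

    Naive Monte Carlo with the same 200,000 samples would see only a handful of exceedances of b = 4; the tilted estimator keeps the relative error below about one percent. State-dependent schemes such as the one in the paper adapt the tilting to the current network population instead of using a fixed shift.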

  16. Simulated epidemics in an empirical spatiotemporal network of 50,185 sexual contacts.

    Directory of Open Access Journals (Sweden)

    Luis E C Rocha

    2011-03-01

    Sexual contact patterns, both in their temporal and network structure, can influence the spread of sexually transmitted infections (STI. Most previous literature has focused on effects of network topology; few studies have addressed the role of temporal structure. We simulate disease spread using SI and SIR models on an empirical temporal network of sexual contacts in high-end prostitution. We compare these results with several other approaches, including randomization of the data, classic mean-field approaches, and static network simulations. We observe that epidemic dynamics in this contact structure have well-defined, rather high epidemic thresholds. Temporal effects create a broad distribution of outbreak sizes, even if the per-contact transmission probability is taken to its hypothetical maximum of 100%. In general, we conclude that the temporal correlations of our network accelerate outbreaks, especially in the early phase of the epidemics, while the network topology (apart from the contact-rate distribution slows them down. We find that the temporal correlations of sexual contacts can significantly change simulated outbreaks in a large empirical sexual network. Thus, temporal structures are needed alongside network topology to fully understand the spread of STIs. On a side note, our simulations further suggest that the specific type of commercial sex we investigate is not a reservoir of major importance for HIV.
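
    Replaying a disease process over a time-ordered contact list, as the study does, can be sketched as follows (synthetic contacts and SI dynamics only; the paper also uses SIR models and a large empirical data set):

```python
import random

def si_on_temporal_network(contacts, p_transmit, source, seed=1):
    """Susceptible-Infected process replayed over a time-ordered list of
    contacts (t, u, v): each contact between an infected and a
    susceptible node transmits with probability p_transmit."""
    rng = random.Random(seed)
    infected = {source}
    for _, u, v in sorted(contacts):              # chronological replay
        if (u in infected) != (v in infected) and rng.random() < p_transmit:
            infected.update((u, v))
    return infected

# Hypothetical synthetic contact list: 50 nodes, one contact per time step
rng = random.Random(0)
contacts = [(t, rng.randrange(50), rng.randrange(50)) for t in range(2000)]
contacts = [c for c in contacts if c[1] != c[2]]  # drop self-contacts
outbreak = si_on_temporal_network(contacts, p_transmit=0.5, source=0)
```

    The paper's randomization tests amount to shuffling the timestamps (or the whole contact list) before replay and comparing outbreak curves; here the synthetic contacts carry no temporal correlations, so shuffling would change little.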

  17. Transport link scanner: simulating geographic transport network expansion through individual investments

    NARCIS (Netherlands)

    Koopmans, C.C.; Jacobs, C.G.W.

    2016-01-01

    This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness

  18. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Jan eHahne

    2015-09-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
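
    The waveform-relaxation idea, iterating over whole trajectories while coupling through the previous iteration's waveform, can be shown on a two-variable linear system (toy ODEs of my choosing, not a neuron model, integrated with explicit Euler):

```python
import math

def waveform_relaxation(T=1.0, steps=1000, sweeps=25):
    """Jacobi waveform relaxation for the coupled linear system
        x' = -2x + y,  y' = x - 2y,  x(0) = 1, y(0) = 0.
    Each sweep re-integrates both equations over the whole window,
    coupling through the *previous* sweep's waveforms -- the pattern
    that permits communication at long intervals."""
    h = T / steps
    x = [1.0] * (steps + 1)            # initial waveform guesses
    y = [0.0] * (steps + 1)
    for _ in range(sweeps):
        xn, yn = [1.0], [0.0]
        for i in range(steps):
            xn.append(xn[i] + h * (-2.0 * xn[i] + y[i]))   # lagged y
            yn.append(yn[i] + h * (x[i] - 2.0 * yn[i]))    # lagged x
        x, y = xn, yn
    return x[-1], y[-1]

x1, y1 = waveform_relaxation()
# exact solution: x = (e^-t + e^-3t)/2,  y = (e^-t - e^-3t)/2
```

    The iteration converges to the solution of the fully coupled Euler scheme; in the paper's setting the sub-systems are neurons coupled by gap junctions and each sweep exchanges membrane-potential waveforms instead of scalars.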

  19. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)

  20. A minimalist model of characteristic earthquakes

    DEFF Research Database (Denmark)

    Vázquez-Prada, M.; González, Á.; Gómez, J.B.

    2002-01-01

    In a spirit akin to the sandpile model of self-organized criticality, we present a simple statistical model of the cellular-automaton type which simulates the role of an asperity in the dynamics of a one-dimensional fault. This model produces an earthquake spectrum similar to the characteristic-earthquake behaviour of some seismic faults. The model, which has no parameters, is amenable to an algebraic description as a Markov chain. This possibility illuminates some important results, obtained by Monte Carlo simulations, such as the earthquake size-frequency relation and the recurrence time of the characteristic earthquake.
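
    A sketch in the spirit of such an asperity automaton follows; the rules below are my simplification for illustration, not necessarily the authors' exact model:

```python
import random

def asperity_automaton(n, events, seed=1):
    """Parameter-free sketch of a 1-d fault with an asperity: stress
    particles land on random sites; site 0 is the asperity, and a hit
    on it relaxes the contiguous loaded block starting there, producing
    an earthquake whose size is the number of relaxed sites."""
    rng = random.Random(seed)
    occupied = [False] * n        # site 0 is the asperity, never "loaded"
    sizes = []
    while len(sizes) < events:
        s = rng.randrange(n)
        if s == 0:                # asperity fails -> earthquake
            k = 1
            while k < n and occupied[k]:
                occupied[k] = False
                k += 1
            sizes.append(k)       # asperity plus the relaxed block
        else:
            occupied[s] = True
    return sizes

sizes = asperity_automaton(32, 500)
```

    Because the state is just the length of the loaded block adjacent to the asperity, dynamics like these admit an exact Markov-chain description, which is the kind of algebraic treatment the abstract refers to.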

  1. Developing Simulated Cyber Attack Scenarios Against Virtualized Adversary Networks

    Science.gov (United States)

    2017-03-01

    enclave, as shown in Figure 11, is a common design for many secure networks. Different variations of a cyber-attack scenario can be rehearsed based...achieved a greater degree of success against multiple variations of an enemy network. E. ATTACK TYPES A primary goal of this thesis is to define and...2013. [33] R. Goldberg , “Architectural principles for virtual computer systems,” Ph.D. dissertation, Dept. of Comp. Sci., Harvard Univ., Cambridge

  2. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released, with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts was used in each Monte Carlo simulation to re-calculate next-generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate into network hubs; backward migration of the same number of individuals was random and not biased towards body height. Network hubs were defined by the importance of a district within the spatial network, evaluated by these centrality measures. In the null model there were no road connections, so height information could not be transferred between the districts. 
Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later
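
    The biased-migration mechanism can be sketched with a much-simplified network (a single hub and uniform districts; the 170 cm mean and 6.5 cm standard deviation follow the abstract, everything else is my assumption):

```python
import random
import statistics

def run_migration(n_districts=10, n_per=200, rounds=100, seed=1):
    """Agents start at ~N(170, 6.5) cm in every district. Each round,
    every non-hub district sends the taller of two randomly drawn
    residents to the hub (district 0); the hub returns the same number
    of randomly chosen residents to random districts (height-blind)."""
    rng = random.Random(seed)
    districts = [[rng.gauss(170.0, 6.5) for _ in range(n_per)]
                 for _ in range(n_districts)]
    for _ in range(rounds):
        arrivals = 0
        for d in range(1, n_districts):
            i, j = rng.sample(range(len(districts[d])), 2)
            tall = i if districts[d][i] >= districts[d][j] else j
            districts[0].append(districts[d].pop(tall))
            arrivals += 1
        for _ in range(arrivals):          # unbiased backward flow
            person = districts[0].pop(rng.randrange(len(districts[0])))
            districts[rng.randrange(1, n_districts)].append(person)
    return statistics.mean(districts[0])

hub_mean = run_migration()
```

    Even this crude version shows the effect the study quantifies: selecting the taller of two migrants raises the hub's mean height by a few centimetres above the population mean, while the height-blind backward flow leaves it largely intact.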

  3. Digitalization and networking of analog simulators and portal images

    Energy Technology Data Exchange (ETDEWEB)

    Pesznyak, C.; Zarand, P.; Mayer, A. [Uzsoki Hospital, Budapest (Hungary). Inst. of Oncoradiology

    2007-03-15

    Background: Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging. Import of the images into the R and V (Record and Verify) system is required. Material and Methods: Simulator images are frame-grabbed, while portal films are scanned with a laser scanner; both are converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. Results: The image intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed in cooperation at the authors' hospital. Conclusion: The digitalization of analog simulators is a valuable upgrade in clinical use, replacing the screen-film technique. Film scanning and digitalization permit the electronic archiving of films. Conversion into DICOM RT images is a precondition for importing into the R and V system. (orig.)

  4. Digitalization and networking of analog simulators and portal images.

    Science.gov (United States)

    Pesznyák, Csilla; Zaránd, Pál; Mayer, Arpád

    2007-03-01

    Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging. Import of the images into the R&V (Record & Verify) system is required. Simulator images are frame-grabbed, while portal films are scanned with a laser scanner; both are converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. The image intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed in cooperation at the authors' hospital. The digitalization of analog simulators is a valuable upgrade in clinical use, replacing the screen-film technique. Film scanning and digitalization permit the electronic archiving of films. Conversion into DICOM RT images is a precondition for importing into the R&V system.

  5. On the plant operators performance during earthquake

    International Nuclear Information System (INIS)

    Kitada, Y.; Yoshimura, S.; Abe, M.; Niwa, H.; Yoneda, T.; Matsunaga, M.; Suzuki, T.

    1994-01-01

    There is little data on which to judge the performance of plant operators during and after strong earthquakes. In order to obtain such data and enhance the reliability of plant operation, a Japanese utility and a power plant manufacturer carried out a vibration test using a shaking table. The purpose of the test was to investigate operator performance, i.e., the quickness and correctness of switch handling and panel meter read-out. The movement of chairs during an earthquake was also of interest, because if the chairs moved significantly or turned over during a strong earthquake, an arresting mechanism would be required for the chair. Although there were differences between the simulated earthquake motions used and actual earthquakes, mainly due to the specifications of the shaking table, the earthquake motions had almost no influence on the operators' capability (performance) in operating the simulated console and the personal computers.

  6. The Dynamic Behavior of a Network of Pipelines and Liquefaction of Soil Caused by the Earthquake Acceleration

    Directory of Open Access Journals (Sweden)

    Alireza Mirza Goltabar Roshan

    2015-09-01

    Risk analysis of pipelines under earthquakes, as some of the most vital arteries in today's world, is of special importance. Underground structures such as pipes, tunnels and wells serve everyday activities such as water supply, transportation, irrigation, drainage, sewage disposal, and the transport of oil, gas, and acid, industrial or household waste. Given the huge investment in such structures, especially buried pipelines, the need to study their response to earthquakes is clearly felt. Pipelines used for transporting gas and other fluids are widely distributed in all areas, and because they pass through densely populated areas they are usually buried in the ground. As a result of the interaction between the soil and the pipe, the seismic behavior of these pipes differs from that of above-ground structures. In this thesis, the effects of soil liquefaction on the pipes are modeled by defining two shear springs and one normal spring between the soil and the pipe; in the liquefied state these springs minimize the frictional shear strength.

  7. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
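
    The piecewise-deterministic structure, a deterministic flow of the soluble precursor pool punctuated by stochastic formation events, can be sketched as follows (the rates, the mechanism probabilities and the one-unit consumption rule are invented for illustration; the paper's model is spatially resolved):

```python
import math
import random

def pdmp_formation(t_end=50.0, seed=1):
    """PDMP sketch: between events the soluble precursor pool p follows
    the deterministic flow dp/dt = s - d*p; network-formation events
    occur as a point process with intensity c*p (simulated by thinning),
    each consuming one precursor and chosen among three mechanisms."""
    rng = random.Random(seed)
    s, d, c = 2.0, 0.05, 0.1           # supply, decay, event-rate constants
    p, t = 10.0, 0.0
    counts = {"nucleation": 0, "elongation": 0, "branching": 0}
    lam_max = c * (s / d)              # p never exceeds s/d, so this bounds c*p
    while True:
        dt = rng.expovariate(lam_max)  # candidate inter-event time
        if t + dt >= t_end:
            break
        t += dt
        p = s / d + (p - s / d) * math.exp(-d * dt)   # deterministic flow
        if p >= 1.0 and rng.random() < (c * p) / lam_max:  # thinning accept
            kind = rng.choices(["nucleation", "elongation", "branching"],
                               weights=[0.2, 0.6, 0.2])[0]
            counts[kind] += 1
            p -= 1.0                   # jump: one precursor consumed
    return counts, p

counts, pool = pdmp_formation()
```

    In the full model the event location also depends on the spatial distribution of the pool, and the balance between the elongation and branching probabilities is what shapes the network architecture.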

  8. Fracture Simulation of Highly Crosslinked Polymer Networks: Triglyceride-Based Adhesives

    Science.gov (United States)

    Lorenz, Christian; Stevens, Mark; Wool, Richard

    2003-03-01

    The ACRES program at the U. of Delaware has shown that triglyceride oils derived from plants are a favorable alternative to traditional adhesives. The triglyceride networks are formed from an initial mixture of styrene monomers, free-radical initiators and triglycerides. We have performed simulations to study the effect of the composition and physical characteristics of the triglyceride network on its strength. A coarse-grained, bead-spring model of the triglyceride system is used: the average triglyceride consists of 6 beads per chain, the styrenes are represented as single beads, and the initiators are two-bead chains. The polymer network is formed using an off-lattice 3D Monte Carlo simulation, in which the initiators activate the styrene and triglyceride reactive sites and bonds are then formed randomly between the styrene and active triglyceride monomers, producing a highly crosslinked polymer network. Molecular dynamics simulations of the network under tensile and shear strains were performed to determine the strength as a function of the network composition. The relationship between the network structure and its strength will also be discussed.

  9. Modeling and simulating the adaptive electrical properties of stochastic polymeric 3D networks

    International Nuclear Information System (INIS)

    Sigala, R; Smerieri, A; Camorani, P; Schüz, A; Erokhin, V

    2013-01-01

    Memristors are passive two-terminal circuit elements that combine resistance and memory. Although in theory memristors are a very promising approach to fabricate hardware with adaptive properties, there are only very few implementations able to show their basic properties. We recently developed stochastic polymeric matrices with a functionality that evidences the formation of self-assembled three-dimensional (3D) networks of memristors. We demonstrated that those networks show the typical hysteretic behavior observed in the ‘one input-one output’ memristive configuration. Interestingly, using different protocols to electrically stimulate the networks, we also observed that their adaptive properties are similar to those present in the nervous system. Here, we model and simulate the electrical properties of these self-assembled polymeric networks of memristors, the topology of which is defined stochastically. First, we show that the model recreates the hysteretic behavior observed in the real experiments. Second, we demonstrate that the networks modeled indeed have a 3D instead of a planar functionality. Finally, we show that the adaptive properties of the networks depend on their connectivity pattern. Our model was able to replicate fundamental qualitative behavior of the real organic 3D memristor networks; yet, through the simulations, we also explored other interesting properties, such as the relation between connectivity patterns and adaptive properties. Our model and simulations represent an interesting tool to understand the very complex behavior of self-assembled memristor networks, which can finally help to predict and formulate hypotheses for future experiments. (paper)
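
    A single memristor's pinched hysteresis, the building block of such networks, can be reproduced in a few lines. This is an HP-style toy model with illustrative constants, not a model of the paper's polymeric devices:

```python
import math

def memristor_iv(steps=5000):
    """Explicit-Euler sketch of an HP-style memristor under a 1 Hz, 1 V
    sine drive: R(w) = Ron*w + Roff*(1-w) with internal state w in [0,1]
    and dw/dt = k*i. Constants are illustrative, not fitted to a device."""
    Ron, Roff, k = 100.0, 16000.0, 2000.0
    w, dt = 0.5, 1.0 / steps            # simulate one full period
    vs, cur = [], []
    for n in range(steps):
        v = math.sin(2.0 * math.pi * n * dt)
        r = Ron * w + Roff * (1.0 - w)  # resistance set by the state
        i = v / r
        w = min(1.0, max(0.0, w + k * i * dt))  # state drifts with charge
        vs.append(v)
        cur.append(i)
    return vs, cur

vs, cur = memristor_iv()
```

    Plotting cur against vs gives the pinched loop: the current passes through zero exactly when the voltage does, but the up- and down-sweeps at the same voltage carry different currents because the state w remembers the charge that has flowed. A network of such elements, with stochastic connectivity, is what the paper models.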

  10. Overview of DOS attacks on wireless sensor networks and experimental results for simulation of interference attacks

    Directory of Open Access Journals (Sweden)

    Željko Gavrić

    2018-01-01

    Wireless sensor networks are now used in various fields. The information transmitted in wireless sensor networks is very sensitive, so security is a major issue. DOS (denial of service) attacks are a fundamental threat to the functioning of wireless sensor networks. This paper describes some of the most common DOS attacks and potential methods of protection against t