WorldWideScience

Sample records for earthquake simulation network

  1. GEM Plate Boundary Simulations for the Plate Boundary Observatory: A Program for Understanding the Physics of Earthquakes on Complex Fault Networks via Observations, Theory and Numerical Simulation

    Science.gov (United States)

    Rundle, J. B.; Rundle, P. B.; Klein, W.; de sa Martins, J.; Tiampo, K. F.; Donnellan, A.; Kellogg, L. H.

The last five years have seen unprecedented growth in the amount and quality of geodetic data collected to characterize crustal deformation in earthquake-prone areas such as California and Japan. The installation of the Southern California Integrated Geodetic Network (SCIGN) and the Bay Area Regional Deformation (BARD) network are two examples. As part of the recently proposed Earthscope NSF/GEO/EAR/MRE initiative, the Plate Boundary Observatory (PBO) plans to place more than a thousand GPS receivers, strainmeters, and other deformation sensors along the active plate boundary of the western coast of the United States, Mexico and Canada (http://www.earthscope.org/pbo.com.html). The scientific goals of PBO include understanding how tectonic plates interact, together with an emphasis on understanding the physics of earthquakes. However, the problem of understanding the physics of earthquakes on complex fault networks through observations alone is complicated by our inability to study the problem in a manner familiar to laboratory scientists, by means of controlled, fully reproducible experiments. We have therefore been motivated to construct a numerical simulation technology that will allow us to study earthquake physics via numerical experiments. To be considered successful, the simulations must not only produce observables that are maximally similar to those seen by the PBO and other observing programs, but must also provide dynamical predictions that can be falsified by means of observations on the real fault networks. In general, the dynamical behavior of earthquakes on complex fault networks is a result of the interplay between the geometric structure of the fault network and the physics of the frictional sliding process. In constructing numerical simulations of a complex fault network, we will need to solve a variety of problems, including the development of analysis techniques (also called data mining), data assimilation, space-time pattern definition

  2. Simulated earthquake ground motions

    International Nuclear Information System (INIS)

    Vanmarcke, E.H.; Gasparini, D.A.

    1977-01-01

The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures to generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work, are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms, together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response, have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids, as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to vary in time or to remain constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with, a set of specified smooth response spectra.
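
The superposition model described in this record can be sketched in a few lines: sinusoids with random phases are summed under an intensity envelope. This is a minimal illustration of the general approach, not SIMQKE itself; the trapezoidal envelope shape, amplitudes, and frequencies below are illustrative assumptions.

```python
import math
import random

def synth_ground_motion(amps, freqs, duration, dt=0.01, seed=0):
    """Superpose sinusoids with random phases under a trapezoidal
    intensity envelope -- the basic model behind SIMQKE-style
    generators.  `amps` would come from a target spectrum; `freqs`
    are angular frequencies.  All values here are illustrative."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    n = int(duration / dt) + 1
    motion = []
    for i in range(n):
        t = i * dt
        # trapezoidal envelope: ramp up over 20%, hold, ramp down over last 30%
        if t < 0.2 * duration:
            env = t / (0.2 * duration)
        elif t > 0.7 * duration:
            env = (duration - t) / (0.3 * duration)
        else:
            env = 1.0
        s = sum(a * math.sin(w * t + p) for a, w, p in zip(amps, freqs, phases))
        motion.append(env * s)
    return motion

acc = synth_ground_motion(amps=[1.0, 0.5], freqs=[2.0 * math.pi, 6.0 * math.pi], duration=10.0)
```

In practice the amplitudes would then be iteratively adjusted until the response spectrum of the simulated motion matches the prescribed smooth design spectrum.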

  3. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  4. Spatial Evaluation and Verification of Earthquake Simulators

    Science.gov (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification methods, we address the challenges of applying spatial forecast verification to simulators: namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
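
The smoothing method described in this record, distributing each simulated event's seismicity over the test region at a rate decaying with epicentral distance, can be sketched as follows. The kernel form and the parameters r0 and q are illustrative choices, not the paper's calibrated values.

```python
import math

def powerlaw_rate_map(events, cells, r0=5.0, q=1.5):
    """Spread each simulated event's seismicity over all test-region
    cells with a power-law decay in epicentral distance, in the spirit
    of the ETAS-based smoothing described above.  `events` holds
    (x, y, rate) tuples; `cells` holds (x, y) cell centers."""
    rates = [0.0] * len(cells)
    for ex, ey, w in events:
        kernel = [(math.hypot(cx - ex, cy - ey) + r0) ** (-q) for cx, cy in cells]
        norm = sum(kernel)
        for i, k in enumerate(kernel):
            rates[i] += w * k / norm  # each event's total rate is conserved
    return rates

cells = [(x, y) for x in range(5) for y in range(5)]
rates = powerlaw_rate_map([(2.0, 2.0, 1.0)], cells)
```

The resulting rate map is nonzero everywhere in the test region, so off-fault observed epicenters contribute to the verification instead of being discarded or artificially snapped to the nearest fault element.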

  5. Octree-based Global Earthquake Simulations

    Science.gov (United States)

    Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.

    2017-12-01

Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, perform global simulations of wave propagation to validate models, and study the interaction of seismic fields with 3D structures. However, seismogram computation at global scales has been limited by computational resources, relying primarily on traditional methods such as normal-mode summation or two-dimensional numerical methods. We present an octree-based-mesh finite element implementation to perform global earthquake simulations with 3D models, including topography and bathymetry via a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared synthetic seismograms computed in a spherical earth against waveforms calculated using normal-mode summation for the Preliminary Reference Earth Model (PREM), for a point-source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spheroidal oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. Agreement between the waveforms computed by the two approaches, especially for long-period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite-fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the available computational resources. We compared estimated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismographic Network and discuss the differences between observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.
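
The octree meshing idea behind this record, recursively subdividing cells only where more resolution is needed (for example near topography and bathymetry represented as a staircase), can be illustrated with a minimal sketch. The cell format and the refinement predicate are assumptions for illustration, not the Hercules toolchain's actual data structures.

```python
def refine(cell, needs_refinement, max_depth=3, depth=0):
    """Recursively split a cubic cell (x, y, z, size) into 8 children
    wherever a predicate asks for more resolution -- the basic octree
    meshing idea behind the staircase approximation described above."""
    x, y, z, size = cell
    if depth == max_depth or not needs_refinement(cell):
        return [cell]
    half = size / 2.0
    leaves = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            for dz in (0.0, half):
                leaves += refine((x + dx, y + dy, z + dz, half),
                                 needs_refinement, max_depth, depth + 1)
    return leaves

# refine only cells touching the surface z = 0 (e.g., to track bathymetry)
leaves = refine((0.0, 0.0, 0.0, 8.0), lambda c: c[2] == 0.0, max_depth=2)
```

The payoff is that element count grows only where the model demands it, which is what makes billion-element global meshes tractable.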

  6. Learning from physics-based earthquake simulators: a minimal approach

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insight into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists towards ever more Earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that, for some specific scientific objectives, a simple model may be more informative than a complex one, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clusters, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.

  7. Preferential attachment in evolutionary earthquake networks

    Science.gov (United States)

    Rezaei, Soghra; Moghaddasi, Hanieh; Darooneh, Amir Hossein

    2018-04-01

Earthquakes as spatio-temporal complex systems have recently been studied using complex network theory. Seismic networks are dynamical networks, since the addition of new seismic events over time establishes new nodes and links in the network. Here we have constructed seismic networks for Iran and Italy based on the Hybrid Model and tested the preferential attachment hypothesis for the connection of new nodes, which states that a newly added node is more likely to join the highly connected nodes than the less connected ones. We show that preferential attachment is present in the earthquake networks and that the attachment rate has a linear relationship with node degree. We have also identified the seismically passive points, those most likely to be influenced by other seismic locations, using their preferential attachment values.
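
The preferential attachment test described in this record, measuring the attachment rate as a function of node degree, can be sketched on a toy link sequence. The data format and the normalization by "available degree-k nodes" are illustrative assumptions rather than the paper's exact procedure.

```python
from collections import defaultdict

def attachment_rate(link_sequence):
    """Estimate the preferential-attachment kernel Pi(k): how often a
    newly added node links to an existing node of degree k, relative
    to how many degree-k nodes were available at that moment.
    `link_sequence` is a time-ordered list of (new_node, existing_node)
    pairs -- an illustrative stand-in for a growing seismic network."""
    degree = defaultdict(int)
    hits = defaultdict(int)       # attachments received by degree-k nodes
    exposure = defaultdict(int)   # degree-k nodes available at each step
    for new, old in link_sequence:
        if degree:
            for k in degree.values():   # count available targets by degree
                exposure[k] += 1
            hits[degree[old]] += 1
        degree[old] += 1
        degree[new] += 1
    return {k: hits[k] / exposure[k] for k in hits if exposure[k]}

# toy sequence: node 0 keeps receiving links, so higher degrees get hit
seq = [(1, 0), (2, 0), (3, 0), (4, 1)]
rate = attachment_rate(seq)
```

A roughly linear trend of the estimated rate against k is the signature of preferential attachment reported in the record.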

  8. Earthquake correlations and networks: A comparative study

    International Nuclear Information System (INIS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-01-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
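
A correlation metric of the Baiesi-Paczuski type can be sketched as the inverse of the expected number of events in the space-time-magnitude window separating two earthquakes; causally connected pairs are then the most correlated ones. The parameter values (b-value, fractal dimension) and units below are typical choices for illustration, not the values fitted in this study.

```python
import math

def correlation(t_days, r_km, mag, b=1.0, d_f=1.6):
    """Correlation between an earlier event of magnitude `mag` and a
    later event separated by `t_days` and `r_km`, in the spirit of the
    Baiesi-Paczuski metric: the inverse of the expected number of
    comparable events in that space-time window (parameters are
    typical illustrative values)."""
    n_ij = t_days * (r_km ** d_f) * 10.0 ** (-b * mag)  # expected count
    return 1.0 / n_ij  # large value -> strongly correlated pair

def strongest_predecessor(later, earlier_list):
    """Link the later event (t, x, y, mag) to its most correlated
    predecessor, as done when building the recurrence network."""
    t1, x1, y1, _ = later
    best, best_c = None, -1.0
    for idx, (t0, x0, y0, m0) in enumerate(earlier_list):
        c = correlation(t1 - t0, math.hypot(x1 - x0, y1 - y0), m0)
        if c > best_c:
            best, best_c = idx, c
    return best

# a nearby recent M6 beats a distant older M4 as the likely trigger
events = [(0.0, 0.0, 0.0, 6.0), (1.0, 100.0, 0.0, 4.0)]
parent = strongest_predecessor((2.0, 1.0, 0.0, 3.0), events)
```

Applying a threshold to this correlation is what prunes the network down to the "most meaningful" recurrences in each cluster.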

  9. Mesh network simulation

    OpenAIRE

    Pei, Ping; Petrenko, Y. N.

    2015-01-01

A Mesh network simulation framework, which provides a powerful and concise modeling chain for a network structure, is introduced in this report. Mesh networks have a special topological structure. The paper investigates message transfer in wireless mesh network simulation and how it works in cellular network simulation. Finally, the experimental results show that mesh networks differ from cellular networks in their transmission principle, and multi...

  10. Network similarity and statistical analysis of earthquake seismic data

    OpenAIRE

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...
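
The similarity measure named in this record, the Jensen-Shannon divergence, can be computed directly once each network's graph spectrum is normalized into a probability distribution; a minimal sketch (the inputs are assumed already normalized):

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions --
    the measure applied to normalized graph spectra in the record
    above.  0 for identical distributions, 1 bit for disjoint ones."""
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    def kl(a, b):  # Kullback-Leibler divergence in bits
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = jsd([0.5, 0.5], [0.5, 0.5])
diff = jsd([1.0, 0.0], [0.0, 1.0])
```

Because JSD is symmetric and bounded, the pairwise values form a proper distance-like matrix on which hierarchical clustering of the regional networks can be run.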

  11. Packet Tracer network simulator

    CERN Document Server

    Jesin, A

    2014-01-01

A practical, fast-paced guide that gives you all the information you need to successfully create networks and simulate them using Packet Tracer. Packet Tracer Network Simulator is aimed at students, instructors, and network administrators who wish to use this simulator to learn how to perform networking instead of investing in expensive, specialized hardware. This book assumes that you have a good amount of Cisco networking knowledge, and it will focus more on Packet Tracer rather than networking.

  12. Complex networks of earthquakes and aftershocks

    Directory of Open Access Journals (Sweden)

    M. Baiesi

    2005-01-01

We invoke a metric to quantify the correlation between any two earthquakes. This provides a simple and straightforward alternative to using space-time windows to detect aftershock sequences and obviates the need to distinguish main shocks from aftershocks. Directed networks of earthquakes are constructed by placing a link, directed from the past to the future, between pairs of events that are strongly correlated. Each link has a weight giving the relative strength of correlation such that the sum over the incoming links to any node equals unity for aftershocks, or zero if the event had no correlated predecessors. A correlation threshold is set to drastically reduce the size of the data set without losing significant information. Events can be aftershocks of many previous events, and also generate many aftershocks. The probability distributions for the numbers of incoming and outgoing links are both scale-free, and the networks are highly clustered. The Omori law holds for aftershock rates up to a decorrelation time that scales with the magnitude, m, of the initiating shock as t_cutoff ~ 10^(βm), with β ≈ 3/4. Another scaling law relates distances between earthquakes and their aftershocks to the magnitude of the initiating shock. Our results are inconsistent with the hypothesis of finite aftershock zones. We also find evidence that seismicity is dominantly triggered by small earthquakes. Our approach, using concepts from the modern theory of complex networks together with a metric to estimate correlations, opens up new avenues of research, as well as new tools to understand seismicity.

  13. Simulation of rockfalls triggered by earthquakes

    Science.gov (United States)

    Kobayashi, Y.; Harp, E.L.; Kagawa, T.

    1990-01-01

A computer program to simulate the downslope movement of boulders in rolling or bouncing modes has been developed and applied to actual rockfalls triggered by the Mammoth Lakes, California, earthquake sequence in 1980 and the Central Idaho earthquake in 1983. In order to reproduce a movement mode where bouncing predominated, we introduced an artificial unevenness to the slope surface by adding a small random number to the interpolated value of the mid-points between the adjacent surveyed points. Three hundred simulations were computed for each site by changing the random number series, which determined distances and bouncing intervals. The movement of the boulders was, in general, rather erratic depending on the random numbers employed, and the results should be seen as stochastic rather than deterministic. The closest agreement between calculated and actual movements was obtained at the site with the most detailed and accurate topographic measurements. © 1990 Springer-Verlag.
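
The artificial unevenness described in this record, a small random number added to interpolated mid-points between adjacent surveyed points, can be sketched as follows; the profile format and the perturbation amplitude are illustrative assumptions.

```python
import random

def roughen_profile(surveyed, amplitude=0.05, seed=1):
    """Refine a surveyed slope profile of (x, z) points by inserting
    mid-points with a small random vertical offset -- mimicking the
    artificial unevenness the rockfall simulation adds so that a
    bouncing mode can develop (amplitude is illustrative)."""
    rng = random.Random(seed)
    out = []
    for (x0, z0), (x1, z1) in zip(surveyed, surveyed[1:]):
        out.append((x0, z0))
        mid_x, mid_z = 0.5 * (x0 + x1), 0.5 * (z0 + z1)
        out.append((mid_x, mid_z + rng.uniform(-amplitude, amplitude)))
    out.append(surveyed[-1])
    return out

profile = roughen_profile([(0.0, 10.0), (10.0, 5.0), (20.0, 0.0)])
```

Re-running the simulation with a different seed changes the perturbations, which is exactly why the record treats the three hundred runs per site as a stochastic ensemble rather than a single deterministic trajectory.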

  14. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (USA). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  15. Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.

    1997-01-01

This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour...... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results....

  16. CAISSON: Interconnect Network Simulator

    Science.gov (United States)

    Springer, Paul L.

    2006-01-01

Cray's response to the HPCS initiative: a model of a future petaflop computer interconnect. CAISSON uses parallel discrete-event simulation techniques for large-scale network simulation and is built on the WarpIV engine. It runs on machines ranging from a laptop to an Altix 3000 and can be sized up to 1000 simulated nodes per host node, with good parallel scaling characteristics. It is flexible, supporting multiple injectors, arbitration strategies, queue iterators, and network topologies.

  17. Progress in Computational Simulation of Earthquakes

    Science.gov (United States)

    Donnellan, Andrea; Parker, Jay; Lyzenga, Gregory; Judd, Michele; Li, P. Peggy; Norton, Charles; Tisdale, Edwin; Granat, Robert

    2006-01-01

GeoFEST(P) is a computer program written for use in the QuakeSim project, which is devoted to the development and improvement of means of computational simulation of earthquakes. GeoFEST(P) models interacting earthquake fault systems from the fault-nucleation to the tectonic scale. The development of GeoFEST(P) has involved the coupling of two programs: GeoFEST and the Pyramid Adaptive Mesh Refinement Library. GeoFEST is a message-passing-interface-parallel code that utilizes a finite-element technique to simulate the evolution of stress, fault slip, and plastic/elastic deformation in realistic materials like those of faulted regions of the crust of the Earth. The products of such simulations are synthetic observable time-dependent surface deformations on time scales from days to decades. The Pyramid Adaptive Mesh Refinement Library is a software library that facilitates the generation of computational meshes for solving physical problems. In an application of GeoFEST(P), a computational grid can be dynamically adapted as stress grows on a fault. Simulations that once used a few tens of thousands of stress and displacement finite elements on workstations can now be expanded to multiple millions of elements with greater than 98-percent scaled efficiency on many hundreds of parallel processors.

  18. Interactive Visualization to Advance Earthquake Simulation

    Science.gov (United States)

    Kellogg, Louise H.; Bawden, Gerald W.; Bernardin, Tony; Billen, Magali; Cowgill, Eric; Hamann, Bernd; Jadamec, Margarete; Kreylos, Oliver; Staadt, Oliver; Sumner, Dawn

    2008-04-01

The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth’s surface and interior. Virtual mapping tools allow virtual “field studies” in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method’s strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations.

  19. Interactive visualization to advance earthquake simulation

    Science.gov (United States)

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  20. A note on adding viscoelasticity to earthquake simulators

    Science.gov (United States)

    Pollitz, Fred

    2017-01-01

    Here, I describe how time‐dependent quasi‐static stress transfer can be implemented in an earthquake simulator code that is used to generate long synthetic seismicity catalogs. Most existing seismicity simulators use precomputed static stress interaction coefficients to rapidly implement static stress transfer in fault networks with typically tens of thousands of fault patches. The extension to quasi‐static deformation, which accounts for viscoelasticity of Earth’s ductile lower crust and mantle, involves the precomputation of additional interaction coefficients that represent time‐dependent stress transfer among the model fault patches, combined with defining and evolving additional state variables that track this stress transfer. The new approach is illustrated with application to a California‐wide synthetic fault network.
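
The precomputed-coefficient scheme this record describes can be sketched with a static interaction matrix plus a second matrix whose contribution relaxes in over time. A single relaxation time stands in for the full viscoelastic response of the lower crust and mantle, and all matrices and values below are illustrative assumptions, not the paper's formulation.

```python
import math

def patch_stress(t_years, slips, k_static, k_visco, tau=30.0):
    """Stress change on each fault patch from co-seismic slips: an
    elastic part (static interaction coefficients) plus a viscoelastic
    part that relaxes in with time constant `tau` -- a toy version of
    the precomputed-coefficient scheme described above."""
    relax = 1.0 - math.exp(-t_years / tau)  # fraction of relaxation done
    n = len(slips)
    return [
        sum((k_static[i][j] + relax * k_visco[i][j]) * slips[j] for j in range(n))
        for i in range(n)
    ]

k_s = [[0.0, 1.0], [1.0, 0.0]]   # elastic coupling between two patches
k_v = [[0.0, 0.5], [0.5, 0.0]]   # extra transfer after full relaxation
early = patch_stress(0.0, [1.0, 0.0], k_s, k_v)     # co-seismic only
late = patch_stress(1000.0, [1.0, 0.0], k_s, k_v)   # fully relaxed
```

In a real simulator the relaxing part is tracked with state variables that are updated as the catalog evolves, so the coefficients never need to be recomputed.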

  1. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

Earthquakes are one of the most destructive natural hazards on our planet. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw 9.0) and the 26 December 2004 Sumatra (Mw 9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably infeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas such as California and Japan, scientists have over the past several decades been using numerical simulations on ever-advancing modern computers to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media. In particular, ground motion simulations for past and (possible) future significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and the design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and more economical structures in earthquake-prone regions.

  2. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    Science.gov (United States)

    Pasten, D.

    2017-12-01

Earthquake complex networks have been shown to reveal specific features in seismic data sets. In space, these networks show scale-free behavior in the probability distribution of connectivity for directed networks, and small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake in northern Chile (near Iquique) in April 2014. An earthquake complex network is built by dividing three-dimensional space into cubic cells; if a cell contains a hypocenter, we call that cell a node. Connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. This yields two different networks, one directed and one undirected. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: we consider the connectivity k_i of the i-th node to be the number of connections going out of node i plus the self-connections (if two seismic events occur successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network; for undirected networks, we consider only whether two nodes are connected. We have built a directed and an undirected complex network, before and after the large earthquake in Iquique, using magnitudes greater than Mw = 1.0 and Mw = 3.0. We found that this method can recognize the influence of these small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake on this complex system. The method also shows a difference in the values of the critical exponent γ (for the probability
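
The cell-based construction described in this record can be sketched directly: hypocenters are binned into cubic cells, consecutive events link their cells, and self-connections count toward the out-connectivity k_i. The cell size and coordinate format are illustrative.

```python
from collections import defaultdict

def build_networks(hypocenters, cell=10.0):
    """Build directed and undirected earthquake networks from a
    time-ordered list of (x, y, z) hypocenters: each occupied cubic
    cell is a node, and consecutive events link their cells."""
    def node(p):
        return tuple(int(c // cell) for c in p)
    directed = defaultdict(int)   # (from, to) -> link count, self-links kept
    for a, b in zip(hypocenters, hypocenters[1:]):
        directed[(node(a), node(b))] += 1
    # undirected version: drop directions and self-connections
    undirected = {frozenset(e) for e in directed if e[0] != e[1]}
    out_degree = defaultdict(int)
    for (a, _b), w in directed.items():
        out_degree[a] += w        # self-connections count toward k_i
    return directed, undirected, out_degree

quakes = [(1, 1, 1), (2, 2, 2), (15, 1, 1), (1, 2, 1)]
d, u, k = build_networks(quakes)
```

Varying `cell` changes which events share a node, which is why the record identifies cell size as a key factor in how the large earthquake's influence shows up in the network.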

  3. Evaluation and optimization of seismic networks and algorithms for earthquake early warning – the case of Istanbul (Turkey)

    OpenAIRE

    Oth, Adrien; Böse, Maren; Wenzel, Friedemann; Köhler, Nina; Erdik, Mustafa

    2010-01-01

Earthquake early warning (EEW) systems should provide reliable warnings as quickly as possible with a minimum number of false and missed alarms. Using the example of the megacity Istanbul and based on a set of simulated scenario earthquakes, we present a novel approach for evaluating and optimizing seismic networks for EEW, in particular in regions with few instrumentally recorded earthquakes. We show that, while the current station locations of the existing Istanbul EEW system...

  4. Airport Network Flow Simulator

    Science.gov (United States)

    1978-10-01

    The Airport Network Flow Simulator is a FORTRAN IV simulation of the flow of air traffic in the nation's 600 commercial airports. It calculates for any group of selected airports: (a) the landing and take-off (Type A) delays; and (b) the gate departu...

  5. A simulation of earthquake induced undrained pore pressure ...

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Earth System Science; Volume 112; Issue 3. A simulation of earthquake induced undrained pore pressure changes with bearing on some soil liquefaction observations following the 2001 Bhuj earthquake. Irene Sarkar Ramesh Chander. Volume 112 Issue 3 September 2003 pp 471-477 ...

  6. Non-universal critical exponents in earthquake complex networks

    Science.gov (United States)

    Pastén, Denisse; Torres, Felipe; Toledo, Benjamín A.; Muñoz, Víctor; Rogan, José; Valdivia, Juan Alejandro

    2018-02-01

    The problem of universality of critical exponents in complex networks is studied based on networks built from seismic data sets. Using two data sets corresponding to Chilean seismicity (northern zone, including the 2014 Mw = 8.2 earthquake in Iquique; and central zone without major earthquakes), directed networks for each set are constructed. Connectivity and betweenness centrality distributions are calculated and found to be scale-free, with respective exponents γ and δ. The expected relation between both characteristic exponents, δ > (γ + 1)/2, is verified for both data sets. However, unlike the expectation for certain scale-free analytical complex networks, the value of δ is found to be non-universal.
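Such exponents are commonly estimated with a maximum-likelihood fit rather than a regression on the log-log histogram. The sketch below uses the continuous Clauset-Shalizi-Newman estimator on synthetic degrees; the data, seed, and function name are illustrative assumptions, not the Chilean catalogs.

```python
import math
import random

def powerlaw_exponent_mle(degrees, k_min=1.0):
    """Continuous maximum-likelihood estimate of a scale-free exponent
    gamma, with P(k) ~ k^-gamma for k >= k_min."""
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)

# Synthetic degrees drawn from P(k) ~ k^-2.5 by inverse-transform sampling:
# if u is uniform on [0, 1), then (1 - u)^(-1/(gamma - 1)) follows the law.
random.seed(1)
sample = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(10000)]
gamma = powerlaw_exponent_mle(sample)
```

With 10,000 samples the estimate recovers the input exponent 2.5 to within a few hundredths.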

  7. Social Media as Seismic Networks for the Earthquake Damage Assessment

    Science.gov (United States)

    Meletti, C.; Cresci, S.; La Polla, M. N.; Marchetti, A.; Tesconi, M.

    2014-12-01

    The growing popularity of online platforms based on user-generated content is gradually creating a digital world that mirrors the physical world. In the paradigm of crowdsensing, the crowd becomes a distributed network of sensors that allows us to understand real-life events at a quasi-real-time rate. The SoS-Social Sensing project [http://socialsensing.it/] exploits opportunistic crowdsensing, involving users in the sensing process in a minimal way, for social media emergency management, in order to obtain a very fast yet reliable estimate of the scale of the emergency to be faced. First of all, we designed and implemented a decision support system for the detection and damage assessment of earthquakes. Our system exploits the messages shared in real time on Twitter. In the detection phase, data mining and natural language processing techniques are first adopted to select meaningful and comprehensive sets of tweets. We then applied a burst detection algorithm in order to promptly identify outbreaking seismic events. Using georeferenced tweets and reported locality names, a rough epicentral determination is also possible. The results, compared to official Italian INGV reports, show that the system is able to detect, within seconds, events of a magnitude in the region of 3.5 with a precision of 75% and a recall of 81.82%. We then focused our attention on the damage assessment phase. We investigated the possibility of exploiting social media data to estimate earthquake intensity. We designed a set of predictive linear models and evaluated their ability to map the intensity of worldwide earthquakes. The models build on a dataset of almost 5 million tweets exploited to compute our earthquake features, and more than 7,000 globally distributed earthquakes, acquired in a semi-automatic way from USGS, serving as ground truth. We extracted 45 distinct features falling into four categories: profile, tweet, time and linguistic. We run diagnostic tests and

  8. US earthquake observatories: recommendations for a new national network

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    This report is the first attempt by the seismological community to rationalize and optimize the distribution of earthquake observatories across the United States. The main aim is to increase significantly our knowledge of earthquakes and the earth's dynamics by providing access to scientifically more valuable data. Other objectives are to provide a more efficient and cost-effective system of recording and distributing earthquake data and to make as uniform as possible the recording of earthquakes in all states. The central recommendation of the Panel is that the guiding concept be established of a rationalized and integrated seismograph system consisting of regional seismograph networks run for crucial regional research and monitoring purposes in tandem with a carefully designed, but sparser, nationwide network of technologically advanced observatories. Such a national system must be thought of not only in terms of instrumentation but equally in terms of data storage, computer processing, and record availability.

  9. Bayesian probabilistic network approach for managing earthquake risks of cities

    DEFF Research Database (Denmark)

    Bayraktarli, Yahya; Faber, Michael

    2011-01-01

    This paper considers the application of Bayesian probabilistic networks (BPNs) to large-scale risk based decision making in regard to earthquake risks. A recently developed risk management framework is outlined which utilises Bayesian probabilistic modelling, generic indicator based risk models...... and a fourth module on the consequences of an earthquake. Each of these modules is integrated into a BPN. Special attention is given to aggregated risk, i.e. the risk contribution from assets at multiple locations in a city subjected to the same earthquake. The application of the methodology is illustrated...

  10. Discrimination between earthquakes and chemical explosions using artificial neural networks

    International Nuclear Information System (INIS)

    Kundu, Ajit; Bhadauria, Y.S.; Roy, Falguni

    2012-05-01

    An Artificial Neural Network (ANN) for discriminating between earthquakes and chemical explosions located at epicentral distances Δ < 5 deg from the Gauribidanur Array (GBA) has been developed using the short-period digital seismograms recorded at GBA. For training the ANN, spectral amplitude ratios between the P and Lg phases, computed at 13 different frequencies in the 2-8 Hz range for 20 earthquakes and 23 chemical explosions, were used along with other parameters such as magnitude, epicentral distance, and the amplitude ratios Rg/P and Rg/Lg. After training and development, the ANN correctly identified a set of 21 test events comprising 6 earthquakes and 15 chemical explosions. (author)

  11. Simulated Associating Polymer Networks

    Science.gov (United States)

    Billen, Joris

    Telechelic associating polymer networks consist of polymer chains terminated by endgroups that have a different chemical composition than the polymer backbone. When dissolved in a solution, the endgroups cluster together to form aggregates. At low temperature, a strongly connected reversible network is formed and the system behaves like a gel. Telechelic networks are of interest since they are representative of biopolymer networks (e.g. F-actin) and are widely used in medical applications (e.g. hydrogels for tissue engineering, wound dressings) and consumer products (e.g. contact lenses, paint thickeners). In this thesis such systems are studied by means of a molecular dynamics/Monte Carlo simulation. First, the system at rest is studied by means of graph theory, and the changes in network topology upon cooling to the gel state are characterized. To this end, an extensive study of the eigenvalue spectrum of the gel network is performed. As a result, an in-depth investigation of the eigenvalue spectra of spatial ER, scale-free, and small-world networks is carried out. Next, the gel under the application of a constant shear is studied, with a focus on shear banding and the changes in topology under shear. Finally, the relation between the gel transition and percolation is discussed.

  12. Preliminary Results from SCEC Earthquake Simulator Comparison Project

    Science.gov (United States)

    Tullis, T. E.; Barall, M.; Richards-Dinger, K. B.; Ward, S. N.; Heien, E.; Zielke, O.; Pollitz, F. F.; Dieterich, J. H.; Rundle, J. B.; Yikilmaz, M. B.; Turcotte, D. L.; Kellogg, L. H.; Field, E. H.

    2010-12-01

    Earthquake simulators are computer programs that simulate long sequences of earthquakes. If such simulators could be shown to produce synthetic earthquake histories that are good approximations to actual earthquake histories, they could be of great value in helping to anticipate the probabilities of future earthquakes and so could play an important role in public policy decisions. Consequently, it is important to discover how realistic the earthquake histories that result from these simulators are. One way to do this is to compare their behavior with the limited knowledge we have from the instrumental, historic, and paleoseismic records of past earthquakes. Another way, though a slow one for large events, is to use them to make predictions about future earthquake occurrence and to evaluate how well the predictions match what occurs. A final approach is to compare the results of many varied earthquake simulators to determine the extent to which the results depend on the details of the approaches and assumptions made by each simulator. Five independently developed simulators, capable of running simulations on complicated geometries containing multiple faults, are in use by some of the authors of this abstract. Although similar in their overall purpose and design, these simulators differ widely in many important details. Each requires as input, for each fault element, a value for the average slip rate as well as values for friction parameters or the stress reduction due to slip. They share the use of the boundary element method to compute stress transfer between elements; none uses dynamic stress transfer by seismic waves. A notable difference is the assumption different simulators make about the constitutive properties of the faults. The earthquake simulator comparison project is designed to allow comparisons among the simulators and between the simulators and past earthquake history. The project uses sets of increasingly detailed

  13. Simulating Earthquake Early Warning Systems in the Classroom as a New Approach to Teaching Earthquakes

    Science.gov (United States)

    D'Alessio, M. A.

    2010-12-01

    A discussion of P- and S-waves seems a ubiquitous part of studying earthquakes in the classroom. Textbooks from middle school through university level typically define the differences between the waves and illustrate the sense of motion. While many students successfully memorize the differences between wave types (often utilizing the first letter as a memory aide), textbooks rarely give tangible examples of how the two waves would "feel" to a person sitting on the ground. One reason for introducing the wave types is to explain how to calculate earthquake epicenters using seismograms and travel time charts -- very abstract representations of earthquakes. Even when the skill is mastered using paper-and-pencil activities or one of the excellent online interactive versions, locating an epicenter simply does not excite many of our students because it evokes little emotional impact, even in students located in earthquake-prone areas. Despite these limitations, huge numbers of students are mandated to complete the task. At the K-12 level, California requires that all students be able to locate earthquake epicenters in Grade 6; in New York, the skill is a required part of the Regents Examination. Recent innovations in earthquake early warning systems around the globe give us the opportunity to address the same content standard, but with substantially more emotional impact on students. I outline a lesson about earthquakes focused on earthquake early warning systems. The introductory activities include video clips of actual earthquakes and emphasize the differences between the way P- and S-waves feel when they arrive (P arrives first, but is weaker). I include an introduction to the principle behind earthquake early warning (including a summary of possible uses of a few seconds' warning about strong shaking) and show examples from Japan. Students go outdoors to simulate P-waves, S-waves, and occupants of two different cities who are talking to one another on cell phones

  14. Bayesian probabilistic network approach for managing earthquake risks of cities

    DEFF Research Database (Denmark)

    Bayraktarli, Yahya; Faber, Michael

    2011-01-01

    This paper considers the application of Bayesian probabilistic networks (BPNs) to large-scale risk based decision making in regard to earthquake risks. A recently developed risk management framework is outlined which utilises Bayesian probabilistic modelling, generic indicator based risk models...... and geographical information systems. The proposed framework comprises several modules: A module on the probabilistic description of potential future earthquake shaking intensity, a module on the probabilistic assessment of spatial variability of soil liquefaction, a module on damage assessment of buildings...... and a fourth module on the consequences of an earthquake. Each of these modules is integrated into a BPN. Special attention is given to aggregated risk, i.e. the risk contribution from assets at multiple locations in a city subjected to the same earthquake. The application of the methodology is illustrated...

  15. Earthquake Complex Network applied along the Chilean Subduction Zone.

    Science.gov (United States)

    Martin, F.; Pasten, D.; Comte, D.

    2017-12-01

    In recent years, earthquake complex networks have been used as a tool to describe and characterize the behavior of seismicity. The earthquake complex network is built in space by dividing the three-dimensional volume into cubic cells. If a cubic cell contains a hypocenter, we call that cell a node. The connections between nodes follow the time sequence of occurrence of the seismic events. In this sense, we obtain a spatio-temporal configuration of a specific region from the seismicity in that zone. In this work, we apply complex networks to characterize the subduction zone along the coast of Chile using two networks: a directed and an undirected network. The directed network takes into account the time direction of the connections, which is important for the connectivity of the network: we consider the connectivity ki of the i-th node as the number of connections going out from node i plus its self-connections (if two successive seismic events occur in the same cubic cell, we have a self-connection). The undirected network is the result of removing the directions of the connections and the self-connections from the directed network. These two networks were built using seismic events recorded by the CSN (Chilean Seismological Center). The analysis includes the latest large earthquakes, in Iquique (April 2014) and Illapel (September 2015). The results for the directed network show a change in the value of the critical exponent along the Chilean coast. The results for the undirected network show small-world behavior without important changes in the topology of the network. The complex network analysis thus offers a new way to characterize the Chilean subduction zone with a simple method whose results can be compared with those of other methods to obtain more details about the behavior of seismicity in this region.

  16. Tsunamigenic earthquake simulations using experimentally derived friction laws

    Science.gov (United States)

    Murphy, S.; Di Toro, G.; Romano, F.; Scala, A.; Lorito, S.; Spagnuolo, E.; Aretusini, S.; Festa, G.; Piatanesi, A.; Nielsen, S.

    2018-03-01

    Seismological, tsunami and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth as a result of different pre-stress and frictional conditions. A wealth of earthquakes of different sizes and different source features (e.g. rupture duration) can be generated in subduction zones, including tsunami earthquakes, some of which can produce extreme tsunamigenic events. Here, we offer a geological perspective principally accounting for depth-dependent frictional conditions, while adopting a simplified distribution of on-fault tectonic pre-stress. We combine a lithology-controlled, depth-dependent experimental friction law with 2D elastodynamic rupture simulations for a Tohoku-like subduction zone cross-section. Subduction zone fault rocks are dominantly incohesive and clay-rich near the surface, transitioning to cohesive and more crystalline at depth. By randomly shifting along fault dip the location of the high shear stress regions ("asperities"), moderate to great thrust earthquakes and tsunami earthquakes are produced that are quite consistent with seismological, geodetic, and tsunami observations. As an effect of depth-dependent friction in our model, slip is confined to the high stress asperity at depth; near the surface rupture is impeded by the rock-clay transition constraining slip to the clay-rich layer. However, when the high stress asperity is located in the clay-to-crystalline rock transition, great thrust earthquakes can be generated similar to the Mw 9 Tohoku (2011) earthquake.

  17. Earthquake magnitude time series: scaling behavior of visibility networks

    Science.gov (United States)

    Aguilar-San Juan, B.; Guzmán-Vargas, L.

    2013-11-01

    We present a statistical analysis of earthquake magnitude sequences in terms of the visibility graph method. Magnitude time series from Italy, Southern California, and Mexico are transformed into networks and some organizational graph properties are discussed. Connectivities are characterized by a scale-free distribution, with a noticeable effect at large scales due to either the presence or the lack of large events. A scaling behavior is also observed between earthquake magnitude and node measures such as betweenness centrality, clustering coefficient, and nearest-neighbor connectivity. Moreover, parameters which quantify the difference between forward and backward links are proposed to evaluate the asymmetry of the visibility attachment mechanism. Our results show an alternating average behavior of these parameters as earthquake magnitude changes. Finally, we evaluate the effects of reducing the temporal and spatial windows of observation upon visibility network properties for main shocks.
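The visibility graph underlying this kind of analysis follows the natural visibility criterion of Lacasa et al.: two samples are connected if every sample between them lies strictly below the straight line joining them. A minimal sketch, with a made-up toy magnitude series:

```python
def visibility_edges(series):
    """Natural visibility graph of a time series: samples a and b 'see'
    each other if every intermediate sample c lies strictly below the
    line joining (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            # Height of the a-b line at position c, by linear interpolation.
            if all(series[c] < yb + (ya - yb) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

# Toy magnitude series: peaks at positions 1 and 3 see each other,
# but the two valleys at positions 0 and 2 are blocked by the peaks.
edges = visibility_edges([1, 3, 1, 3])
```

Degree distributions, betweenness, and the forward/backward link asymmetry discussed in the abstract are then computed on the resulting graph.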

  18. Urban MEMS based seismic network for post-earthquakes rapid disaster assessment

    Science.gov (United States)

    D'Alessandro, Antonino; Luzio, Dario; D'Anna, Giuseppe

    2014-05-01

    worship. The recorded waveforms could be promptly used to determine ground-shaking parameters, such as peak ground acceleration/velocity/displacement and Arias and Housner intensity, all of which could be used to create, a few seconds after a strong earthquake, shaking maps at the urban scale. These shaking maps would make it possible to quickly identify the areas of the town center where the earthquake was felt most strongly. When a strong seismic event occurs, the beginning of the ground motion observed at a site could be used to predict the ensuing ground motion at the same site, and thus to realize a short-term earthquake early warning system. The data acquired after a moderate-magnitude earthquake would also provide valuable information for a detailed seismic microzonation of the area based on direct earthquake shaking observations rather than on model-based or indirect methods. In this work, we evaluate the feasibility and effectiveness of such a seismic network, taking into account technological, scientific and economic issues. For this purpose, we have simulated the creation of a MEMS-based urban seismic network in a medium-sized city. For the selected town, taking into account the instrumental specifications, the array geometry and the environmental noise, we investigated the ability of the planned network to detect and measure earthquakes of different magnitudes generated by realistic nearby seismogenic sources.

  19. The 2016 Kaikōura Earthquake Revealed by Kinematic Source Inversion and Seismic Wavefield Simulations: Slow Rupture Propagation on a Geometrically Complex Crustal Fault Network

    Science.gov (United States)

    Holden, C.; Kaneko, Y.; D'Anastasio, E.; Benites, R.; Fry, B.; Hamling, I. J.

    2017-11-01

    The 2016 Kaikōura (New Zealand) earthquake generated large ground motions and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution using two kinematic modeling techniques based on analysis of local strong-motion and high-rate GPS data. Our kinematic models capture a complex pattern of slowly (Vr source region, mostly on the Kekerengu fault, 60 s after the origin time. Both models indicate rupture reactivation on the Kekerengu fault with the time separation of 11 s between the start of the original failure and start of the subsequent one. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.

  20. Test Problems for Coupled Earthquake-Tsunami Simulations

    Science.gov (United States)

    Behrens, Jörn; Bader, Michael; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Rahnema, Kaveh; Ulrich, Thomas; Uphoff, Carsten; Vater, Stefan; Wollherr, Stephanie; van Zelst, Iris

    2016-04-01

    For the project "Advanced Simulation of Coupled Earthquake and Tsunami Events" (ASCETE, funded by the Volkswagen Foundation), a simulation framework for coupled physics-based earthquake rupture generation with tsunami propagation and inundation has been developed. The rupture simulation is performed using an ADER discontinuous Galerkin discretization on an unstructured tetrahedral mesh. It is able to accurately represent complex geometries, is highly parallelized, and works efficiently in high-performance computing environments. An adaptive mesh discretizing the shallow water equations with a Runge-Kutta discontinuous Galerkin (RKDG) scheme subsequently allows for an accurate and efficient representation of the tsunami evolution and inundation at the coast. We aim to validate and understand this new coupled framework between the dynamic earthquake within the earth's crust and the resulting tsunami wave within the ocean using a simplified model setup. The earthquake setup includes a planar, shallowly dipping subduction fault with linear depth-dependent initial stress and strength in a homogeneous elastic medium. Resulting sea floor displacements along an initially planar (and later realistic) bathymetry profile are transferred to the tsunami setup with an initially simple coastal run-up profile. We present preliminary evaluations of the rupture behavior and its interaction with the hydrodynamic wave propagation and coastal inundation. Once validated in this simplified setup, we will constrain the earthquake initial stress and strength conditions from realistic and physically consistent seismo-thermo-mechanical modeling on long timescales.

  1. Preseismic TEC Changes for Tohoku-Oki Earthquake: Comparisons Between Simulations and Observations

    Directory of Open Access Journals (Sweden)

    Cheng-Ling Kuo

    2015-01-01

    Heki (2011) reported that the Japanese Global Positioning System (GPS) dense network detected a precursory positive total electron content (TEC) anomaly, with ΔTEC ~3 TECU, ~40 minutes before the Tohoku-Oki earthquake (Mw 9.0). Similar preseismic TEC anomalies were also observed before the 2010 Chile earthquake (Mw 8.8), the 2004 Sumatra-Andaman earthquake (Mw 9.2) and the 1994 Hokkaido-Toho-Oki earthquake (Mw 8.3). In this paper we apply our improved lithosphere-atmosphere-ionosphere (LAI) coupling model to compute the TEC variations and compare the simulation results with the reported TEC observations. For the Tohoku-Oki earthquake simulations we assumed that the stress-associated current started ~40 minutes before the earthquake, increased linearly, and reached its maximum magnitude at the time of the earthquake main shock. It is suggested that a dynamo current density of ~25 nA m-2 is required to produce the observed ΔTEC of ~3 TECU.

  2. Earthquake Risk Reduction to Istanbul Natural Gas Distribution Network

    Science.gov (United States)

    Zulfikar, Can; Kariptas, Cagatay; Biyikoglu, Hikmet; Ozarpa, Cevat

    2017-04-01

    Istanbul Natural Gas Distribution Corporation (IGDAS) is one of the end users of the Istanbul Earthquake Early Warning (EEW) signal. IGDAS, the primary natural gas provider in Istanbul, operates an extensive system of 9,867 km of gas lines with 750 district regulators and 474,000 service boxes. The natural gas reaches the Istanbul city borders at 70 bar in a 30-inch-diameter steel pipeline. The gas pressure is reduced to 20 bar in RMS stations and distributed to district regulators inside the city. 110 of the 750 district regulators are instrumented with strong-motion accelerometers in order to cut the gas flow during an earthquake if the ground motion parameters exceed certain threshold levels. Also, state-of-the-art protection systems automatically cut the natural gas flow when breaks in the gas pipelines are detected. IGDAS uses a sophisticated SCADA (supervisory control and data acquisition) system to monitor the state of health of its pipeline network. This system provides real-time information about quantities related to pipeline monitoring, including input-output pressure, drawing information, positions of station and RTU (remote terminal unit) gates, and slam-shut mechanism status at the 750 district regulator sites. The IGDAS real-time earthquake risk reduction algorithm follows 4 stages: 1) Real-time ground motion data are transmitted from 110 IGDAS and 110 KOERI (Kandilli Observatory and Earthquake Research Institute) acceleration stations to the IGDAS SCADA center and the KOERI data center. 2) During an earthquake, EEW information is sent from the IGDAS SCADA center to the IGDAS stations. 3) Automatic shut-off is applied at IGDAS district regulators, and calculated parameters are sent from the stations to the IGDAS SCADA center and KOERI. 4) Integrated building and gas pipeline damage maps are prepared immediately after the earthquake. Today's technology makes it possible to rapidly estimate the
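At its core, the automatic shut-off in stage 3 is a threshold test on ground-motion parameters recorded at each instrumented regulator. The sketch below is a schematic illustration only: the station names, the 0.2 g trigger level, and the function name are assumptions, not IGDAS's actual settings.

```python
def regulator_shutoff(pga_g, threshold_g=0.2):
    """Decide whether a district regulator should cut gas flow.

    pga_g: peak ground acceleration recorded at the regulator, in g.
    threshold_g: assumed trigger level (illustrative, not the IGDAS value).
    """
    return pga_g >= threshold_g

# Hypothetical PGA readings during an event; only exceedances trip shut-off.
readings = {"R-101": 0.05, "R-102": 0.31, "R-103": 0.22}
tripped = sorted(r for r, pga in readings.items() if regulator_shutoff(pga))
```

In practice such a decision would combine several parameters (e.g. PGA and PGV) and the EEW signal itself, rather than a single threshold.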

  3. A decision support system for pre-earthquake planning of lifeline networks

    Energy Technology Data Exchange (ETDEWEB)

    Liang, J.W. [Tianjin Univ. (China). Dept. of Civil Engineering

    1996-12-01

    This paper describes the framework of a decision support system for pre-earthquake planning of gas and water networks. The system is mainly based on the earthquake experiences and lessons from the 1976 Tangshan earthquake. The objective of the system is to offer countermeasures and help make decisions for the seismic strengthening, remaking, and upgrading of gas and water networks.

  4. Simulation of earthquakes with cellular automata

    Directory of Open Access Journals (Sweden)

    P. G. Akishin

    1998-01-01

    The relation between cellular automata (CA) models of earthquakes and the Burridge–Knopoff (BK) model is studied. It is shown that the CA proposed by P. Bak and C. Tang, although they have rather realistic power spectra, do not correspond to the BK model. We present a modification of the CA which establishes the correspondence with the BK model. An analytical method of studying the evolution of the BK-like CA is proposed. By this method a functional quadratic in stress release, which can be regarded as an analog of the event energy, is constructed. The distribution of seismic events with respect to this "energy" shows rather realistic behavior, even in two dimensions. Special attention is paid to two-dimensional automata; the physical restrictions on compression and shear stiffnesses are imposed.

  5. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  6. Dynamic fracture network around faults: implications for earthquake ruptures, ground motion and energy budget

    Science.gov (United States)

    Okubo, K.; Bhat, H. S.; Rougier, E.; Lei, Z.; Knight, E. E.; Klinger, Y.

    2017-12-01

    Numerous studies have suggested that spontaneous earthquake ruptures can dynamically induce failure in a secondary fracture network, regarded as the damage zone around faults. The feedback from such a fracture network plays a crucial role in earthquake rupture, its radiated wave field and the total energy budget. A novel numerical modeling tool based on the combined finite-discrete element method (FDEM), which accounts for the main rupture propagation and the nucleation/propagation of secondary cracks, was used to quantify the evolution of the fracture network and evaluate its effects on the main rupture and its associated radiation. The simulations were performed with the FDEM-based software tool Hybrid Optimization Software Suite (HOSSedu), developed by Los Alamos National Laboratory. We first modeled an earthquake rupture on a planar strike-slip fault surrounded by a brittle medium where secondary cracks can be nucleated/activated by the earthquake rupture. We show that secondary cracks are generated dynamically, predominantly on the extensional side of the fault and mainly behind the rupture front, forming an intricate network of fractures in the damage zone. The rupture velocity thereby decreases significantly, by 10 to 20 percent, while the supershear transition length increases in comparison with a purely elastic medium. We also observe that the high-frequency component (10 to 100 Hz) of the near-field ground acceleration is enhanced by the dynamically activated fracture network, consistent with field observations. We then conducted in-depth case studies with various sets of initial stress states and friction properties to investigate the evolution of the damage zone. We show that the width of the damage zone decreases with depth, forming a "flower-like" structure when the characteristic slip distance in the linear slip-weakening law, or the fracture energy on the fault, is kept constant with depth. Finally, we compared the fracture energy on the fault to the energy

  7. Artificial Neural Networks for Earthquake Early-Warning

    Science.gov (United States)

    Boese, M.; Erdik, M.; Wenzel, F.

    2003-12-01

    The rapid urbanization and industrial development in areas of high seismic hazard increase the threat to human life and the vulnerability of industrial facilities in earthquakes. As earthquake prediction is elusive and, most likely, will not be achievable in the near future, early-warning systems play a key role in earthquake loss reduction. Seismic waves propagate with significantly lower velocity than the speed at which information about them can be passed along to a vulnerable area or facility using modern telemetry systems. Within the shortest possible time, an earthquake early-warning system estimates the ground motion that will be caused by the oscillating seismic waves in the endangered area. Depending on the predicted possible damage, appropriate automatic loss-reduction measures (such as the stoppage of trains or the interruption of gas pipelines) are triggered and executed some seconds to minutes before the devastating waves actually arrive. The Turkish megacity Istanbul faces a seismic hazard of particular severity due to its proximity to the complex fault system in the Marmara region. The likelihood of a seismic event of moment magnitude above 7.2 occurring within the next 30 years is estimated to be 70%. The Istanbul Earthquake Rapid Response and Early-Warning System (IERREWS) is an important contribution to preparedness for future earthquakes in the region. The system is operated by the Kandilli Observatory and the Earthquake Research Institute of Bogazici University in cooperation with other agencies. The early-warning part of IERREWS consists of ten strong-motion stations with 24-bit resolution, communication links and processing facilities. The accelerometers are installed on the shoreline of the Marmara Sea and are operated in on-line mode for continuous and near-real-time transfer of data. Using the example of the IERREWS station configuration and the seismic background of the Marmara region, we present an approach that treats the problem of earthquake early-warning as a pattern

  8. Neural Network Methodology for Earthquake Early Warning - first applications

    Science.gov (United States)

    Wenzel, F.; Koehler, N.; Cua, G.; Boese, M.

    2007-12-01

    PreSEIS is a method for earthquake early warning for finite faults (Böse, 2006) that is based on Artificial Neural Networks (ANNs), which are used to map seismic observations onto likely source parameters, including the moment magnitude and the location of an earthquake. PreSEIS integrates all available information on ground shaking at the different sensors in a seismic network and updates the estimates of seismic source parameters regularly as time proceeds. PreSEIS has been developed and tested with synthetic waveform data using the example of Istanbul, Turkey (Böse, 2006). We present first results of the application of PreSEIS to real data from Southern California, recorded at stations of the Southern California Seismic Network. The dataset consists of 69 shallow local earthquakes with moment magnitudes ranging between 1.96 and 7.1. The data come from broadband (20 or 40 Hz) or high-broadband (80 or 100 Hz), high-gain, 3-component channels. The Southern California dataset allows a comparison of our results to those of the Virtual Seismologist (Cua, 2004). We used the envelopes of the waveforms defined by Cua (2004) as input for the ANNs. The envelopes were obtained by taking the maximum absolute amplitude of the recorded ground motion time history over a 1-second time window. Because not all of the considered stations recorded each earthquake, the missing records were replaced by synthetic envelopes, calculated using the envelope attenuation relationships developed by Cua (2004).
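    The 1-second maximum-absolute-amplitude envelope described in the abstract can be sketched as follows. This is an illustrative reimplementation on a synthetic trace, not the actual PreSEIS/Cua (2004) code; the sampling rate and test signal are assumed.

```python
import numpy as np

def envelope(waveform, fs, window_s=1.0):
    """Reduce a ground-motion time history to one value per window by
    taking the maximum absolute amplitude in each (1-second) window."""
    n = int(fs * window_s)
    # zero-pad so the trace divides evenly into whole windows
    pad = (-len(waveform)) % n
    padded = np.pad(np.abs(waveform), (0, pad))
    return padded.reshape(-1, n).max(axis=1)

fs = 100.0  # Hz, assumed sampling rate
t = np.arange(0, 5, 1 / fs)
trace = np.sin(2 * np.pi * 2 * t) * np.exp(-t)  # synthetic decaying signal
env = envelope(trace, fs)
print(len(env))  # 5 one-second windows
```

    Each envelope sample then becomes one input feature for the ANNs, with synthetic envelopes substituted where a station has no recording.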

  9. A geographical and multi-criteria vulnerability assessment of transportation networks against extreme earthquakes

    International Nuclear Information System (INIS)

    Kermanshah, A.; Derrible, S.

    2016-01-01

    The purpose of this study is to provide a geographical and multi-criteria vulnerability assessment method to quantify the impacts of extreme earthquakes on road networks. The method is applied to two US cities, Los Angeles and San Francisco, both of which are susceptible to severe seismic activity. Aided by the recent proliferation of data and the wide adoption of Geographic Information Systems (GIS), we adopt a data-driven approach using USGS ShakeMaps to determine vulnerable locations in road networks. To simulate an extreme earthquake, we remove road sections within the “very strong” intensity zones provided by USGS. Subsequently, we measure vulnerability as a percentage drop in four families of metrics: overall properties (length of the remaining system); topological indicators (betweenness centrality); accessibility; and travel demand using Longitudinal Employment Household Dynamics (LEHD) data. The various metrics are then plotted on a Vulnerability Surface (VS), whose area serves as an overall vulnerability indicator. This VS approach offers a simple and pertinent method to capture the impacts of extreme earthquakes. It can also be useful for planners assessing the robustness of alternative scenarios in their plans to ensure that cities located in seismic areas are better prepared to face severe earthquakes. - Highlights: • Developed a geographical and multi-criteria vulnerability assessment method. • Quantifies the impacts of extreme earthquakes on transportation networks. • Data-driven approach using USGS ShakeMaps to determine vulnerable locations. • Measures vulnerability as a percentage drop in four families of metrics: overall properties; topological indicators; accessibility; travel demand using Longitudinal Employment Household Dynamics (LEHD) data. • Developed the Vulnerability Surface (VS), a new pragmatic vulnerability indicator.
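    The "percentage drop" measure above is simple to state in code. The sketch below uses entirely hypothetical metric values (not the Los Angeles or San Francisco results) and a plain mean as a scalar stand-in for the Vulnerability Surface area.

```python
# Hypothetical illustration of the percentage-drop vulnerability measure:
# each metric family is evaluated on the intact and on the damaged network,
# and the drops are combined into one overall indicator.
def pct_drop(before, after):
    return 100.0 * (before - after) / before

metrics_before = {"length_km": 5200.0, "betweenness": 0.42,
                  "accessibility": 0.88, "travel_demand": 1.9e6}
metrics_after = {"length_km": 4100.0, "betweenness": 0.30,
                 "accessibility": 0.61, "travel_demand": 1.2e6}  # assumed values

drops = {k: pct_drop(metrics_before[k], metrics_after[k]) for k in metrics_before}
# simple scalar proxy for the VS area: mean drop across the four families
overall = sum(drops.values()) / len(drops)
```

    In the paper the four drops are instead plotted as a surface, so that the enclosed area captures joint degradation across all metric families rather than a single average.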

  10. Training Neural Networks Based on Imperialist Competitive Algorithm for Predicting Earthquake Intensity

    OpenAIRE

    Moradi, Mohsen

    2017-01-01

    In this study we determined neural network weights and biases using the Imperialist Competitive Algorithm (ICA) in order to train a network for predicting earthquake intensity on the Richter scale. For this purpose, we used parameters such as earthquake occurrence time, epicenter latitude and longitude in degrees, focal depth in kilometers, and the distance from the seismological center to the epicenter and to the earthquake focal center in kilometers, provided by the Berkeley database. The studied neural network...

  11. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  12. Assessing earthquake early warning using sparse networks in developing countries: Case study of the Kyrgyz Republic

    Science.gov (United States)

    Parolai, Stefano; Boxberger, Tobias; Pilz, Marco; Fleming, Kevin; Haas, Michael; Pittore, Massimiliano; Petrovic, Bojana; Moldobekov, Bolot; Zubovich, Alexander; Lauterjung, Joern

    2017-09-01

    The first real-time digital strong-motion network in Central Asia has been operating in the Kyrgyz Republic since 2014. Although this network consists of only 19 strong-motion stations, they are located in near-optimal locations for earthquake early warning and rapid response purposes. It is expected that this network, which utilizes the GFZ-Sentry software allowing decentralized event assessment calculations, will not only provide strong-motion data useful for improving future seismic hazard and risk assessment, but will also serve as the backbone for regional and on-site earthquake early warning operations. Based on the locations of these stations and travel-time estimates for P- and S-waves, we have determined potential lead times for several major urban areas in Kyrgyzstan (i.e., Bishkek, Osh, and Karakol) and Kazakhstan (Almaty), and find that an efficient earthquake early warning system would provide lead times outside the blind zone ranging from several seconds up to several tens of seconds. This was confirmed by simulating the shaking (and intensity) that would arise from a series of scenarios based on historical and expected events, and how they affect the major urban centres. Such lead times would allow the instigation of automatic mitigation procedures, while the system as a whole would support prompt and efficient actions over large areas.
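    The lead-time logic behind such estimates can be sketched in a few lines: the alert is available once the P wave reaches the nearest station plus a processing delay, while damaging S waves travel more slowly toward the city. The velocities, delay, and distances below are illustrative assumptions, not values from the Kyrgyz network.

```python
# Back-of-the-envelope earthquake early-warning lead time.
vp, vs = 6.0, 3.5   # km/s, assumed crustal P- and S-wave velocities
processing = 3.0    # s, assumed detection + alert-dissemination latency

def lead_time(epi_to_station_km, epi_to_city_km):
    t_detect = epi_to_station_km / vp + processing  # alert issued
    t_s_arrival = epi_to_city_km / vs               # strong shaking arrives
    return max(0.0, t_s_arrival - t_detect)         # 0 inside the blind zone

# e.g. station 30 km from the epicenter, city 150 km away
print(round(lead_time(30.0, 150.0), 1))  # → 34.9
```

    The `max(0, ...)` clamp expresses the blind zone: targets close to the epicenter receive the S wave before an alert can be issued.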

  13. On the reliability of Quake-Catcher Network earthquake detections

    Science.gov (United States)

    Yildirim, Battalgazi; Cochran, Elizabeth S.; Chung, Angela I.; Christensen, Carl M.; Lawrence, Jesse F.

    2015-01-01

    Over the past two decades, there have been several initiatives to create volunteer‐based seismic networks. The Personal Seismic Network, proposed around 1990, used a short‐period seismograph to record earthquake waveforms using existing phone lines (Cranswick and Banfill, 1990; Cranswick et al., 1993). NetQuakes (Luetgert et al., 2010) deploys triaxial Micro‐Electromechanical Systems (MEMS) sensors in private homes, businesses, and public buildings where there is an Internet connection. Other seismic networks using a dense array of low‐cost MEMS sensors are the Community Seismic Network (Clayton et al., 2012; Kohler et al., 2013) and the Home Seismometer Network (Horiuchi et al., 2009). One main advantage of combining low‐cost MEMS sensors and existing Internet connections in public and private buildings over traditional networks is the reduction in installation and maintenance costs (Koide et al., 2006). In doing so, it is possible to create a dense seismic network for a fraction of the cost of traditional seismic networks (D’Alessandro and D’Anna, 2013; D’Alessandro, 2014; D’Alessandro et al., 2014).

  14. Simulation of scenario earthquake influenced field by using GIS

    Science.gov (United States)

    Zuo, Hui-Qiang; Xie, Li-Li; Borcherdt, R. D.

    1999-07-01

    The method for estimating the site effect on ground motion specified by Borcherdt (1994a, 1994b) is briefly introduced in this paper. This method, together with detailed geological and site-classification data for the San Francisco Bay area of California, United States, is applied to simulate the influenced field of a scenario earthquake using GIS technology, and the corresponding simulation software has been developed. The paper is a partial result of a cooperative research project between the China Seismological Bureau and the US Geological Survey.

  15. Earthquakes.

    Science.gov (United States)

    Pakiser, Louis C.

    One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

  16. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro

    2014-09-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  17. Natural gas network resiliency to a "shakeout scenario" earthquake.

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, James F.; Corbet, Thomas Frank,; Brooks, Robert E.

    2013-06-01

    A natural gas network model was used to assess the likely impact of a scenario San Andreas Fault earthquake on the natural gas network. Two disruption scenarios were examined. The more extensive damage scenario assumes the disruption of all three major corridors bringing gas into southern California. If withdrawals from the Aliso Canyon storage facility are limited to keep the amount of stored gas within historical levels, the disruption reduces Los Angeles Basin gas supplies by 50%. If Aliso Canyon withdrawals are only constrained by the physical capacity of the storage system to withdraw gas, the shortfall is reduced to 25%. This result suggests that it is important for stakeholders to put agreements in place facilitating the withdrawal of Aliso Canyon gas in the event of an emergency.

  18. Spin network quantum simulator

    OpenAIRE

    Marzuoli, Annalisa; Rasetti, Mario

    2002-01-01

    We propose a general setting for a universal representation of the quantum structure on which quantum information stands, whose dynamical evolution (information manipulation) is based on angular momentum recoupling theory. Such a scheme complies with the notion of a 'quantum simulator' in the sense of Feynman, and is shown to be related to the topological quantum field theory approach to quantum computation.

  19. A spectrum-compatible earthquake simulation method with simultaneous adjustment to a typical earthquake energy time distribution

    International Nuclear Information System (INIS)

    Alvarez, L.M.

    1993-01-01

    There are several aspects of structural dynamic design, such as soil-structure interaction phenomena, seismic qualification of structural and mechanical components, and non-linear dynamic behaviour of special components under strong seismic excitation, which are advantageously analyzed in the time domain. A proper application of this method requires an extensive and comprehensive database of real and/or simulated earthquake records, conveniently scaled to satisfy the target response spectrum. Most present methods for generating synthetic earthquake records are limited to adjusting the ordinates of a target response spectrum in a statistical way and do not take into account the time distribution of the energy content of real earthquakes. This paper proposes a method for the generation of artificial earthquake records based on a linear combination of a set of damped sinusoidal waves, whose control variables are amplitude, frequency, and the time shift from the record origin to the beginning of each individual wave. By means of a very simple algorithm based on the superposition of the time distributions of the energy content of every single wave, a simultaneous fitting of a target response spectrum and of a statistical time distribution of earthquake energy has been achieved. The time-energy distribution has been evaluated statistically by means of the average time distribution of the well-known Arias intensity integral over a set of more than one hundred different real earthquake records. The proposed method ensures a more realistic representation of the influence of a simulated earthquake upon a real structure because it can generate acceleration records with stationary or non-stationary frequency content. It can also provide a realistic time distribution of the earthquake's total energy content, which can be advantageously used for the analysis of the development of structural ductility, which in turn is closely related to the earthquake's time-energy distribution.
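    The record construction described above can be sketched as a superposition of damped sinusoids, with a cumulative squared-amplitude (Arias-type) integral giving the time-energy distribution. All parameter values here are illustrative, and no response-spectrum fitting step is shown; this only demonstrates the building blocks, not the paper's full algorithm.

```python
import numpy as np

def damped_wave(t, amp, freq, t0, zeta=0.05):
    """One damped sinusoid starting at time shift t0 (zero before t0)."""
    w = np.zeros_like(t)
    m = t >= t0
    tau = t[m] - t0
    w[m] = amp * np.exp(-zeta * 2 * np.pi * freq * tau) * np.sin(2 * np.pi * freq * tau)
    return w

t = np.linspace(0.0, 20.0, 2001)
# linear combination of waves with assumed (amplitude, frequency, time shift)
record = sum(damped_wave(t, a, f, t0)
             for a, f, t0 in [(1.0, 1.0, 0.5), (0.8, 2.5, 2.0), (0.5, 5.0, 4.0)])
arias = np.cumsum(record ** 2) * (t[1] - t[0])  # unnormalised Arias-type integral
arias /= arias[-1]                              # normalised time-energy distribution
```

    Fitting the method's control variables would then mean adjusting amplitudes, frequencies, and time shifts until both the response spectrum and the target `arias` curve are matched.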

  20. Development of integrated earthquake simulation system for Istanbul

    Science.gov (United States)

    Sahin, Abdurrahman; Sisman, Rafet; Askan, Aysegul; Hori, Muneo

    2016-07-01

    Recent advances in computing have brought a new and challenging way to tackle earthquake hazard and disaster problems: integration of the seismic actions in the form of numerical models. For this purpose, integrated earthquake simulation (IES) has been developed in Japan, and a new version targeting Istanbul is now being developed in Turkey. This version of IES is being built in MATLAB and includes site response analysis and structural analysis of existing buildings with data obtained via GIS databases. In this study, we present an initial application in the Zeytinburnu district of Istanbul, where the results are expressed in the form of spatial distributions of ground motion and building responses. The analysis shows that most of the buildings undergo small displacements and that the displacement values are directly proportional to the total height of the structures. Since the obtained ground motion distribution and peak values are not very high, structural damage was not observed under the current simulation. The effect of bedrock depth and soil parameters on the strong ground motion distribution has been observed. The locations with the strongest ground motion in the selected area have been determined, and the critical buildings with maximum displacement during the earthquake motion have been detected. Currently, IES on MATLAB does not include the source-to-bedrock wave propagation mechanism and the resulting ground motions at each grid point. In future studies, alternative models for this purpose, along with input model parameters for Istanbul, will be applied. Once the source-to-structure integrated model is complete, past earthquakes as well as potential scenario events in Istanbul will be modeled in the final form of IES on MATLAB. Results will be valuable for a variety of purposes ranging from disaster mitigation to emergency management. In a future part of this study, site vibration tests will also be made for buildings that do not comply with

  1. MyShake: A smartphone seismic network for earthquake early warning and beyond.

    Science.gov (United States)

    Kong, Qingkai; Allen, Richard M; Schreier, Louis; Kwon, Young-Woo

    2016-02-01

    Large magnitude earthquakes in urban environments continue to kill and injure tens to hundreds of thousands of people, inflicting lasting societal and economic disasters. Earthquake early warning (EEW) provides seconds to minutes of warning, allowing people to move to safe zones and automated slowdown and shutdown of transit and other machinery. The handful of EEW systems operating around the world use traditional seismic and geodetic networks that exist only in a few nations. Smartphones are much more prevalent than traditional networks and contain accelerometers that can also be used to detect earthquakes. We report on the development of a new type of seismic system, MyShake, that harnesses personal/private smartphone sensors to collect data and analyze earthquakes. We show that smartphones can record magnitude 5 earthquakes at distances of 10 km or less and develop an on-phone detection capability to separate earthquakes from other everyday shakes. Our proof-of-concept system then collects earthquake data at a central site where a network detection algorithm confirms that an earthquake is under way and estimates the location and magnitude in real time. This information can then be used to issue an alert of forthcoming ground shaking. MyShake could be used to enhance EEW in regions with traditional networks and could provide the only EEW capability in regions without. In addition, the seismic waveforms recorded could be used to deliver rapid microseism maps, study impacts on buildings, and possibly image shallow earth structure and earthquake rupture kinematics.

  2. Rapid response seismic networks in Europe: lessons learnt from the L'Aquila earthquake emergency

    Directory of Open Access Journals (Sweden)

    Angelo Strollo

    2011-08-01

    Full Text Available

    The largest dataset ever recorded during a normal-fault seismic sequence was acquired during the 2009 seismic emergency triggered by the damaging earthquake in L'Aquila (Italy). This was possible through the coordination of different rapid-response seismic networks in Italy, France and Germany. A seismic network of more than 60 stations recorded up to 70,000 earthquakes. Here, we describe the different open-data archives where it is possible to find this unique set of data for studies related to hazard, seismotectonics and earthquake physics. Moreover, we briefly describe some immediate and direct applications of emergency seismic networks. At the same time, we note the absence of communication platforms between the different European networks. Rapid-response networks need to agree on common strategies for network operations. Hopefully, over the next few years, the European Rapid-Response Seismic Network will become a reality.

  3. PBO Southwest Region: Baja Earthquake Response and Network Operations

    Science.gov (United States)

    Walls, C. P.; Basset, A.; Mann, D.; Lawrence, S.; Jarvis, C.; Feaux, K.; Jackson, M. E.

    2011-12-01

    The SW region of the Plate Boundary Observatory consists of 455 continuously operating GPS stations located principally along the transform system of the San Andreas fault and the Eastern California Shear Zone. In the past year, network uptime exceeded an average of 97%, with greater than 99% data acquisition. Communications range from CDMA modem (307), radio (92), Vsat (30), and DSL/T1/other (25) to manual downloads (1). Sixty-three stations stream 1 Hz data over the VRS3Net. Stations have suffered minor (theft) to moderate vandalism (solar panel stolen), with one total loss of receiver and communications gear. Security was enhanced at these sites through fencing and more secure station configurations. In the past 12 months, 4 new stations were installed to replace removed stations or to augment the network at strategic locations. Following the M7.2 El Mayor-Cucapah earthquake, CGPS station P796, a deep-drilled braced monument, was constructed in San Luis, AZ, along the border, within 5 weeks of the event. In addition, UNAVCO participated in a successful University of Arizona-led RAPID proposal for the installation of six continuous GPS stations for post-seismic observations. Six stations are installed and telemetered through a UNAM relay at the Sierra San Pedro Martir. Four of these stations have Vaisala WXT520 meteorological sensors. An additional site in the Sierra Cucapah (PTAX), built by CICESE, an Associate UNAVCO Member institution in Mexico, and Caltech, has been integrated into PBO dataflow. The stations will be maintained as part of the PBO network in coordination with CICESE. UNAVCO is working with NOAA to upgrade PBO stations with WXT520 meteorological sensors and communications systems capable of streaming real-time GPS and meteorological data. The real-time GPS and meteorological sensor data streams support watershed and flood analyses for regional early-warning systems related to NOAA's work with the California Department of Water Resources. Currently 19 stations are online and

  4. Earthquake simulations with time-dependent nucleation and long-range interactions

    Directory of Open Access Journals (Sweden)

    J. H. Dieterich

    1995-01-01

    Full Text Available A model for rapid simulation of earthquake sequences is introduced which incorporates long-range elastic interactions among fault elements and time-dependent earthquake nucleation inferred from experimentally derived rate- and state-dependent fault constitutive properties. The model consists of a planar two-dimensional fault surface which is periodic in both the x- and y-directions. Elastic interactions among fault elements are represented by an array of elastic dislocations. Approximate solutions for earthquake nucleation and dynamics of earthquake slip are introduced which permit computations to proceed in steps that are determined by the transitions from one sliding state to the next. The transition-driven time stepping and avoidance of systems of simultaneous equations permit rapid simulation of large sequences of earthquake events on computers of modest capacity, while preserving characteristics of the nucleation and rupture propagation processes evident in more detailed models. Earthquakes simulated with this model reproduce many of the observed spatial and temporal characteristics of clustering phenomena including foreshock and aftershock sequences. Clustering arises because the time dependence of the nucleation process is highly sensitive to stress perturbations caused by nearby earthquakes. Rate of earthquake activity following a prior earthquake decays according to Omori's aftershock decay law and falls off with distance.
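    The Omori decay mentioned at the end of the abstract has a standard closed form, the modified Omori law n(t) = K / (c + t)^p. A minimal sketch, with illustrative parameter values rather than values fitted to any simulation:

```python
# Modified Omori law: aftershock rate n(t) = K / (c + t)**p.
# K (productivity), c (short-time offset), and p (decay exponent, ~1)
# are illustrative here, not fitted values.
def omori_rate(t, K=100.0, c=0.1, p=1.0):
    return K / (c + t) ** p

# for t >> c the rate decays roughly as 1/t**p
r1, r10 = omori_rate(1.0), omori_rate(10.0)
```

    In the simulations described above, this decay emerges from the stress sensitivity of time-dependent nucleation rather than being imposed.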

  5. Hybrid Simulations of the Broadband Ground Motions for the 2008 MS8.0 Wenchuan, China, Earthquake

    Science.gov (United States)

    Yu, X.; Zhang, W.

    2012-12-01

    The Ms8.0 Wenchuan earthquake occurred on 12 May 2008 at 14:28 Beijing Time. It is the largest event to have occurred in mainland China since the 1976 Mw7.6 Tangshan earthquake. Because it occurred in a mountainous area, this great earthquake and the following thousands of aftershocks also caused many other geological disasters, such as landslides, mud-rock flows and "quake lakes" (landslide-induced reservoirs). These resulted in tremendous losses of life and property: casualties numbered more than 80,000 people, and there were major economic losses. However, this earthquake is the first Ms 8 intraplate earthquake with good close-fault strong-motion coverage. Over four hundred strong-motion stations of the National Strong Motion Observation Network System (NSMONS) recorded the mainshock. Twelve of them were located within 20 km of the fault traces and another 33 stations within 100 km. These observations, together with hundreds of GPS vectors and multiple ALOS InSAR images, provide an unprecedented opportunity to study the rupture process of such a great intraplate earthquake. In this study, we calculate broadband near-field synthetic waveforms of this great earthquake using a hybrid broadband ground-motion simulation methodology, which combines a deterministic approach at low frequencies (f < 1.0 Hz) with a theoretical Green's function calculation approach at high frequencies ( ~ 10.0 Hz). The fault rupture is represented kinematically and incorporates spatial heterogeneity in slip, rupture speed, and rise time obtained from an inverted kinematic source model. In addition, based on the aftershock data, we analyze the site effects for the near-field stations. Frequency-dependent site-amplification values for each station are calculated using genetic algorithms. For the calculation of the synthetic waveforms, we first carry out simulations using the hybrid methodology for frequencies up to 10.0 Hz. 
Then, we consider for

  6. The ShakeOut earthquake source and ground motion simulations

    Science.gov (United States)

    Graves, R.W.; Houston, Douglas B.; Hudnut, K.W.

    2011-01-01

    The ShakeOut Scenario is premised upon the detailed description of a hypothetical Mw 7.8 earthquake on the southern San Andreas Fault and the associated simulated ground motions. The main features of the scenario, such as its endpoints, magnitude, and gross slip distribution, were defined through expert opinion and incorporated information from many previous studies. Slip at smaller length scales, rupture speed, and rise time were constrained using empirical relationships and experience gained from previous strong-motion modeling. Using this rupture description and a 3-D model of the crust, broadband ground motions were computed over a large region of Southern California. The largest simulated peak ground acceleration (PGA) and peak ground velocity (PGV) generally range from 0.5 to 1.0 g and 100 to 250 cm/s, respectively, with the waveforms exhibiting strong directivity and basin effects. Use of a slip-predictable model results in a high static stress drop event and produces ground motions somewhat higher than median level predictions from NGA ground motion prediction equations (GMPEs).
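    The PGA and PGV values quoted above are simply the peak absolute values of the acceleration and velocity time histories. A minimal sketch on a synthetic trace (all signal parameters assumed; velocity obtained by crude numerical integration):

```python
import numpy as np

fs = 100.0  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
# synthetic decaying-sinusoid acceleration trace in m/s^2 (illustrative data)
acc = 0.6 * 9.81 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.3 * t)

pga_g = np.max(np.abs(acc)) / 9.81      # peak ground acceleration, in g
vel = np.cumsum(acc) / fs               # rectangle-rule integration to velocity
pgv_cms = np.max(np.abs(vel)) * 100.0   # peak ground velocity, cm/s
```

    Real strong-motion processing additionally applies baseline correction and filtering before integrating, which this sketch omits.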

  7. Connection with seismic networks and construction of real time earthquake monitoring system

    International Nuclear Information System (INIS)

    Chi, Heon Cheol; Lee, H. I.; Shin, I. C.; Lim, I. S.; Park, J. H.; Lee, B. K.; Whee, K. H.; Cho, C. S.

    2000-12-01

    It was natural to use the nuclear power plant seismic network operated by KEPRI (Korea Electric Power Research Institute) and the local seismic network operated by KIGAM (Korea Institute of Geology, Mining and Materials). The real-time earthquake monitoring system is composed of a monitoring module and a database module. The database module stores and classifies seismic data, while the monitoring module displays the acceleration status in the nuclear power plant area. This research targeted, first, connecting KIN's seismic monitoring system with the KIGAM and KEPRI seismic networks and, second, construction of KIN's independent earthquake monitoring system.

  8. Earthquakes

    Science.gov (United States)


  9. Prediction of Permanent Earthquake-Induced Deformation in Earth Dams and Embankments Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Kazem Barkhordari

    2015-12-01

    Full Text Available This research intends to develop a method based on the Artificial Neural Network (ANN) to predict permanent earthquake-induced deformation of earth dams and embankments. For this purpose, data sets of observations from 152 published case histories on the performance of earth dams and embankments during past earthquakes were used. In order to predict earthquake-induced deformation of earth dams and embankments, a Multi-Layer Perceptron (MLP) analysis was used. A four-layer, feed-forward, back-propagation neural network with a topology of 7-9-7-1 was found to be optimum. The results showed that an appropriately trained neural network can reliably predict permanent earthquake-induced deformation of earth dams and embankments.
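    The 7-9-7-1 topology named above can be illustrated with a plain forward pass. The weights below are random and untrained, and tanh hidden activations are an assumption (the paper does not state its activation functions), so this only shows the architecture's shape, not the trained model.

```python
import numpy as np

# Feed-forward pass through a 7-9-7-1 network: 7 inputs, two hidden
# layers (9 and 7 units), one output (predicted deformation).
rng = np.random.default_rng(0)
sizes = [7, 9, 7, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(n) for n in sizes[1:]]

def predict(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)               # hidden layers
    return x @ weights[-1] + biases[-1]      # linear output layer

y = predict(np.ones(7))
print(y.shape)  # (1,)
```

    Training by back-propagation would then adjust `weights` and `biases` against the 152 case-history observations.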

  10. Damage Level Prediction of Reinforced Concrete Building Based on Earthquake Time History Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Suryanita Reni

    2017-01-01

    Full Text Available Strong earthquake motion can damage a building if earthquake loads were not considered in its design. This study aims to predict the damage level of a building due to an earthquake using the Artificial Neural Network method. The building model is a reinforced concrete building with ten floors and a floor-to-floor height of 3.6 m. The model was subjected to earthquake loads based on nine earthquake time-history records, each scaled to 0.5 g, 0.75 g, and 1.0 g. The Artificial Neural Networks were designed in 4 architectural models using the MATLAB program. Model 1 used displacement, velocity, and acceleration as inputs; Model 2 used displacement only; Model 3 used velocity only; and Model 4 used acceleration only. The output of the Neural Networks is the damage level of the building, with the categories Safe (1), Immediate Occupancy (2), Life Safety (3), or Collapse Prevention (4). According to the results, the Neural Network models predict the damage level with an accuracy between 85% and 95%. Therefore, one solution for analyzing structural responses and the damage level promptly and efficiently when an earthquake occurs is to use an Artificial Neural Network.

  11. A study on generation of simulated earthquake ground motion for seismic design of nuclear power plant

    International Nuclear Information System (INIS)

    Ichiki, Tadaharu; Matsumoto, Takuji; Kitada, Yoshio; Osaki, Yorihiko; Kanda, Jun; Masao, Toru.

    1985-01-01

    The aseismatic design of nuclear power generation facilities in Japan must currently conform to the "Guideline for aseismatic design examination regarding power reactor facilities" issued by the Atomic Energy Commission in 1978. In this guideline, the earthquake motion used for dynamic earthquake response analysis is given in the form of a magnitude, determined from the investigation of historical earthquakes and active faults around the construction site, and response spectra corresponding to the epicentral distance. Accordingly, when dynamic earthquake response analysis is actually carried out, simulated earthquake motion generated in conformity with these specified response spectra is used as the design input earthquake motion. Research was carried out to establish techniques for generating simulated earthquake motion that are more appropriate and rational from an engineering viewpoint, and the results are summarized in this paper. The techniques for generating simulated earthquake motion, the response of buildings and the floor response spectra are described. (Kako, I.)

  12. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a greater effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and hence leads to huge computational costs. This cost is the main reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, no hereditary integrals are needed in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and the block, which obeys the RSF law, is pulled at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value: a smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with a thickness of 40 km overriding a Maxwell viscoelastic half-space
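
    The key trick of the memory variable method, replacing the hereditary integral with a first-order ODE for the dash-pot displacement, can be illustrated on a single SLS element under a step displacement. The parameter values are arbitrary, and the forward-Euler update is a minimal stand-in for a production integrator:

```python
import math

def sls_stress_relaxation(u0, k0, k1, eta, t_end, dt):
    """Stress in a standard linear solid after a step displacement u0.
    The memory variable xd (dash-pot displacement) obeys the first-order
    ODE  dxd/dt = k1*(u0 - xd)/eta,  so no hereditary integral over past
    slip rates is ever evaluated."""
    xd, t = 0.0, 0.0
    while t < t_end:
        xd += dt * k1 * (u0 - xd) / eta   # forward-Euler update of the memory variable
        t += dt
    return k0 * u0 + k1 * (u0 - xd)

k0, k1, eta, u0 = 1.0, 2.0, 5.0, 1.0      # arbitrary nondimensional values
tau = eta / k1                            # relaxation time of the SLS
sigma_num = sls_stress_relaxation(u0, k0, k1, eta, t_end=3.0, dt=1e-4)
sigma_ana = k0 * u0 + k1 * u0 * math.exp(-3.0 / tau)   # closed-form relaxation
```

    Because the SLS relaxation is a single exponential, the memory-variable ODE reproduces the hereditary-integral result without storing any slip-rate history.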

  13. Trace Replay and Network Simulation Tool

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-22

    TraceR is a trace replay tool built upon the ROSS-based CODES simulation framework. TraceR can be used for predicting network performance and understanding network behavior by simulating messaging in High Performance Computing applications on interconnection networks.

  14. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation

    Science.gov (United States)

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-01-01

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles using the realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, the coseismic slip distribution, the afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011. PMID:27161897

  15. A Coupled Earthquake-Tsunami Simulation Framework Applied to the Sumatra 2004 Event

    Science.gov (United States)

    Vater, Stefan; Bader, Michael; Behrens, Jörn; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Wollherr, Stephanie; van Zelst, Iris

    2017-04-01

    Large earthquakes along subduction zone interfaces have generated destructive tsunamis near Chile in 1960, Sumatra in 2004, and northeast Japan in 2011. In order to better understand these extreme events, we have developed tools for physics-based, coupled earthquake-tsunami simulations. This simulation framework is applied to the 2004 Indian Ocean M 9.1-9.3 earthquake and tsunami, a devastating event that resulted in the loss of more than 230,000 lives. The earthquake rupture simulation is performed using an ADER discontinuous Galerkin discretization on an unstructured tetrahedral mesh with the software SeisSol. Advantages of this approach include accurate representation of complex fault and sea floor geometries and a parallelized and efficient workflow in high-performance computing environments. Accurate and efficient representation of the tsunami evolution and inundation at the coast is achieved with an adaptive mesh discretizing the shallow water equations with a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme. With the application of the framework to this historic event, we aim to better understand the interacting mechanisms of the dynamic earthquake rupture within the Earth's crust, the resulting tsunami wave within the ocean, and the final coastal inundation process. Earthquake model results are constrained by GPS surface displacements, and tsunami model results are compared with buoy and inundation data. This research is part of the ASCETE Project, "Advanced Simulation of Coupled Earthquake and Tsunami Events", funded by the Volkswagen Foundation.

  16. A numerical simulation strategy on occupant evacuation behaviors and casualty prediction in a building during earthquakes

    Science.gov (United States)

    Li, Shuang; Yu, Xiaohui; Zhang, Yanjuan; Zhai, Changhai

    2018-01-01

    Casualty prediction in a building during earthquakes supports economic loss estimation in the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties during earthquakes, few current studies consider occupant movements in the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The simulation of occupant evacuation is verified against a recorded evacuation process from a school classroom in the real-life 2013 Ya'an earthquake in China. The occupant casualties in the building under earthquakes are evaluated by coupling the building collapse process simulated by the finite element method, the occupant evacuation simulation, and the casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake is provided to demonstrate the effect of occupant movements on casualty prediction.
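
    A minimal cellular automaton of the kind the record describes can be sketched on a coarse grid: one occupant per cell, and at each synchronous step every occupant greedily moves one cell closer to the exit if a free neighboring cell exists. This toy omits the study's refinements (building dimensions, collapse coupling, casualty criteria); the grid and occupant layout are invented for illustration:

```python
def step(positions, exit_cell, width, height):
    """One synchronous update: occupants (nearest to the exit first) move to
    an adjacent free cell that strictly reduces Manhattan distance to the
    exit; an occupant reaching the exit cell is evacuated (removed)."""
    new_positions = set()
    order = sorted(positions,
                   key=lambda p: (abs(p[0]-exit_cell[0]) + abs(p[1]-exit_cell[1]), p))
    for (r, c) in order:
        best, best_d = (r, c), abs(r-exit_cell[0]) + abs(c-exit_cell[1])
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (r + dr, c + dc)
            if not (0 <= cand[0] < height and 0 <= cand[1] < width):
                continue
            # a target cell must be free, unless it is the exit itself
            occupied = cand in new_positions or cand in positions
            if cand != exit_cell and occupied:
                continue
            d = abs(cand[0]-exit_cell[0]) + abs(cand[1]-exit_cell[1])
            if d < best_d:
                best, best_d = cand, d
        if best != exit_cell:          # reaching the exit -> leaves the grid
            new_positions.add(best)
    return new_positions

def evacuate(width, height, occupants, exit_cell, max_steps=200):
    """Return the number of steps until all occupants have left, or None."""
    positions = set(occupants)
    for steps in range(1, max_steps + 1):
        positions = step(positions, exit_cell, width, height)
        if not positions:
            return steps
    return None

result = evacuate(5, 5, [(0, 2), (2, 0), (2, 2)], (0, 0))
```

    A casualty model of the paper's kind would additionally remove or injure occupants whose cells fail during a simulated collapse, synchronized in time with the evacuation steps.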

  17. Survivability Improvement Against Earthquakes in Backbone Optical Networks Using Actual Seismic Zone Information

    OpenAIRE

    Agrawal, Anuj; Sharma, Purva; Bhatia, Vimal; Prakash, Shashi

    2017-01-01

    Optical backbone networks carry huge amounts of bandwidth and serve as a key enabling technology providing telecommunication connectivity across the world. Hence, in the event of network component (node/link) failures, communication networks may suffer huge bandwidth losses and service disruptions. Natural disasters such as earthquakes, hurricanes, tornadoes, etc., occur at different places around the world, causing severe communication service disruptions due to network component...

  18. Influence of fault heterogeneity on the frequency-magnitude statistics of earthquake cycle simulations

    Science.gov (United States)

    Norbeck, Jack; Horne, Roland

    2017-04-01

    Numerical models are useful tools for investigating how natural geologic conditions can affect seismicity, but it can often be difficult to generate realistic earthquake sequences using physics-based earthquake rupture models. Rate-and-state earthquake cycle simulations on planar faults with homogeneous frictional properties and stress conditions typically yield characteristic-event sequences with a single earthquake magnitude characteristic of the size of the fault. In reality, earthquake sequences have been observed to follow a Gutenberg-Richter-type frequency-magnitude distribution that can be characterized by a power-law scaling relationship. The purpose of this study was to determine how fault heterogeneity can affect the frequency-magnitude distribution of simulated earthquake events. We considered the effects of fault heterogeneity at two different length scales by performing numerical earthquake rupture simulations within a rate-and-state friction framework. In our first study, we investigated how heterogeneous, fractal distributions of shear and normal stress resolved along a two-dimensional fault surface influenced the earthquake nucleation, rupture, and arrest processes. We generated a catalog of earthquake events by performing earthquake cycle simulations for 90 random realizations of fractal stress distributions. Typical realizations produced between 4 and 6 individual earthquakes, ranging in event magnitude between those characteristic of the minimum patch size for nucleation and the size of the model fault. The resulting aggregate frequency-magnitude distributions were characterized well by power-law scaling behavior. In our second study, we performed simulations of injection-induced seismicity using a coupled fluid flow and rate-and-state earthquake model. Fluid flow in a two-dimensional reservoir was modeled, and the fault mechanics was modeled under a plane strain assumption (i.e., one-dimensional faults).
We generated a set of faults with an average strike of
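
    The Gutenberg-Richter behavior discussed in this record is conventionally summarized by the b-value of the frequency-magnitude distribution. As a self-contained illustration on synthetic magnitudes (not the study's simulated catalog), one can sample from a G-R distribution and recover b with Aki's maximum-likelihood estimator:

```python
import math
import random

def sample_gr_magnitudes(n, b, m_min, rng):
    """Gutenberg-Richter catalog: N(>= m) ~ 10**(-b*m), i.e. m - m_min is
    exponentially distributed with rate b*ln(10)."""
    beta = b * math.log(10.0)
    return [m_min + rng.expovariate(beta) for _ in range(n)]

def b_value_mle(mags, m_min):
    """Aki (1965) maximum-likelihood b-value estimate."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

rng = random.Random(42)
catalog = sample_gr_magnitudes(20000, b=1.0, m_min=2.0, rng=rng)
b_hat = b_value_mle(catalog, m_min=2.0)   # should recover b close to 1.0
```

    Applying such an estimator to the aggregate catalog from the 90 fractal-stress realizations is how a power-law fit of the kind reported above would typically be quantified.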

  19. Recognition of underground nuclear explosion and natural earthquake based on neural network

    International Nuclear Information System (INIS)

    Yang Hong; Jia Weimin

    2000-01-01

    Many features are extracted to improve the identification rate and reliability of discrimination between underground nuclear explosions and natural earthquakes. How to combine these features, however, is the key problem in pattern recognition. Based on the improved Delta algorithm, features of underground nuclear explosions and natural earthquakes are input into a BP neural network, and membership functions are constructed to interpret the output values. The identification rate reaches 92.0%, which shows that the approach is feasible

  20. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2012-01-01

    "Introduction to Network Simulator NS2" is a primer providing materials for NS2 beginners, whether students, professors, or researchers for understanding the architecture of Network Simulator 2 (NS2) and for incorporating simulation modules into NS2. The authors discuss the simulation architecture and the key components of NS2 including simulation-related objects, network objects, packet-related objects, and helper objects. The NS2 modules included within are nodes, links, SimpleLink objects, packets, agents, and applications. Further, the book covers three helper modules: timers, ra

  1. Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.

    2017-12-01

    SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high-resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved megathrust and a system of splay faults, as well as the seismic wave field and seafloor displacement with frequency content up to 2.2 Hz. We validate the scenario against geodetic, seismological and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.

  2. Earthquake disaster simulation of civil infrastructures from tall buildings to urban areas

    CERN Document Server

    Lu, Xinzheng

    2017-01-01

    Based on more than 12 years of systematic investigation of earthquake disaster simulation of civil infrastructures, this book covers the major research outcomes, including a number of novel computational models, high performance computing methods and realistic visualization techniques for tall buildings and urban areas, with particular emphasis on collapse prevention and mitigation in extreme earthquakes, earthquake loss evaluation and seismic resilience. Typical engineering applications to several of the world's tallest buildings (e.g., the 632 m tall Shanghai Tower and the 528 m tall Z15 Tower) and selected large cities in China (the Beijing Central Business District, Xi'an City, Taiyuan City and Tangshan City) are also introduced to demonstrate the advantages of the proposed computational models and techniques. The high-fidelity computational model developed in this book has proven to be the only feasible option to date for earthquake-induced collapse simulation of supertall buildings that are higher than 50...

  3. Two-dimensional fully dynamic SEM simulations of the 2011 Tohoku earthquake cycle

    Science.gov (United States)

    Shimizu, H.; Hirahara, K.

    2014-12-01

    Earthquake cycle simulations have been performed to successfully reproduce historical earthquake occurrences. Most of them are quasi-dynamic, where inertial effects are approximated using the radiation damping proposed by Rice [1993]. Lapusta et al. [2000, 2009] developed a methodology capable of the detailed description of seismic and aseismic slip and the gradual process of earthquake nucleation over the entire earthquake cycle. Their fully dynamic simulations have produced earthquake cycles considerably different from quasi-dynamic ones. Such simulations have, however, never been performed for interplate earthquakes at subduction zones. Many studies have shown that on dipping faults, such as those hosting interplate earthquakes at subduction zones, normal stress changes during faulting due to the interaction with the Earth's free surface. This change in normal stress not only affects the earthquake rupture process, but also causes residual stress variations that might affect the long-term history of earthquake cycles. Accounting for such effects, we perform two-dimensional simulations of the 2011 Tohoku earthquake cycle. Our model is in-plane, and a laboratory-derived rate and state friction law acts on a dipping fault embedded in an elastic half-space that reaches the free surface. We extended the spectral element method (SEM) code [Ampuero, 2002] to incorporate a conforming mesh of triangles and quadrangles introduced in Komatitsch et al. [2001], which enables us to analyze complex geometries with ease. The problem is solved by almost the same methodology as Kaneko et al. [2011], a combined scheme that switches in turn between a fully dynamic SEM and a quasi-static SEM. The difference is that our study treats a dip-slip thrust fault in contrast to their vertical strike-slip fault. With this method, we can analyze how the dynamic rupture with surface breakout, interacting with the free surface, affects the long-term earthquake cycle.
We discuss the fully dynamic earthquake cycle results

  4. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  5. Program Helps Simulate Neural Networks

    Science.gov (United States)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  6. A simulation of earthquake induced undrained pore pressure ...

    Indian Academy of Sciences (India)

    The Bhuj earthquake of January 26th, 2001, induced widespread liquefaction within the Kachch peninsula. It has been pointed out that inundation due to soil liquefaction was shorter lived in some parts of the affected region than in others. Several geological, seismological and hydrological factors would have cumulatively ...

  7. Earthquake and nuclear explosion location using the global seismic network

    International Nuclear Information System (INIS)

    Lopez, L.M.

    1983-01-01

    The relocation of nuclear explosions, aftershock sequences and regional seismicity is addressed by using joint hypocenter determination, Lomnitz' distance domain location, and origin time and earthquake depth determination with local observations. Distance domain and joint hypocenter location are used for a stepwise relocation of nuclear explosions in the USSR. The resulting origin times are 2.5 seconds earlier than those obtained by the ISC. Local travel times from the relocated explosions are compared to the Jeffreys-Bullen tables. P times are found to be faster at 9-30° distances, the largest deviation being around 10 seconds at 13-18°. At these distances S travel times also are faster, by approximately 20 seconds. The 1977 Sumba earthquake sequence is relocated by iterative joint hypocenter determination of the events with the most station reports. Simultaneously determined station corrections are utilized for the relocation of smaller aftershocks. The relocated hypocenters indicate that the aftershocks were initially concentrated along the deep trench. Origin times and depths are recalculated for intermediate-depth and deep earthquakes using local observations in and around the Japanese Islands. It is found that origin time and depth differ systematically from ISC values for intermediate-depth events. Origin times obtained for events below the crust down to 100 km depth are earlier, whereas no general bias seems to exist for origin times of events in the 100-400 km depth range. The recalculated depths for earthquakes shallower than 100 km are shallower than the ISC depths. The depth estimates for earthquakes deeper than 100 km were increased by the recalculations

  9. Finite element simulation of earthquake cycle dynamics for continental listric fault system

    Science.gov (United States)

    Wei, T.; Shen, Z. K.

    2017-12-01

    We simulate stress/strain evolution through earthquake cycles for a continental listric fault system using the finite element method. A 2-D lithosphere model is developed, with the upper crust composed of plasto-elastic materials and the lower crust/upper mantle composed of visco-elastic materials. The medium is sliced by a listric fault, which soles into the visco-elastic lower crust at its downdip end. The system is driven laterally by constant tectonic loading. Slip on the fault is controlled by rate-state friction. We start with a simple static/dynamic friction law and drive the system through multiple earthquake cycles. Our preliminary results show that: (a) the periodicity of the earthquake cycles is strongly modulated by the static/dynamic friction, with longer periods correlated with higher static friction and lower dynamic friction; (b) the periodicity of earthquakes is a function of fault depth, with less frequent events of greater magnitudes occurring at shallower depth; and (c) rupture on the fault cannot release all the tectonic stress in the system; residual stress accumulates in the hanging wall block at shallow depth close to the fault, and has to be released either by conjugate faulting or inelastic folding. We are in the process of exploring different rheologic structures and friction laws and examining their effects on earthquake behavior and deformation patterns. The results will be applied to specific earthquakes and fault zones such as the 2008 great Wenchuan earthquake on the Longmen Shan fault system.
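
    Finding (a), a recurrence period controlled by the static/dynamic friction contrast, already appears in the simplest stick-slip caricature: a single fault patch loaded at a constant rate with an instantaneous stress drop at failure. All values below are arbitrary and nondimensional:

```python
def recurrence_times(tau_s, tau_d, load_rate, t_end, dt=1e-3):
    """Single-patch stick-slip: stress grows at load_rate; when it reaches
    the static strength tau_s, an 'event' instantaneously drops it to the
    dynamic level tau_d. Returns the inter-event times."""
    stress, events, t = tau_d, [], 0.0
    while t < t_end:
        stress += load_rate * dt
        t += dt
        if stress >= tau_s:
            events.append(t)
            stress = tau_d
    return [b - a for a, b in zip(events, events[1:])]

gaps = recurrence_times(tau_s=1.0, tau_d=0.4, load_rate=0.1, t_end=50.0)
expected = (1.0 - 0.4) / 0.1   # stress drop / loading rate = 6.0 time units
```

    Raising tau_s or lowering tau_d widens the stress drop and therefore lengthens the recurrence interval, mirroring result (a); the finite element model adds what this caricature cannot, namely depth dependence and residual stress in the hanging wall.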

  10. What Can We Learn from a Simple Physics-Based Earthquake Simulator?

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2018-03-01

    Physics-based earthquake simulators are becoming a popular tool to investigate the earthquake occurrence process. So far, the development of earthquake simulators has commonly been led by the approach "the more physics, the better". However, this approach may hamper the comprehension of the outcomes of the simulator; in fact, within complex models, it may be difficult to understand which physical parameters are the most relevant to the features of the seismic catalog in which we are interested. For this reason, here, we take the opposite approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple simulator may be more informative than a complex one for some specific scientific objectives, because it is more understandable. Our earthquake simulator has three main components: the first is a realistic tectonic setting, i.e., a fault data set of California; the second is the application of quantitative laws for earthquake generation on each single fault; and the last is the modeling of fault interaction through the Coulomb Failure Function. The analysis of this simple simulator shows that: (1) short-term clustering can be reproduced by a set of faults with an almost periodic behavior, which interact according to a Coulomb failure function model; (2) a long-term behavior showing supercycles of the seismic activity exists only in a markedly deterministic framework, and quickly disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault; (3) faults that are strongly coupled in terms of the Coulomb failure function model are synchronized in time only in a markedly deterministic framework and, as before, such synchronization disappears when a small degree of stochasticity is introduced in the recurrence of earthquakes on a fault. Overall, the results show that even in a simple and perfectly known earthquake occurrence world, introducing a small degree of
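
    A toy version of such a simulator can be written with one stress "clock" per fault plus a Coulomb-like stress transfer at each failure; a jitter parameter adds the kind of stochasticity whose effect the record describes. This sketch is an invented caricature, not the authors' California-fault simulator:

```python
import random

def simulate(n_faults, t_end, recur, coupling, jitter, seed=1, dt=0.01):
    """Each fault carries a stress 'clock' loaded at rate 1/recur (perturbed
    multiplicatively by 'jitter'); when a clock reaches 1.0 the fault fails,
    its clock resets, and a Coulomb-like stress step 'coupling' is
    transferred to every other fault. Returns a (time, fault) catalog."""
    rng = random.Random(seed)
    clocks = [rng.random() for _ in range(n_faults)]  # random initial stress state
    catalog = []
    t = 0.0
    while t < t_end:
        t += dt
        for i in range(n_faults):
            rate = (1.0 / recur) * (1.0 + jitter * rng.gauss(0.0, 1.0))
            clocks[i] += rate * dt
        for i in range(n_faults):
            if clocks[i] >= 1.0:
                catalog.append((t, i))
                clocks[i] = 0.0
                for j in range(n_faults):
                    if j != i:
                        clocks[j] += coupling  # static stress transfer

    return catalog

# deterministic, uncoupled reference run: near-periodic recurrence per fault
catalog = simulate(n_faults=3, t_end=100.0, recur=10.0, coupling=0.0, jitter=0.0)
```

    Rerunning with coupling > 0 tends to synchronize the deterministic faults, while jitter > 0 breaks that synchronization, qualitatively echoing findings (2) and (3).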

  11. Simulation of Strong Ground Motion of the 2009 Bhutan Earthquake Using Modified Semi-Empirical Technique

    Science.gov (United States)

    Sandeep; Joshi, A.; Lal, Sohan; Kumar, Parveen; Sah, S. K.; Vandana; Kamal

    2017-12-01

    On 21st September 2009, an earthquake of magnitude M w 6.1 occurred in East Bhutan. This earthquake caused serious damage to residential areas and was widely felt in the Bhutan Himalaya and its adjoining areas. We estimated the source model of this earthquake using a modified semi-empirical technique. Several locations of the nucleation point on the rupture plane were considered, and the final location was selected based on the minimum root-mean-square error of the waveform comparison. In the present work, observed and simulated waveforms have been compared at all eight stations. Comparison of the horizontal components of the actual and simulated records at these stations confirms the estimated parameters of the final rupture model and the efficacy of the modified semi-empirical technique (Joshi et al., Nat Hazards 64:1029-1054, 2012b) for strong ground motion simulation.

  12. Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses

    Science.gov (United States)

    Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.

    2017-12-01

    To explain earthquake generation processes, simulation methods for earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law at the fault plane and the boundary integral method based on Green's function in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, the stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost of obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes the use of supercomputers, to solve the problem in a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response functions as in the previous approach. In the stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results for a normative three-dimensional problem, where a circular velocity-weakening area is set in a square fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake.
Acknowledgment: The results were obtained using the K computer at the RIKEN (Proposal number

  13. Near-surface versus fault zone damage following the 1999 Chi-Chi earthquake: Observation and simulation of repeating earthquakes

    Science.gov (United States)

    Chen, Kate Huihsuan; Furumura, Takashi; Rubinstein, Justin L.

    2015-01-01

    We observe crustal damage and its subsequent recovery caused by the 1999 M7.6 Chi-Chi earthquake in central Taiwan. Analysis of repeating earthquakes in the Hualien region, ~70 km east of the Chi-Chi earthquake, shows a remarkable change in wave propagation beginning in the year 2000, revealing damage within the fault zone and distributed across the near surface. We use moving-window cross correlation to identify a dramatic decrease in waveform similarity and delays in the S wave coda. The maximum delay is up to 59 ms, corresponding to a 7.6% velocity decrease averaged over the wave propagation path. The waveform changes on either side of the fault are distinct: they occur in different parts of the waveforms, affect different frequencies, and the sizes of the velocity reductions differ. Using a finite difference method, we simulate the effect of postseismic changes in the wavefield by introducing S wave velocity anomalies in the fault zone and near the surface. The models that best fit the observations point to pervasive damage in the near surface and deep, along-fault damage at the time of the Chi-Chi earthquake. The footwall stations show the combined effect of the near-surface and fault zone damage, where the velocity reduction (2–7%) is twofold to threefold greater than the fault zone damage observed at the hanging wall stations. The physical models obtained here allow us to monitor the temporal evolution and recovery process of the Chi-Chi fault zone damage.

  14. Splitting Strategy for Simulating Genetic Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Xiong You

    2014-01-01

    The splitting approach is developed for the numerical simulation of genetic regulatory networks with a stable steady-state structure. The numerical results of the simulation of a one-gene network, a two-gene network, and a p53-mdm2 network show that the new splitting methods constructed in this paper are remarkably more effective and more suitable for long-term computation with large steps than traditional general-purpose Runge-Kutta methods. The new methods have no restriction on the choice of stepsize due to their infinitely large stability regions.
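
    The flavor of a splitting scheme for such networks can be shown on a one-gene autorepression model, dx/dt = k/(1 + x^n) - d*x: the linear decay is integrated exactly (hence no stepsize restriction from that term), while the bounded production term is stepped explicitly. The model, parameters, and the Euler inner step are illustrative choices, not the methods constructed in the paper:

```python
import math

def f_production(x, k=2.0, n=2):
    # Hill-type repressed production: decreases as x accumulates
    return k / (1.0 + x**n)

def strang_step(x, dt, d=1.0):
    """Strang-style splitting: exact linear decay for dt/2, an explicit
    Euler step on the nonlinear production term for dt, decay for dt/2."""
    x = x * math.exp(-d * dt / 2.0)
    x = x + dt * f_production(x)
    x = x * math.exp(-d * dt / 2.0)
    return x

def integrate(x0, t_end, dt):
    x = x0
    for _ in range(int(round(t_end / dt))):
        x = strang_step(x, dt)
    return x

# steady state of dx/dt = 2/(1+x^2) - x is x = 1
x_split = integrate(0.0, t_end=10.0, dt=0.2)    # large steps
x_ref = integrate(0.0, t_end=10.0, dt=0.001)    # fine-step reference
```

    Both runs settle near the stable steady state; treating the decay exactly keeps the large-step run stable, which is the property the abstract's "infinitely large stability regions" refers to.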

  15. Connection with seismic networks and construction of real time earthquake monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Heon Cheol; Lee, H. I.; Shin, I. C.; Lim, I. S.; Park, J. H.; Lee, B. K.; Whee, K. H.; Cho, C. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2000-12-15

    It is natural to use the nuclear power plant seismic network, which has been operated by KEPRI (Korea Electric Power Research Institute), together with the local seismic network operated by KIGAM (Korea Institute of Geology, Mining and Materials). The real-time earthquake monitoring system is composed of a monitoring module and a database module. The database module stores and classifies seismic data, while the monitoring module displays the status of acceleration in the nuclear power plant area. This research had two targets: first, networking the KIN's seismic monitoring system with the KIGAM and KEPRI seismic networks, and second, constructing the KIN's independent earthquake monitoring system.

  16. Building Capacity for Earthquake Monitoring: Linking Regional Networks with the Global Community

    Science.gov (United States)

    Willemann, R. J.; Lerner-Lam, A.

    2006-12-01

    Installing or upgrading a seismic monitoring network is often among the mitigation efforts after earthquake disasters, and this is happening in response to the events both in Sumatra during December 2004 and in Pakistan during October 2005. These networks can yield improved hazard assessment, more resilient buildings where they are most needed, and emergency relief directed more quickly to the worst hit areas after the next large earthquake. Several commercial organizations are well prepared for the fleeting opportunity to provide the instruments that comprise a seismic network, including sensors, data loggers, telemetry stations, and the computers and software required for the network center. But seismic monitoring requires more than hardware and software, no matter how advanced. A well-trained staff is required to select appropriate and mutually compatible components, install and maintain telemetered stations, manage and archive data, and perform the analyses that actually yield the intended benefits. Monitoring is more effective when network operators cooperate with a larger community through free and open exchange of data, sharing information about working practices, and international collaboration in research. As an academic consortium, a facility operator and a founding member of the International Federation of Digital Seismographic Networks, IRIS has access to a broad range of expertise with the skills that are required to help design, install, and operate a seismic network and earthquake analysis center, and stimulate the core training for the professional teams required to establish and maintain these facilities. 
But delivering expertise quickly when and where it is unexpectedly in demand requires advance planning and coordination in order to respond to the needs of organizations that are building a seismic network, either with tight time constraints imposed by the budget cycles of aid agencies following a disastrous earthquake, or as part of more informed

  17. Three dimensional viscoelastic simulation on dynamic evolution of stress field in North China induced by the 1966 Xingtai earthquake

    Science.gov (United States)

    Chen, Lian-Wang; Lu, Yuan-Zhong; Liu, Jie; Guo, Ruo-Mei

    2001-09-01

    Using a three-dimensional (3D) viscoelastic finite element method (FEM), we study the dynamic evolution pattern of the coseismic change in Coulomb failure stress and the postseismic, rheologically driven change, on a time scale of hundreds of years, induced by the MS=7.2 Xingtai earthquake of March 22, 1966. We then simulate the coseismic disturbance of the stress field in North China and its rate of change on a one-year scale caused by the Xingtai and Tangshan earthquakes during the 15 years from 1966 to 1980. Finally, we discuss how one strong earthquake may trigger another future strong earthquake.
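The triggering quantity tracked in studies like this one, the Coulomb failure stress change, reduces to a one-line formula; a minimal sketch follows, with a generic effective friction coefficient rather than the paper's calibrated value.

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Delta CFS = d_tau + mu' * d_sigma_n (stresses in MPa).
    d_shear: shear stress change resolved in the receiver fault's slip direction;
    d_normal: normal stress change, positive for unclamping.
    A positive Delta CFS moves the receiver fault toward failure."""
    return d_shear + mu_eff * d_normal
```

A positive result on a receiver fault is the usual criterion for static triggering; a negative one indicates a stress shadow.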

  18. Simulation and monitoring tools to protect disaster management facilities against earthquakes

    Science.gov (United States)

    Saito, Taiki

    2017-10-01

    The earthquakes that hit Kumamoto Prefecture in Japan on April 14 and 16, 2016 severely damaged over 180,000 houses, including over 8,000 that were completely destroyed and others that were partially damaged, according to the Cabinet Office's report as of November 14, 2016 [1]. Following these earthquakes, other damaging events struck Italy and New Zealand, as well as the central part of Tottori Prefecture in October, where the earthquake-induced collapse of buildings led to severe damage and casualties. The earthquakes in Kumamoto Prefecture, in fact, damaged various disaster management facilities including Uto City Hall, which significantly hindered the city's evacuation and recovery operations. One of the most crucial issues in times of disaster is securing the functions of disaster management facilities such as city halls, hospitals and fire stations. To address this issue, seismic simulations were conducted on the East and West buildings of Toyohashi City Hall using the analysis tool developed by the author, STERA_3D, with the ground motion waveform prediction data for the Nankai Trough earthquake provided by the Ministry of Land, Infrastructure, Transport and Tourism. As a result, it was found that the buildings have sufficient earthquake resistance. It turned out, however, that the West building is at risk of wall cracks or ceiling panel collapse, while in the East building people would not be able to stand through strong shaking of 7 on the seismic intensity scale, and cabinets not secured to the floors or walls would fall over. Additionally, three IT strong-motion seismometers were installed in the city hall to continuously monitor vibrations. Every five minutes, the vibration data obtained by the seismometers are sent to computers at Toyohashi University of Technology via the Internet, where the analysis tools run simulations in the cloud. 
If an earthquake strikes, it is able to use the results

  19. The ordered network structure and prediction summary for M ≥ 7 earthquakes in Xinjiang region of China

    International Nuclear Information System (INIS)

    Men, Ke-Pei; Zhao, Kai

    2014-01-01

    M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Guided by the information forecasting theory of Wen-Bo Weng and building on previous research results, we combine ordered network structure analysis with complex network technology, focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, and add new information to further optimize the network, hence constructing the 2D and 3D ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region during the past 210 years. On this basis, the 1996 Karakorum M7.1 earthquake, the 2003 M7.9 earthquake on the frontier of Russia, Mongolia, and China, and the two Yutian M7.3 earthquakes of 2008 and 2014 were predicted successfully. At the same time, a new prediction is presented: the next two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.

  20. A Benchmarking setup for Coupled Earthquake Cycle - Dynamic Rupture - Tsunami Simulations

    Science.gov (United States)

    Behrens, Joern; Bader, Michael; van Dinther, Ylona; Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Uphoff, Carsten; Vater, Stefan; Wollherr, Stephanie; van Zelst, Iris

    2017-04-01

    We developed a simulation framework for coupled physics-based earthquake rupture generation with tsunami propagation and inundation on a simplified subduction zone system for the project "Advanced Simulation of Coupled Earthquake and Tsunami Events" (ASCETE, funded by the Volkswagen Foundation). Here, we present a benchmarking setup that can be used for complex rupture models. The workflow begins with a 2D seismo-thermo-mechanical earthquake cycle model representing long term deformation along a planar, shallowly dipping subduction zone interface. Slip instabilities that approximate earthquakes arise spontaneously along the subduction zone interface in this model. The absolute stress field and material properties for a single slip event are used as initial conditions for a dynamic earthquake rupture model. The rupture simulation is performed with SeisSol, which uses an ADER discontinuous Galerkin discretization scheme with an unstructured tetrahedral mesh. The seafloor displacements resulting from this rupture are transferred to the tsunami model with a simple coastal run-up profile. An adaptive mesh discretizing the shallow water equations with a Runge-Kutta discontinuous Galerkin (RKDG) scheme subsequently allows for an accurate and efficient representation of the tsunami evolution and inundation at the coast. This workflow allows for evaluation of how the rupture behavior affects the hydrodynamic wave propagation and coastal inundation. We present coupled results for differing earthquake scenarios. Examples include megathrust-only ruptures versus ruptures with splay fault branching off the megathrust near the surface. Coupling to the tsunami simulation component is performed either dynamically (time dependent) or statically, resulting in differing tsunami wave and inundation behavior. The simplified topographical setup allows for systematic parameter studies and reproducible physical studies.

  1. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger and more complex problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  2. Earthquake Source Simulations: A Coupled Numerical Method and Large Scale Simulations

    Science.gov (United States)

    Ely, G. P.; Xin, Q.; Faerman, M.; Day, S.; Minster, B.; Kremenek, G.; Moore, R.

    2003-12-01

    We investigate a scheme for interfacing Finite-Difference (FD) and Finite-Element (FE) models in order to simulate dynamic earthquake rupture. The more powerful but slower FE method allows for (1) unusual geometries (e.g. dipping and curved faults), (2) nonlinear physics, and (3) finite displacements. These capabilities are computationally expensive and limit the useful size of the problem that can be solved. Large efficiencies are gained by employing FE only where necessary in the near-source region and coupling this with an efficient FD solution for the surrounding medium. Coupling is achieved by setting up an overlapping buffer zone between the domains modeled by the two methods. The buffer zone is handled numerically as a set of mutual offset boundary conditions. This scheme eliminates the effect of the artificial boundaries at the interface and allows energy to propagate in both directions across the boundary. In general it is necessary to interpolate variables between the meshes and time discretizations used for each model, and this can create artifacts that must be controlled. A modular approach has been used in which either of the two component codes can be substituted with another code. We have successfully demonstrated coupling for a simulation between a second-order FD rupture dynamics code and a fourth-order staggered-grid FD code. To be useful, earthquake source models must capture a large range of length and time scales, which is very computationally demanding. This requires that (for current computer technology) codes must utilize parallel processing. Additionally, if large quantities of output data are to be saved, a high performance data management system is desirable. We show results from a large scale rupture dynamics simulation designed to test these capabilities. We use second-order FD with dimensions of 400 x 800 x 800 nodes, run for 3000 time steps. Data were saved for the entire volume for three components of velocity at every time
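The buffer-zone coupling idea can be illustrated in 1D with two identical finite-difference grids (standing in for the FD and FE domains) that overlap and exchange boundary values every step; the grid sizes, overlap width, and Gaussian source below are illustrative, not the authors' setup.

```python
import math

C2 = 0.25  # (c*dt/dx)^2, CFL-stable Courant factor

def leapfrog(u, u_prev):
    """One interior update of the 1D scalar wave equation (ends held fixed)."""
    u_new = u[:]
    for i in range(1, len(u) - 1):
        u_new[i] = 2 * u[i] - u_prev[i] + C2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return u_new

# left grid covers global nodes 0..110, right grid covers 90..200 (overlap 90..110)
left = [math.exp(-0.01 * (i - 50) ** 2) for i in range(111)]  # pulse in left domain only
right = [0.0] * 111
left_prev, right_prev = left[:], right[:]                     # zero initial velocity

for _ in range(240):
    ln, rn = leapfrog(left, left_prev), leapfrog(right, right_prev)
    # "mutual offset boundary conditions": each grid's edge cells are
    # overwritten with the other grid's interior solution inside the buffer
    for k in range(5):
        ln[110 - k] = rn[20 - k]   # global node 110-k is right-grid index 20-k
        rn[k] = ln[90 + k]         # global node 90+k is left-grid index 90+k
    left_prev, left = left, ln
    right_prev, right = right, rn
```

Because each grid's artificial boundary is refreshed from the other grid's interior, the pulse crosses the interface without reflecting, which is the property the abstract claims for the FD/FE case.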

  3. Fiber-Optic Network Observations of Earthquake Wavefields

    Science.gov (United States)

    Lindsey, Nathaniel J.; Martin, Eileen R.; Dreger, Douglas S.; Freifeld, Barry; Cole, Stephen; James, Stephanie R.; Biondi, Biondo L.; Ajo-Franklin, Jonathan B.

    2017-12-01

    Our understanding of subsurface processes suffers from a profound observation bias: seismometers are sparse and clustered on continents. A new seismic recording approach, distributed acoustic sensing (DAS), transforms telecommunication fiber-optic cables into sensor arrays enabling meter-scale recording over tens of kilometers of linear fiber length. We analyze cataloged earthquake observations from three DAS arrays with different horizontal geometries to demonstrate some possibilities of this technology. In Fairbanks, Alaska, we find that stacking ground motion records along 20 m of fiber yields a waveform that shows a high degree of correlation in amplitude and phase with a colocated inertial seismometer record at 0.8-1.6 Hz. Using an L-shaped DAS array in Northern California, we record the nearly vertically incident arrival of an earthquake from The Geysers Geothermal Field and estimate its backazimuth and slowness via beamforming for different phases of the seismic wavefield. Lastly, we install a fiber in existing telecommunications conduits below Stanford University and show that little cable-to-soil coupling is required for teleseismic P and S phase arrival detection.
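The Fairbanks result, stacking many meter-scale DAS channels to approach a seismometer-quality waveform, rests on the fact that averaging channels sharing a common signal suppresses incoherent noise by roughly the square root of the channel count. A toy sketch with synthetic data (the channel count, noise level, and waveform are made up):

```python
import math
import random

random.seed(7)

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

N_SAMP, N_CH = 500, 20
signal = [math.sin(2 * math.pi * 0.01 * i) for i in range(N_SAMP)]  # common ground motion
channels = [[s + random.gauss(0.0, 1.0) for s in signal]            # noisy DAS traces
            for _ in range(N_CH)]
stack = [sum(ch[i] for ch in channels) / N_CH for i in range(N_SAMP)]  # channel stack
```

The stacked trace correlates far better with the underlying signal than any single noisy channel, mirroring the comparison against the colocated seismometer described above.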

  4. Earthquake sequence simulations with measured properties for JFAST core samples

    Science.gov (United States)

    Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro

    2017-08-01

    Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. Experimental work has also suggested mechanical properties for the fault. In the present study, numerical models of earthquake sequences are presented that account for these experimental outcomes and are consistent with observations of both long-term and coseismic fault behaviour as well as with thermal measurements. Among the constraints, a previous friction study of samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: the a and a-b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between laboratory experiments and natural faults remain an important problem, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'.
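The generalization described here, making steady-state friction quadratic rather than linear in the logarithmic slip rate so that the effective a-b value varies with rate, can be sketched as follows; the coefficients are hypothetical, chosen only to illustrate a sign change in rate dependence, and are not the fitted JFAST values.

```python
import math

def mu_steady(v, mu0=0.6, v0=1e-6, c1=-0.004, c2=0.0015):
    """Steady-state friction coefficient, quadratic in x = ln(v/v0):
    mu_ss = mu0 + c1*x + c2*x**2 (hypothetical coefficients)."""
    x = math.log(v / v0)
    return mu0 + c1 * x + c2 * x * x

def rate_dependence(v, dlnv=1e-3):
    """Effective a-b value, d(mu_ss)/d(ln v), estimated numerically."""
    return (mu_steady(v * math.exp(dlnv)) - mu_steady(v)) / dlnv
```

With these illustrative coefficients the fault is velocity weakening near the reference rate and velocity strengthening at much higher rates; a standard rate-and-state law (c2 = 0) cannot reproduce such a rate-dependent a-b.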

  5. Buffer Management Simulation in ATM Networks

    Science.gov (United States)

    Yaprak, E.; Xiao, Y.; Chronopoulos, A.; Chow, E.; Anneberg, L.

    1998-01-01

    This paper presents a simulation of a new dynamic buffer allocation management scheme in ATM networks. To achieve this objective, an algorithm that detects congestion and updates the dynamic buffer allocation scheme was developed for the OPNET simulation package via the creation of a new ATM module.
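The flavor of such a dynamic buffer allocation scheme can be sketched with a classic dynamic-threshold rule, under which each queue may claim at most a multiple of the currently free shared-buffer space, so per-queue thresholds tighten automatically as the buffer congests. This is a hedged illustration with hypothetical parameters, not the paper's OPNET algorithm.

```python
def admit(queues, port, total=1000, alpha=2.0):
    """Admit one cell to `port` iff its queue length is below the dynamic
    threshold alpha * (free cells); return True when the cell is accepted."""
    free = total - sum(queues.values())
    if free > 0 and queues[port] < alpha * free:
        queues[port] += 1
        return True
    return False

# demo: one port floods the switch while another stays idle
queues = {"p0": 0, "p1": 0}
while admit(queues, "p0"):
    pass
```

The flooding port saturates at q = alpha*(total - q), i.e. two thirds of the buffer here, so a newly active port still finds room; a static per-port threshold would have to choose between this protection and full utilization.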

  6. Tsunami simulations of the 1867 Virgin Island earthquake: Constraints on epicenter location and fault parameters

    Science.gov (United States)

    Barkan, Roy; ten Brink, Uri S.

    2010-01-01

    The 18 November 1867 Virgin Island earthquake and the tsunami that closely followed caused considerable loss of life and damage in several places in the northeast Caribbean region. The earthquake was likely a manifestation of the complex tectonic deformation of the Anegada Passage, which cuts across the Antilles island arc between the Virgin Islands and the Lesser Antilles. In this article, we attempt to characterize the 1867 earthquake with respect to fault orientation, rake, dip, fault dimensions, and first tsunami wave propagating phase, using tsunami simulations that employ high-resolution multibeam bathymetry. In addition, we present new geophysical and geological observations from the region of the suggested earthquake source. Results of our tsunami simulations based on relative amplitude comparison limit the earthquake source to be along the northern wall of the Virgin Islands basin, as suggested by Reid and Taber (1920), or on the carbonate platform north of the basin, and not in the Virgin Islands basin, as commonly assumed. The numerical simulations suggest the 1867 fault was striking 120°–135° and had a mixed normal and left-lateral motion. First propagating wave phase analysis suggests a fault striking 300°–315° is also possible. The best-fitting rupture length was found to be relatively small (50 km), probably indicating the earthquake had a moment magnitude of ∼7.2. Detailed multibeam echo sounder surveys of the Anegada Passage bathymetry between St. Croix and St. Thomas reveal a scarp, which cuts the northern wall of the Virgin Islands basin. High-resolution seismic profiles further indicate it to be a reasonable fault candidate. However, the fault orientation and the orientation of other subparallel faults in the area are more compatible with right-lateral motion. For the other possible source region, no clear disruption in the bathymetry or seismic profiles was found on the carbonate platform north of the basin.

  7. Stochastic strong ground motion simulations for the intermediate-depth earthquakes of the south Aegean subduction zone

    Science.gov (United States)

    Kkallas, Harris; Papazachos, Konstantinos; Boore, David; Margaris, Vasilis

    2015-04-01

    We have employed the stochastic finite-fault modelling approach of Motazedian and Atkinson (2005), as described by Boore (2009), for the simulation of Fourier spectra of the intermediate-depth earthquakes of the south Aegean subduction zone. The stochastic finite-fault method is a practical tool for simulating ground motions of future earthquakes, requiring region-specific source, path and site characterizations as input model parameters. For this reason we have used data from both acceleration-sensor and broadband velocity-sensor instruments from intermediate-depth earthquakes with magnitudes of M 4.5-6.7 that occurred in the south Aegean subduction zone. Source mechanisms for intermediate-depth events of the north Aegean subduction zone are either collected from published information or are constrained using the main faulting types from Kkallas et al. (2013). The attenuation parameters for the simulations were adopted from Skarladoudis et al. (2013) and are based on regression analysis of a response spectra database. The site amplification functions for each soil class were adopted from Klimis et al. (1999), while the kappa values were constrained from the analysis of the EGELADOS network data from Ventouzi et al. (2013). The investigation of stress-drop values was based on simulations performed with the EXSIM code for several ranges of stress-drop values, comparing the results with the available Fourier spectra of intermediate-depth earthquakes. Significant differences regarding the strong-motion duration, which is determined from Husid plots (Husid, 1969), have been identified between the fore-arc and along-arc stations due to the effect of the low-velocity/low-Q mantle wedge on the seismic wave propagation. In order to estimate appropriate values for the duration of P waves, we have automatically picked P-S durations on the available seismograms. 
For the S-wave durations we have used the part of the seismograms starting from the S-arrivals and ending at the
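The spectral backbone of stochastic simulations like these is a point-source omega-squared spectrum shaped by anelastic path attenuation, geometric spreading, and kappa site diminution; a schematic version follows (all constants are illustrative placeholders, not the calibrated south Aegean values).

```python
import math

def fourier_amplitude(f, m0=1e18, fc=0.5, r=100.0, q0=200.0, beta=3.5, kappa=0.04):
    """Schematic Fourier acceleration-spectrum shape at frequency f (Hz):
    omega-squared source x anelastic attenuation x 1/r spreading x kappa filter.
    m0: seismic moment proxy; fc: corner frequency; r: distance (km);
    q0: quality factor; beta: shear velocity (km/s); kappa: site diminution (s)."""
    source = m0 * (2.0 * math.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
    path = math.exp(-math.pi * f * r / (q0 * beta)) / r
    site = math.exp(-math.pi * kappa * f)
    return source * path * site
```

Below the corner frequency the spectrum grows as f squared, while at high frequencies the Q and kappa exponentials dominate; fitting those decay parameters against observed Fourier spectra is essentially the calibration exercise the abstract describes.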

  8. Network simulations of optical illusions

    Science.gov (United States)

    Shinbrot, Troy; Lazo, Miguel Vivar; Siu, Theo

    We examine a dynamical network model of visual processing that reproduces several aspects of a well-known optical illusion, including subtle dependencies on curvature and scale. The model uses a genetic algorithm to construct the percept of an image, and we show that this percept evolves dynamically so as to produce the illusions reported. We find that the perceived illusions are hardwired into the model architecture and we propose that this approach may serve as an archetype to distinguish behaviors that are due to nature (i.e. a fixed network architecture) from those subject to nurture (that can be plastically altered through learning).

  9. Hybrid Broadband Ground-Motion Simulation Using Scenario Earthquakes for the Istanbul Area

    KAUST Repository

    Reshi, Owais A.

    2016-04-13

    Seismic design, analysis and retrofitting of structures demand an intensive assessment of potential ground motions in seismically active regions. Peak ground motions and the frequency content of seismic excitations strongly influence the behavior of structures. In regions of sparse ground motion records, ground-motion simulations provide synthetic seismic records, which not only provide insight into the mechanisms of earthquakes but also help in improving some aspects of earthquake engineering. Broadband ground-motion simulation methods typically utilize physics-based modeling of source and path effects at low frequencies coupled with high-frequency semi-stochastic methods. I apply the hybrid simulation method of Mai et al. (2010) to model several scenario earthquakes in the Marmara Sea, an area of high seismic hazard. Simulated ground motions were generated at 75 stations using systematically calibrated model parameters. The region-specific source, path and site model parameters were calibrated by simulating an Mw 4.1 Marmara Sea earthquake that occurred on November 16, 2015 on the fault segment in the vicinity of Istanbul. The calibrated parameters were then used to simulate scenario earthquakes with magnitudes Mw 6.0, 6.25, 6.5 and 6.75 on the Marmara Sea fault. The effects of fault geometry, hypocenter location, slip distribution and rupture propagation were thoroughly studied to understand the variability in ground motions. A rigorous analysis of the waveforms reveals that these parameters are critical for determining the behavior of ground motions, especially in the near field. Comparison of simulated ground motion intensities with ground-motion prediction equations indicates the need to develop a region-specific ground-motion prediction equation for the Istanbul area. Peak ground motion maps are presented to illustrate the shaking in the Istanbul area due to the scenario earthquakes. 
The southern part of Istanbul including Princes Islands show high amplitudes

  10. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    Science.gov (United States)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As research in quantum key distribution (QKD) technology grows larger and becomes more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. Due to the specificity of a QKD link, which requires optical and Internet connections between the network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a certain network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of a QKD network simulation module developed in the network simulator version 3 (NS-3). The module supports simulation of a QKD network in an overlay mode or in a single TCP/IP mode, and can therefore also be used to simulate other network technologies independently of QKD.

  11. The Airport Network Flow Simulator.

    Science.gov (United States)

    1976-05-01

    The impact of investment at an individual airport is felt throughout the National Airport System through the reduction of delays at other airports in the system. A GPSS model was constructed to simulate the propagation of delays through a nine-airport sy...

  12. Effects of earthquake rupture shallowness and local soil conditions on simulated ground motions

    International Nuclear Information System (INIS)

    Apsel, Randy J.; Hadley, David M.; Hart, Robert S.

    1983-03-01

    The paucity of strong ground motion data in the Eastern U.S. (EUS), combined with well recognized differences in earthquake source depths and wave propagation characteristics between Eastern and Western U.S. (WUS) suggests that simulation studies will play a key role in assessing earthquake hazard in the East. This report summarizes an extensive simulation study of 5460 components of ground motion representing a model parameter study for magnitude, distance, source orientation, source depth and near-surface site conditions for a generic EUS crustal model. The simulation methodology represents a hybrid approach to modeling strong ground motion. Wave propagation is modeled with an efficient frequency-wavenumber integration algorithm. The source time function used for each grid element of a modeled fault is empirical, scaled from near-field accelerograms. This study finds that each model parameter has a significant influence on both the shape and amplitude of the simulated response spectra. The combined effect of all parameters predicts a dispersion of response spectral values that is consistent with strong ground motion observations. This study provides guidelines for scaling WUS data from shallow earthquakes to the source depth conditions more typical in the EUS. The modeled site conditions range from very soft soil to hard rock. To the extent that these general site conditions model a specific site, the simulated response spectral information can be used to either correct spectra to a site-specific environment or used to compare expected ground motions at different sites. (author)

  13. Methodology for earthquake rupture rate estimates of fault networks: example for the western Corinth rift, Greece

    Science.gov (United States)

    Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien

    2017-10-01

    Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many places of the world, seismological and geodetic information along fault networks is often not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single-fault or FtF ruptures are treated as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~ 15 mm yr-1 north
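The distance criterion for defining FtF ruptures can be sketched directly: treat faults whose traces come within a jump distance of each other as connectable, then enumerate all single faults plus all connected multi-fault combinations. This is a hedged toy version (fault geometries as point lists, a fixed jump distance), not the paper's implementation.

```python
import itertools
import math

def min_distance(fault_a, fault_b):
    """Minimum distance between two faults given as lists of (x, y) points (km)."""
    return min(math.dist(p, q) for p in fault_a for q in fault_b)

def ftf_ruptures(faults, dmax=5.0):
    """All single-fault ruptures plus every multi-fault combination whose
    members chain together through jumps no longer than dmax."""
    names = sorted(faults)
    near = {(a, b): min_distance(faults[a], faults[b]) <= dmax
            for a in names for b in names if a < b}

    def connected(subset):
        # breadth-first search over the "near" graph restricted to subset
        todo, seen = [subset[0]], {subset[0]}
        while todo:
            a = todo.pop()
            for b in subset:
                if b not in seen and near[tuple(sorted((a, b)))]:
                    seen.add(b)
                    todo.append(b)
        return len(seen) == len(subset)

    return [combo
            for r in range(1, len(names) + 1)
            for combo in itertools.combinations(names, r)
            if r == 1 or connected(list(combo))]
```

Each returned combination is one candidate rupture in the aleatory set whose rates the inversion then distributes under the MFD-shape and slip-rate constraints.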

  14. Underwater Electromagnetic Sensor Networks, Part II: Localization and Network Simulations

    Directory of Open Access Journals (Sweden)

    Javier Zazo

    2016-12-01

    Full Text Available In the first part of the paper, we modeled and characterized the underwater radio channel in shallow waters. In the second part, we analyze the application requirements for an underwater wireless sensor network (U-WSN) operating in the same environment and perform detailed simulations. We consider two localization applications, namely self-localization and navigation aid, and propose algorithms that work well under the specific constraints associated with U-WSNs, namely low connectivity, low data rates and high packet loss probability. We propose an algorithm where the sensor nodes collaboratively estimate their unknown positions in the network using a low number of anchor nodes and distance measurements from the underwater channel. Once the network has been self-located, we consider a node estimating its position for underwater navigation by communicating with neighboring nodes. We also propose a communication system and simulate the whole electromagnetic U-WSN in the Castalia simulator to evaluate the network performance, including propagation impairments (e.g., noise, interference), radio parameters (e.g., modulation scheme, bandwidth, transmit power), hardware limitations (e.g., clock drift, transmission buffer) and complete MAC and routing protocols. We also explain the changes that have to be made to Castalia in order to perform the simulations. In addition, we propose a parametric model of the communication channel that matches well with the results from the first part of this paper. Finally, we provide simulation results for some illustrative scenarios.
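For the self-localization application, the basic building block is estimating a position from distances to anchor nodes; a minimal 2-D trilateration sketch follows, linearizing the range equations by subtracting the first from the others and solving the resulting 2x2 system (the anchor layout in the test is made up, and the paper's collaborative algorithm is considerably more involved).

```python
def trilaterate(anchors, dists):
    """2-D position from distances to three anchors.
    Subtracting the first range equation from the others gives two linear
    equations 2*(xi-x1)*x + 2*(yi-y1)*y = r1^2 - ri^2 + xi^2 - x1^2 + yi^2 - y1^2,
    solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # zero iff the anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With noisy underwater range measurements one would solve the same linear system in a least-squares sense over more than three anchors, which is where the low-connectivity constraint described above starts to bite.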

  15. The 2008 West Bohemia earthquake swarm in the light of the WEBNET network

    Czech Academy of Sciences Publication Activity Database

    Fischer, T.; Horálek, Josef; Michálek, Jan; Boušková, Alena

    2010-01-01

    Vol. 14, No. 4 (2010), pp. 665-682 ISSN 1383-4649 Grant - others: GA MŠk(CZ) specifický-výzkum; Norway Grants (NO) A/CZ0046/2/0015 Institutional research plan: CEZ:AV0Z30120515 Keywords: earthquake swarm * seismic network * seismicity Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.274, year: 2010

  16. Numerical simulation of the 1976 Ms7.8 Tangshan Earthquake

    Science.gov (United States)

    Li, Zhengbo; Chen, Xiaofei

    2017-04-01

    The Ms 7.8 earthquake that struck Tangshan in 1976 caused more than 240,000 deaths and destroyed almost the whole city. Numerous studies indicate that the surface rupture zone extends 8 to 11 km south of Tangshan City. The fault system is composed of more than ten NE-trending, right-lateral strike-slip, left-stepping echelon faults, with a general strike direction of N30°E. However, recent work proposes that the surface ruptures appeared over a larger area. To simulate a rupture process closer to the real situation, the curvilinear-grid finite-difference method presented by Zhang et al. (2006, 2014), which can handle the free surface and complex geometry, was implemented to investigate the dynamic rupture and ground motion of the Tangshan earthquake. Using data from field surveys, seismic sections, boreholes and trenching reported in different studies, several fault geometry models were established. The intensity, seismic waveforms and displacement resulting from the simulation of each model were compared with the observed data. The comparison of these models reveals details of the rupture process of the Tangshan earthquake and implies that super-shear rupture may have occurred, which is important for a better understanding of this complicated rupture process and of the seismic hazard distribution of this earthquake.

  17. The Quake-Catcher Network: A Community-Led, Strong-Motion Network with Implications for Earthquake Advanced Alert

    Science.gov (United States)

    Cochran, E. S.; Lawrence, J. F.; Christensen, C. M.; Jakka, R. S.; Chung, A. I.

    2009-12-01

    The goal of the Quake-Catcher Network (QCN) is to dramatically increase the number of strong-motion observations by exploiting recent advances in sensing technologies and cyberinfrastructure. Micro-Electro-Mechanical Systems (MEMS) triaxial accelerometers are very low cost ($50-100), interface to any desktop computer via USB cable, and provide high-quality acceleration data. Preliminary shake-table tests show the MEMS accelerometers can record high-fidelity seismic data and provide linear phase and amplitude response over a wide frequency range. Volunteer computing provides a mechanism to expand strong-motion seismology with minimal infrastructure costs, while promoting community participation in science. Volunteer computing also allows for rapid transfer of metadata, such as that used to rapidly determine the magnitude and location of an earthquake, from participating stations. QCN began distributing sensors and software to K-12 schools and the general public in April 2008 and has grown to roughly 1000 stations. Initial analysis shows metadata are received within 1-14 seconds of the observation of a trigger; the larger data latencies are correlated with greater server-station distances. Currently, we are testing a series of triggering algorithms to maximize the number of earthquakes captured while minimizing false triggers. We are also testing algorithms to automatically detect P- and S-wave arrivals in real time. Trigger times, wave amplitude, and station information are currently uploaded to the server for each trigger. Future work will identify additional metadata useful for quickly determining earthquake location and magnitude. The increased strong-motion observations made possible by QCN will greatly augment the capability of seismic networks to quickly estimate the location and magnitude of an earthquake for advanced alert to the public. In addition, the dense waveform observations will provide improved source imaging of a rupture in near-real time.

  18. 3D Dynamic Rupture Simulations Across Interacting Faults: the Mw7.0, 2010, Haiti Earthquake

    Science.gov (United States)

    Douilly, R.; Aochi, H.; Calais, E.; Freed, A. M.; Aagaard, B.

    2014-12-01

    The mechanisms controlling rupture propagation between fault segments during an earthquake are key to the hazard posed by fault systems. Rupture initiating on one fault segment sometimes transfers to a larger fault, resulting in a significant event (e.g., the 2002 M7.9 Denali and 2010 M7.1 Darfield earthquakes). In other cases rupture is confined to the initial segment and does not transfer to nearby faults, resulting in events of moderate magnitude. This was the case for the 1989 M6.9 Loma Prieta and 2010 M7.0 Haiti earthquakes, which initiated on reverse faults abutting a major strike-slip plate-boundary fault but did not propagate onto it. Here we investigate the rupture dynamics of the Haiti earthquake, seeking to understand why rupture propagated across two segments of the Léogâne fault but not onto the adjacent Enriquillo-Plantain Garden Fault, the major 200-km-long plate-boundary fault cutting through southern Haiti. We use a finite-element model to simulate the nucleation and propagation of rupture on the Léogâne fault, varying friction and background stress to determine the parameter set that best explains the observed earthquake sequence. The best-fit simulation is in remarkable agreement with several finite-fault inversions and predicts ground displacement in very good agreement with geodetic and geological observations. The two slip patches inferred from finite-fault inversions are explained by the successive rupture of two fault segments oriented favorably with respect to the rupture propagation, while the geometry of the Enriquillo fault did not allow shear stress to reach failure. Although our simulation replicates the ground deformation seen in the geodetic surface observations, convolving the ground motion with the soil amplification from the microzonation study would be needed to correctly account for the heterogeneity of the PGA throughout the rupture area.

  19. Preparation of Synthetic Earthquake Catalogue and Tsunami Hazard Curves in Marmara Sea using Monte Carlo Simulations

    Science.gov (United States)

    Bayraktar, Başak; Özer Sözdinler, Ceren; Necmioǧlu, Öcal; Meral Özel, Nurcan

    2017-04-01

    The Marmara Sea and its surroundings form one of the most populated areas of Turkey. Many densely populated cities, such as the megacity Istanbul with a population of more than 14 million, as well as large industrial facilities, refineries, ports and harbors, are located along the coasts of the Marmara Sea. The region is highly seismically active, and there has been a wide range of studies of its fault mechanisms, seismicity, earthquakes and triggered tsunamis. Historical documents reveal that the region has experienced many earthquakes and tsunamis in the past: according to Altinok et al. (2011), 35 tsunami events occurred in the Marmara Sea between 330 BC and 1999. As earthquakes are expected in the Marmara Sea with the breaking of segments of the North Anatolian Fault (NAF) in the future, the region should be investigated for the possibility of tsunamis generated by earthquakes with specific return periods. This study aims to perform a probabilistic tsunami hazard analysis for the Marmara Sea. For this purpose, the possible sources of tsunami scenarios are specified by compiling the earthquake catalogues, historical records and scientific studies conducted in the region. After compiling all these data, a synthetic earthquake and tsunami catalogue is prepared using Monte Carlo simulations. For specific return periods, the possible epicenters, rupture lengths, widths and displacements are determined with Monte Carlo simulations, treating the angles of the fault segments as deterministic. For each earthquake of the synthetic catalogue, the tsunami wave heights will be calculated at specific locations along the Marmara Sea. As a further objective, this study will determine tsunami hazard curves for specific locations in the Marmara Sea, including the tsunami wave heights and their probability of exceedance. This work is supported by the SATREPS-MarDim Project (Earthquake and Tsunami Disaster Mitigation in the
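    The Monte Carlo construction of a synthetic catalogue can be sketched as follows; the segment coordinates, magnitude bounds, b-value, and the Wells-and-Coppersmith-style rupture-length scaling are illustrative assumptions rather than the study's actual inputs.

```python
import math
import random

random.seed(42)

def sample_magnitude(b=1.0, m_min=5.0, m_max=7.4):
    """One magnitude drawn from a truncated Gutenberg-Richter
    distribution by inverse-transform sampling."""
    beta = b * math.log(10)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - random.random() * c) / beta

def synthetic_catalogue(n_events, fault_segments):
    """n_events earthquakes with epicentres drawn uniformly along the
    given segment traces ((lon, lat) end points)."""
    cat = []
    for _ in range(n_events):
        (x0, y0), (x1, y1) = random.choice(fault_segments)
        t = random.random()  # position along the segment
        mag = sample_magnitude()
        # subsurface rupture-length scaling, Wells & Coppersmith style
        length_km = 10 ** (-2.44 + 0.59 * mag)
        cat.append({"epicentre": (x0 + t * (x1 - x0), y0 + t * (y1 - y0)),
                    "mag": mag, "length_km": length_km})
    return cat

# Two hypothetical NAF-like segment traces in the Marmara Sea
naf_segments = [((27.0, 40.72), (27.9, 40.75)), ((28.0, 40.80), (29.0, 40.87))]
cat = synthetic_catalogue(1000, naf_segments)
print(min(e["mag"] for e in cat), max(e["mag"] for e in cat))
```

    Each synthetic event would then feed a tsunami simulation, and exceedance rates of wave height at a coastal site would be accumulated over the whole catalogue to build the hazard curve.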

  20. Dynamic simulation of regulatory networks using SQUAD

    Directory of Open Access Journals (Sweden)

    Xenarios Ioannis

    2007-11-01

    Full Text Available Abstract Background The ambition of most molecular biologists is to understand the intricate network of molecular interactions that control biological systems. As scientists uncover the components and the connectivity of these networks, it becomes possible to study their dynamical behavior as a whole and to discover the specific role of each component. Since the behavior of a network is by no means intuitive, computational models are needed to understand its behavior and to make predictions about it. Unfortunately, most current computational models describe small networks due to the scarcity of available kinetic data. To overcome this problem, we previously published a methodology to convert a signaling network into a dynamical system, even in the total absence of kinetic information. In this paper we present a software implementation of that methodology. Results We developed SQUAD, software for the dynamic simulation of signaling networks using the standardized qualitative dynamical systems approach. SQUAD converts the network into a discrete dynamical system and uses a binary-decision-diagram algorithm to identify all the steady states of the system. The software then creates a continuous dynamical system and localizes its steady states, which lie near the steady states of the discrete system. The software permits simulations on the continuous system, allowing the modification of several parameters. Importantly, SQUAD includes a framework for perturbing networks in a manner similar to experimental laboratory protocols, for example by activating receptors or knocking out molecular components. Using this software we have been able to successfully reproduce the behavior of the regulatory network implicated in T-helper cell differentiation. Conclusion The simulation of regulatory networks aims at predicting the behavior of a whole system when subject
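    The discrete stage of this approach can be illustrated with a toy Boolean network; the two-gene mutual-inhibition wiring below is a hypothetical example, and the brute-force enumeration stands in for SQUAD's binary-decision-diagram search, which scales to much larger systems.

```python
import itertools

# Toy mutual-inhibition switch (hypothetical wiring): each gene
# activates itself and represses the other.
activators = {"A": ["A"], "B": ["B"]}
inhibitors = {"A": ["B"], "B": ["A"]}
nodes = sorted(activators)

def step(state):
    """Synchronous Boolean update: a node is ON iff some activator
    is ON and no inhibitor is ON."""
    return {n: any(state[a] for a in activators[n])
               and not any(state[i] for i in inhibitors[n])
            for n in nodes}

def steady_states():
    """Exhaustively enumerate fixed points of the discrete system;
    SQUAD does this symbolically with binary decision diagrams."""
    fixed = []
    for bits in itertools.product([False, True], repeat=len(nodes)):
        state = dict(zip(nodes, bits))
        if step(state) == state:
            fixed.append(state)
    return fixed

for s in steady_states():
    print({n: int(v) for n, v in s.items()})
# three fixed points: both genes OFF, and the two exclusive ON states
```

    In SQUAD these discrete fixed points then seed the search for nearby steady states of the continuous system.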

  1. The Italian National Seismic Network and the earthquake and tsunami monitoring and surveillance systems

    Directory of Open Access Journals (Sweden)

    A. Michelini

    2016-11-01

    Full Text Available The Istituto Nazionale di Geofisica e Vulcanologia (INGV) is an Italian research institution focused on the Earth sciences. INGV runs the Italian National Seismic Network (Rete Sismica Nazionale, RSN) and other national-scale networks for monitoring earthquakes and tsunamis as part of the National Civil Protection System coordinated by the Italian Department of Civil Protection (Dipartimento di Protezione Civile, DPC). The RSN is composed of about 400 stations, mainly broadband, installed in the country and in the surrounding regions; about 110 stations also feature co-located strong-motion instruments, and about 180 have GPS receivers and belong to the National GPS network (Rete Integrata Nazionale GPS, RING). The data acquisition system was designed to accomplish, in near real time, automatic earthquake detection, hypocenter and magnitude determination, moment tensors, shake maps and other products of interest to the DPC. Database archiving of all parametric results is closely linked to the existing procedures of the INGV seismic monitoring environment and surveillance procedures. INGV is one of the primary nodes of ORFEUS (Observatories & Research Facilities for European Seismology) EIDA (European Integrated Data Archive) for the archiving and distribution of continuous, quality-checked seismic data. The strong-motion network data are archived and distributed both in EIDA and in event-based archives; GPS data from the RING network are also archived, analyzed and distributed at INGV. Overall, the Italian earthquake surveillance service provides, in quasi real time, hypocenter parameters to the DPC. These are then revised routinely by the analysts of the Italian Seismic Bulletin (Bollettino Sismico Italiano, BSI). The results are published on the web and are available to both the scientific community and the general public. INGV surveillance also includes a pre-operational tsunami alert service, since INGV is one of the Tsunami Service providers of

  2. The Quake-Catcher Network: Improving Earthquake Strong Motion Observations Through Community Engagement

    Science.gov (United States)

    Cochran, E. S.; Lawrence, J. F.; Christensen, C. M.; Chung, A. I.; Neighbors, C.; Saltzman, J.

    2010-12-01

    The Quake-Catcher Network (QCN) involves the community in strong-motion data collection by utilizing volunteer computing techniques and low-cost MEMS accelerometers. Volunteer computing provides a mechanism to expand strong-motion seismology with minimal infrastructure costs, while promoting community participation in science. Micro-Electro-Mechanical Systems (MEMS) triaxial accelerometers can be attached to a desktop computer via USB and are internal to many laptops. Preliminary shake-table tests show the MEMS accelerometers can record high-quality seismic data with an instrument response similar to research-grade strong-motion sensors. QCN began distributing sensors and software to K-12 schools and the general public in April 2008 and has grown to roughly 1500 stations worldwide. We also recently tested whether sensors could be quickly deployed as part of a Rapid Aftershock Mobilization Program (RAMP) following the 2010 M8.8 Maule, Chile earthquake. Volunteers are recruited through media reports, web-based sensor request forms, as well as social networking sites. Using the data collected to date, we examine whether a distributed sensing network can provide valuable seismic data for earthquake detection and characterization while promoting community participation in earthquake science. We utilize client-side triggering algorithms to determine when significant ground shaking occurs, and this metadata is sent to the main QCN server. On average, trigger metadata are received within 1-10 seconds of the observation of a trigger; the larger data latencies are correlated with greater server-station distances. When triggers are detected, we determine whether they correlate with others in the network using spatial and temporal clustering of incoming trigger information. If a minimum number of triggers is detected, a QCN event is declared and an initial earthquake location and magnitude are estimated.
Initial analysis suggests that the estimated locations and magnitudes are
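    The spatial and temporal clustering of triggers can be sketched as follows; the thresholds (100 km, 30 s, four triggers), the greedy single-seed clustering, and the centroid location are illustrative assumptions, not QCN's production algorithm.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2.0 * 6371.0 * asin(sqrt(h))

def declare_event(triggers, dmax_km=100.0, tmax_s=30.0, min_triggers=4):
    """Greedy spatio-temporal clustering: declare an event when at least
    min_triggers triggers fall within tmax_s seconds and dmax_km of a
    seed trigger; the cluster centroid is a crude first location."""
    triggers = sorted(triggers, key=lambda tr: tr["t"])
    for i, seed in enumerate(triggers):
        cluster = [seed]
        for other in triggers[i + 1:]:
            if other["t"] - seed["t"] > tmax_s:
                break
            if haversine_km(seed["loc"], other["loc"]) <= dmax_km:
                cluster.append(other)
        if len(cluster) >= min_triggers:
            lat = sum(tr["loc"][0] for tr in cluster) / len(cluster)
            lon = sum(tr["loc"][1] for tr in cluster) / len(cluster)
            return {"loc": (lat, lon), "n": len(cluster)}
    return None

triggers = [
    {"t": 0.0, "loc": (37.00, -122.00)},
    {"t": 2.1, "loc": (37.10, -121.90)},
    {"t": 3.4, "loc": (36.95, -122.10)},
    {"t": 5.0, "loc": (37.05, -121.95)},
    {"t": 400.0, "loc": (40.00, -120.00)},  # unrelated late trigger
]
print(declare_event(triggers))
```

    A production system would also weight triggers by amplitude and station quality, and refine the location once waveforms arrive.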

  3. Efficient simulation of a tandem Jackson network

    NARCIS (Netherlands)

    Kroese, Dirk; Nicola, V.F.

    2002-01-01

    The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds

  4. LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR

    Science.gov (United States)

    Gibson, J.

    1994-01-01

    The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high-speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze the performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution, and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6. It consists of two

  5. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    Full Text Available The structure and material properties of polymer networks can depend sensitively on changes in the environment. There has been a great deal of progress in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of detail. Macroscopic theories that describe the network systems with the help of continuous fields are suited to studying effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of self-consistent field theory, which includes the interactions, connectivity and entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method, which allows the study of the formation of nanodomains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example of microscopic modeling of stimuli-sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  6. Evaluation of Seismic Rupture Models for the 2011 Tohoku-Oki Earthquake Using Tsunami Simulation

    Directory of Open Access Journals (Sweden)

    Ming-Da Chiou

    2013-01-01

    Full Text Available Developing a realistic, three-dimensional rupture model of a large offshore earthquake is difficult to accomplish directly through band-limited ground-motion observations. A potential indirect method is to use tsunami simulation to verify the rupture model in reverse, because the initial conditions of the associated tsunami are set by the coseismic seafloor displacement, which correlates with the rupture pattern along the main faulting. In this study, five well-developed rupture models for the 2011 Tohoku-Oki earthquake were adopted to evaluate differences in the simulated tsunamis and the various rupture asperities. The leading wave of the simulated tsunami triggered by the seafloor displacement of the Yamazaki et al. (2011) model resulted in the smallest root-mean-squared difference (~0.082 m on average) from the records of the eight DART (Deep-ocean Assessment and Reporting of Tsunamis) stations. This indicates that the main seismic rupture during the 2011 Tohoku earthquake occurred as a large shallow slip within a narrow zone adjacent to the Japan Trench. This study also quantified the influences of ocean stratification and tides, which are normally overlooked in tsunami simulations. The discrepancy between the simulations with and without stratification was less than 5% of the first peak wave height at the eight DART stations. The simulations run with and without the presence of tides resulted in a ~1% discrepancy in the height of the leading wave. Because simulations accounting for tides and stratification are time-consuming and their influences are negligible, particularly for the first tsunami wave, the two factors can be ignored in tsunami predictions for practical purposes.
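    The model ranking described above amounts to averaging root-mean-squared differences between simulated and recorded waveforms over the stations; the station names, waveforms, and model labels below are hypothetical, and a real comparison would first align and resample the series.

```python
import math

def rmsd(sim, obs):
    """Root-mean-square difference between a simulated and a recorded
    tsunami waveform sampled at the same instants."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def rank_models(models, records):
    """Average each rupture model's RMSD over all DART-style stations
    and sort best-first."""
    scores = {name: sum(rmsd(waves[st], records[st]) for st in records)
                    / len(records)
              for name, waves in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical two-station, two-model comparison (wave heights in meters)
records = {"D1": [0.0, 0.1, 0.3, 0.2], "D2": [0.0, 0.05, 0.2, 0.1]}
models = {
    "M_a": {"D1": [0.0, 0.12, 0.28, 0.22], "D2": [0.0, 0.06, 0.19, 0.12]},
    "M_b": {"D1": [0.0, 0.3, 0.6, 0.4], "D2": [0.0, 0.2, 0.5, 0.3]},
}
best, score = rank_models(models, records)[0]
print(best)  # "M_a" fits the records more closely
```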

  7. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.

  8. Simulating Autonomous Telecommunication Networks for Space Exploration

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.

    2008-01-01

    Currently, most interplanetary telecommunication systems require human intervention for command and control. However, considering the range from near-Earth to deep-space missions, combined with the increase in the number of nodes and advancements in processing capabilities, the benefits of communication autonomy will be immense. Likewise, greater mission science autonomy brings the need for unscheduled, unpredictable communication and network routing. While the terrestrial Internet protocols are highly developed, their suitability for space exploration has been questioned. JPL has developed the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) tool to help characterize network designs and protocols. The results will allow future mission planners to better understand the trade-offs of communication protocols. This paper discusses various issues with interplanetary networks and presents simulation results for interplanetary networking protocols.

  9. Validation of simulated earthquake ground motions based on evolution of intensity and frequency content

    Science.gov (United States)

    Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin

    2015-01-01

    Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
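    The evolving-intensity metric and the two-component error vector can be illustrated with a cumulative Arias-type integral; the normalization and the average/shape decomposition below are hypothetical stand-ins for the exact definitions in the paper.

```python
import math

def cumulative_intensity(acc, dt):
    """Running Arias-type integral of acceleration squared: the
    evolving-intensity validation metric in discrete form."""
    out, total = [], 0.0
    for a in acc:
        total += a * a * dt
        out.append(total)
    return out

def error_vector(metric_rec, metric_sim):
    """Two-component error: average level difference and shape
    difference of the residual, normalized by the recorded metric."""
    n = len(metric_rec)
    scale = max(metric_rec) or 1.0
    resid = [(s - r) / scale for r, s in zip(metric_rec, metric_sim)]
    e_avg = sum(resid) / n
    e_shape = math.sqrt(sum((x - e_avg) ** 2 for x in resid) / n)
    return e_avg, e_shape

# Hypothetical recorded motion and a simulated motion 10% weaker
dt = 0.01
rec = [math.exp(-0.5 * i * dt) * math.sin(4 * math.pi * i * dt)
       for i in range(1000)]
sim = [0.9 * a for a in rec]
I_rec = cumulative_intensity(rec, dt)
I_sim = cumulative_intensity(sim, dt)
print(error_vector(I_rec, I_sim))  # negative e_avg: simulation underpredicts
```

    Analogous error vectors for the evolving frequency and bandwidth metrics would complete the comparison of a simulated and recorded ground-motion pair.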

  10. Simulated tsunami inundation for a range of Cascadia megathrust earthquake scenarios at Bandon, Oregon, USA

    Science.gov (United States)

    Witter, Robert C.; Zhang, Yinglong J.; Wang, Kelin; Priest, George R.; Goldfinger, Chris; Stimely, Laura; English, John T.; Ferro, Paul A.

    2013-01-01

    Characterizations of tsunami hazards along the Cascadia subduction zone hinge on uncertainties in the megathrust rupture models used for simulating tsunami inundation. To explore these uncertainties, we constructed 15 megathrust earthquake scenarios using rupture models that supply the initial conditions for tsunami simulations at Bandon, Oregon. Tsunami inundation varies with the amount and distribution of fault slip assigned to the rupture models, including models where slip is partitioned to a splay fault in the accretionary wedge and models that vary the updip limit of slip on a buried fault. Constraints on fault slip come from onshore and offshore paleoseismological evidence. We rank each rupture model using a logic tree that evaluates a model's consistency with geological and geophysical data. The scenarios provide inputs to a hydrodynamic model, SELFE, used to simulate tsunami generation, propagation, and inundation on unstructured grids for earthquakes with 9–44 m of slip and Mw 8.7–9.2. Simulated tsunami inundation agrees with the sparse deposits left by the A.D. 1700 and older tsunamis. Tsunami simulations for large (22–30 m slip) and medium (14–19 m slip) splay fault scenarios encompass 80%–95% of all inundation scenarios and provide reasonable guidelines for land-use planning and coastal development. The maximum tsunami inundation simulated for the greatest splay fault scenario (36–44 m slip) can help guide the development of local tsunami evacuation zones.

  11. PREDICTION OF SITE RESPONSE SPECTRUM UNDER EARTHQUAKE VIBRATION USING AN OPTIMIZED DEVELOPED ARTIFICIAL NEURAL NETWORK MODEL

    Directory of Open Access Journals (Sweden)

    Reza Esmaeilabadi

    2016-06-01

    Full Text Available The site response spectrum is one of the key factors in determining the maximum acceleration and displacement, as well as in analyzing structural behavior, during earthquake vibrations. The main objective of this paper is to develop an optimized model based on an artificial neural network (ANN), using five different training algorithms, to predict the nonlinear site response spectrum under Silakhor earthquake vibrations. The model output was tested for a specified area in western Iran. The performance and quality of the optimized model under all training algorithms were examined by various statistical, analytical and graphical criteria, as well as by comparison with numerical methods. The observed agreement in the results indicates a feasible and satisfactory alternative engineering method for predicting nonlinear site response.

  12. Semantic and Social Networks Comparison for the Haiti Earthquake Relief Operations from APAN Data Sources using Lexical Link Analysis (LLA)

    Science.gov (United States)

    2012-06-01

    Report documentation fragment (2012). The study compares semantic and social networks for the Haiti earthquake relief operations, built from APAN data sources using Lexical Link Analysis (LLA). The recoverable text indicates that the data revealed collaborations among military, government, and civil stakeholders in the crisis via social networks, that the content of those interactions was also recorded, and that the semantic networks suggest more potential collaboration when compared to the social networks (Section 3.3.2).

  13. Earthquake location determination using data from DOMERAPI and BMKG seismic networks: A preliminary result of DOMERAPI project

    Energy Technology Data Exchange (ETDEWEB)

    Ramdhan, Mohamad [Study Program of Earth Science, Institut Teknologi Bandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia); Agency for Meteorology, Climatology and Geophysics of Indonesia (BMKG) Jl. Angkasa 1 No. 2 Kemayoran, Jakarta Pusat, 10720 (Indonesia); Nugraha, Andri Dian; Widiyantoro, Sri [Global Geophysics Research Group, Faculty of Mining and Petroleum Engineering, Institut TeknologiBandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia); Métaxian, Jean-Philippe [Institut de Recherche pour le Développement (IRD) (France); Valencia, Ayunda Aulia, E-mail: mohamad.ramdhan@bmkg.go.id [Study Program of Geophysical Engineering, Institut Teknologi Bandung, Jl. Ganesa 10, Bandung, 40132 (Indonesia)

    2015-04-24

    The DOMERAPI project has been conducted to comprehensively study the internal structure of Merapi volcano, especially the deep structural features beneath the volcano. The DOMERAPI earthquake monitoring network consists of 46 broadband seismometers installed around the Merapi volcano. Earthquake hypocenter determination is a very important step for further studies, such as hypocenter relocation and seismic tomographic imaging. Ray paths from earthquake events occurring outside the Merapi region can be utilized to delineate the deep magma structure. Earthquakes occurring outside the DOMERAPI seismic network produce an azimuthal gap greater than 180°. Owing to this situation, stations from the BMKG seismic network can be used jointly to minimize the azimuthal gap. We identified earthquake events manually and carefully, and then picked arrival times of P and S waves. The data from the DOMERAPI seismic network were combined with the BMKG data catalogue to determine earthquake events outside the Merapi region. For future work, we will also use the BPPTKG (Center for Research and Development of Geological Disaster Technology) data catalogue in order to study shallow structures beneath the Merapi volcano. The application of all data catalogues will provide good information as input for further advanced studies and volcano hazard mitigation.

  14. Earthquake location determination using data from DOMERAPI and BMKG seismic networks: A preliminary result of DOMERAPI project

    International Nuclear Information System (INIS)

    Ramdhan, Mohamad; Nugraha, Andri Dian; Widiyantoro, Sri; Métaxian, Jean-Philippe; Valencia, Ayunda Aulia

    2015-01-01

    The DOMERAPI project has been conducted to comprehensively study the internal structure of Merapi volcano, especially the deep structural features beneath the volcano. The DOMERAPI earthquake monitoring network consists of 46 broadband seismometers installed around the Merapi volcano. Earthquake hypocenter determination is a very important step for further studies, such as hypocenter relocation and seismic tomographic imaging. Ray paths from earthquake events occurring outside the Merapi region can be utilized to delineate the deep magma structure. Earthquakes occurring outside the DOMERAPI seismic network produce an azimuthal gap greater than 180°. Owing to this situation, stations from the BMKG seismic network can be used jointly to minimize the azimuthal gap. We identified earthquake events manually and carefully, and then picked arrival times of P and S waves. The data from the DOMERAPI seismic network were combined with the BMKG data catalogue to determine earthquake events outside the Merapi region. For future work, we will also use the BPPTKG (Center for Research and Development of Geological Disaster Technology) data catalogue in order to study shallow structures beneath the Merapi volcano. The application of all data catalogues will provide good information as input for further advanced studies and volcano hazard mitigation.

  15. Time-history simulation of civil architecture earthquake disaster relief- based on the three-dimensional dynamic finite element method

    Directory of Open Access Journals (Sweden)

    Liu Bing

    2014-10-01

    Full Text Available Earthquake action is the main external factor influencing the long-term safe operation of civil construction, especially of high-rise buildings. Applying the time-history method to simulate the earthquake response of the surrounding rock of civil-construction foundations is an effective approach for the anti-seismic study of civil buildings. Therefore, this paper develops a three-dimensional dynamic finite element numerical simulation system for civil building earthquake disasters. The system adopts the explicit central difference method. The strengthening characteristics of materials under high strain rates and the damage characteristics of surrounding rock under cyclic loading are considered. A dynamic constitutive model of rock mass suitable for the aseismic analysis of civil buildings is then put forward. Finally, through a time-history simulation of the earthquake disaster at the Shenzhen Children’s Palace, the reliability and practicability of the system program are verified on a practical engineering problem.
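    The explicit central difference method the system adopts can be illustrated on a single-degree-of-freedom oscillator under base excitation; this is a deliberately simplified stand-in for the paper's 3D finite element system, and all parameter values below are invented:

```python
import numpy as np

def central_difference_sdof(m, c, k, ag, dt):
    """Explicit central-difference time stepping for a single-degree-of-
    freedom oscillator m*u'' + c*u' + k*u = -m*ag(t), starting at rest
    (u(0) = u'(0) = 0). Illustrative only."""
    n = len(ag)
    u = np.zeros(n)
    p = -m * np.asarray(ag)                # effective earthquake load
    a0 = (p[0] - k * u[0]) / m             # initial acceleration
    u_prev = u[0] - 0.5 * dt**2 * a0       # fictitious step u_{-1}
    keff = m / dt**2 + c / (2 * dt)
    for i in range(n - 1):
        rhs = (p[i] - (k - 2 * m / dt**2) * u[i]
               - (m / dt**2 - c / (2 * dt)) * u_prev)
        u_prev, u[i + 1] = u[i], rhs / keff
    return u

# a short synthetic pulse; dt must stay below the stability limit 2/omega_n
dt = 0.01
ag = np.sin(2 * np.pi * 2.0 * np.arange(0.0, 1.0, dt))
u = central_difference_sdof(m=1.0, c=0.1, k=400.0, ag=ag, dt=dt)
```

    Being explicit, the scheme needs no matrix factorization at each step, but it is only conditionally stable, which is why such systems keep the time step below the highest-frequency limit of the mesh.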

  16. Kinematic Earthquake Ground‐Motion Simulations on Listric Normal Faults

    KAUST Repository

    Passone, Luca

    2017-11-28

    Complex finite-faulting source processes have important consequences for near-source ground motions, but empirical ground-motion prediction equations still lack near-source data and hence cannot fully capture near-fault shaking effects. Using a simulation-based approach, we study the effects of specific source parameterizations on near-field ground motions where empirical data are limited. Here, we investigate the effects of fault listricity through near-field kinematic ground-motion simulations. Listric faults are defined as curved faults in which dip decreases with depth, resulting in a concave upward profile. The listric profiles used in this article are built by applying a specific shape function and varying the initial dip and the degree of listricity. Furthermore, we consider variable rupture speed and slip distribution to generate ensembles of kinematic source models. These ensembles are then used in a generalized 3D finite-difference method to compute synthetic seismograms; the corresponding shaking levels are then compared in terms of peak ground velocities (PGVs) to quantify the effects of breaking fault planarity. Our results show two general features: (1) as listricity increases, the PGVs decrease on the footwall and increase on the hanging wall, and (2) constructive interference of seismic waves emanating from the listric fault causes PGVs over two times higher than those observed for the planar fault. Our results are relevant for seismic hazard assessment in near-fault areas where observations are scarce, such as the listric Campotosto fault (Italy) located in an active seismic area under a dam.
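    The listric geometry described above (dip decreasing with depth) can be sketched numerically. Since the specific shape function used in the paper is not reproduced here, an exponential dip decay is assumed purely for illustration, with invented parameter values:

```python
import math

def listric_profile(dip0_deg, decay_km, zmax_km, dz=0.1):
    """Fault trace (x, z) for a listric fault whose dip decays
    exponentially with depth: dip(z) = dip0 * exp(-z / decay).
    The exponential form is an assumed stand-in for the paper's
    shape function; it still yields a concave-upward profile."""
    x, z, pts = 0.0, 0.0, [(0.0, 0.0)]
    while z < zmax_km:
        dip = math.radians(dip0_deg) * math.exp(-z / decay_km)
        x += dz / math.tan(dip)   # flatter dip -> larger horizontal step
        z += dz
        pts.append((x, z))
    return pts

pts = listric_profile(dip0_deg=60, decay_km=10, zmax_km=15)
```

    Varying `dip0_deg` (initial dip) and `decay_km` (degree of listricity) generates the family of curved fault planes whose hanging-wall/footwall PGV asymmetry the study quantifies.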

  17. Three-dimensional ground-motion simulations of earthquakes for the Hanford area, Washington

    Science.gov (United States)

    Frankel, Arthur; Thorne, Paul; Rohay, Alan

    2014-01-01

    This report describes the results of ground-motion simulations of earthquakes using three-dimensional (3D) and one-dimensional (1D) crustal models conducted for the probabilistic seismic hazard assessment (PSHA) of the Hanford facility, Washington, under the Senior Seismic Hazard Analysis Committee (SSHAC) guidelines. The first portion of this report demonstrates that the 3D seismic velocity model for the area produces synthetic seismograms with characteristics (spectral response values, duration) that better match those of the observed recordings of local earthquakes, compared to a 1D model with horizontal layers. The second part of the report compares the response spectra of synthetics from 3D and 1D models for moment magnitude (M) 6.6–6.8 earthquakes on three nearby faults and for a dipping plane wave source meant to approximate regional S-waves from a Cascadia great earthquake. The 1D models are specific to each site used for the PSHA. The use of the 3D model produces spectral response accelerations at periods of 0.5–2.0 seconds as much as a factor of 4.5 greater than those from the 1D models for the crustal fault sources. The spectral accelerations of the 3D synthetics for the Cascadia plane-wave source are as much as a factor of 9 greater than those from the 1D models. The differences between the spectral accelerations for the 3D and 1D models are most pronounced for sites with thicker supra-basalt sediments and for stations with earthquakes on the Rattlesnake Hills fault and for the Cascadia plane-wave source.

  18. The numerical simulation on ionospheric perturbations in electric field before large earthquakes

    Directory of Open Access Journals (Sweden)

    S. F. Zhao

    2014-12-01

    Full Text Available Many observational results have shown electromagnetic abnormality in the ionosphere before large earthquakes. Theoretical simulation can help us to understand the internal mechanism of these anomalous electromagnetic signals originating from seismic regions. In this paper, the horizontal and vertical components of the electric and magnetic field at the topside ionosphere are simulated using the full wave method, which is based on an improved transfer matrix method in the lossy, anisotropic, horizontally stratified ionosphere. Taking into account two earthquakes with electric field perturbations recorded by the DEMETER satellite, the numerical results reveal that the propagation and penetration of ULF (ultra-low-frequency electromagnetic waves into the ionosphere are related to the spatial distribution of electron and ion densities at different times and locations, with the ion density having less effect than the electron density on the field intensity. The minimum electric and magnetic field intensities excited by earthquakes that satellites can detect at current detection capability have also been calculated for different frequency signals; lower-frequency waves are easier to detect.

  19. Ground motion-simulations of 1811-1812 New Madrid earthquakes, central United States

    Science.gov (United States)

    Ramirez-Guzman, L.; Graves, Robert; Olsen, Kim B.; Boyd, Oliver; Cramer, Chris H.; Hartzell, Stephen; Ni, Sidao; Somerville, Paul G.; Williams, Robert; Zhong, Jinquan

    2015-01-01

    We performed a suite of numerical simulations based on the 1811–1812 New Madrid seismic zone (NMSZ) earthquakes, which demonstrate the importance of 3D geologic structure and rupture directivity on the ground‐motion response throughout a broad region of the central United States (CUS) for these events. Our simulation set consists of 20 hypothetical earthquakes located along two faults associated with the current seismicity trends in the NMSZ. The hypothetical scenarios range in magnitude from M 7.0 to 7.7 and consider various epicenters, slip distributions, and rupture characterization approaches. The low‐frequency component of our simulations was computed deterministically up to a frequency of 1 Hz using a regional 3D seismic velocity model and was combined with higher‐frequency motions calculated for a 1D medium to generate broadband synthetics (0–40 Hz in some cases). For strike‐slip earthquakes located on the southwest–northeast‐striking NMSZ axial arm of seismicity, our simulations show 2–10 s period energy channeling along the trend of the Reelfoot rift and focusing strong shaking northeast toward Paducah, Kentucky, and Evansville, Indiana, and southwest toward Little Rock, Arkansas. These waveguide effects are further accentuated by rupture directivity such that an event with a western epicenter creates strong amplification toward the northeast, whereas an eastern epicenter creates strong amplification toward the southwest. These effects are not as prevalent for simulations on the reverse‐mechanism Reelfoot fault, and large peak ground velocities (>40  cm/s) are typically confined to the near‐source region along the up‐dip projection of the fault. Nonetheless, these basin response and rupture directivity effects have a significant impact on the pattern and level of the estimated intensities, which leads to additional uncertainty not previously considered in magnitude estimates of the 1811–1812 sequence based only on historical

  20. Non-hydrostatic simulation of tsunamis: application to the April 2014 Iquique earthquake

    Science.gov (United States)

    Aïssiouene, Nora; Bristeau, Marie-Odile; Godlewski, Edwige; Mangeney, Anne; Parés, Carlos; Sainte-Marie, Jacques; Vallée, Martin

    2017-04-01

    The quantification of non-hydrostatic effects in tsunami modelling is still an open issue. We present here a new numerical method to solve the two-dimensional dispersive shallow water system with topography proposed recently by [3]. This model is a depth-averaged Euler system and takes into account a non-hydrostatic pressure. Interestingly, this model is close to, but not the same as, the Green-Naghdi model. An incompressible system has to be solved to find the numerical solution of this model. The solution method [1,2] is based on a prediction-correction scheme initially introduced by Chorin and Temam [4] for the Navier-Stokes system. The prediction part leads to solving a shallow water system, for which we use finite volume methods, while the correction part leads to solving a mixed problem in velocity and pressure. For the correction part, we apply a finite element method with compatible spaces on unstructured grids. Several numerical tests are performed to evaluate the efficiency of the proposed method; in particular, comparisons with analytical solutions are given. Finally, we simulate the tsunami generated by the Iquique earthquake that occurred on 1 April 2014 and compare the simulation with the tsunami data at two DART stations for both hydrostatic and non-hydrostatic models. [1] N. Aissiouene, M.-O. Bristeau, E. Godlewski, and J. Sainte-Marie. A combined finite volume - finite element scheme for a dispersive shallow water system. Networks and Heterogeneous Media, 11(1):1-27, 2016. [2] N. Aissiouene, M.-O. Bristeau, E. Godlewski, and J. Sainte-Marie. A robust and stable numerical scheme for a depth-averaged Euler system. Submitted, 2016. [3] M.-O. Bristeau, A. Mangeney, J. Sainte-Marie, and N. Seguin. An energy-consistent depth-averaged Euler system: Derivation and properties. Discrete and Continuous Dynamical Systems - Series B, 20(4):961-988, 2015. [4] R. Rannacher. On Chorin's projection method for the incompressible Navier-Stokes equations. In G. Heywood, John, K. Masuda, R

  1. Insuring against earthquakes: simulating the cost-effectiveness of disaster preparedness.

    Science.gov (United States)

    de Hoop, Thomas; Ruben, Ruerd

    2010-04-01

    Ex-ante measures to improve risk preparedness for natural disasters are generally considered to be more effective than ex-post measures. Nevertheless, most resources are allocated after an event in geographical areas that are vulnerable to natural disasters. This paper analyses the cost-effectiveness of ex-ante adaptation measures in the wake of earthquakes and provides an assessment of the future role of private and public agencies in disaster risk management. The study uses a simulation model approach to evaluate consumption losses after earthquakes under different scenarios of intervention. Particular attention is given to the role of activity diversification measures in enhancing disaster preparedness and the contributions of (targeted) microcredit and education programmes for reconstruction following a disaster. Whereas the former measures are far more cost-effective, missing markets and perverse incentives tend to make ex-post measures a preferred option, thus occasioning underinvestment in ex-ante adaptation initiatives.

  2. An Earthquake Source Ontology for Seismic Hazard Analysis and Ground Motion Simulation

    Science.gov (United States)

    Zechar, J. D.; Jordan, T. H.; Gil, Y.; Ratnakar, V.

    2005-12-01

    Representation of the earthquake source is an important element in seismic hazard analysis and earthquake simulations. Source models span a range of conceptual complexity - from simple time-independent point sources to extended fault slip distributions. Further computational complexity arises because the seismological community has established so many source description formats and variations thereof, which means that conceptually equivalent source models are often expressed in different ways. Despite the resultant practical difficulties, there exists a rich semantic vocabulary for working with earthquake sources. For these reasons, we feel it is appropriate to create a semantic model of earthquake sources using an ontology, a computer science tool from the field of knowledge representation. Unlike the domain of most ontology work to date, earthquake sources can be described by a very precise mathematical framework. Another unique aspect of developing such an ontology is that earthquake sources are often used as computational objects. A seismologist generally wants more than to simply construct a source and have it be well-formed and properly described; additionally, the source will be used for performing calculations. Representation and manipulation of complex mathematical objects present a challenge to the ontology development community. In order to enable simulations involving many different types of source models, we have completed preliminary development of a seismic point source ontology. The use of an ontology to represent knowledge provides machine interpretability and the ability to validate logical consistency and completeness. Our ontology, encoded using the OWL Web Ontology Language - a standard from the World Wide Web Consortium - contains the conceptual definitions and relationships necessary for source translation services. For example, specification of strike, dip, rake, and seismic moment will automatically translate into a double-couple representation.
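    The translation mentioned above, from strike, dip, rake, and seismic moment to a double-couple moment tensor, is a standard formula (Aki & Richards convention: x = north, y = east, z = down); a sketch:

```python
import math

def double_couple(strike, dip, rake, m0=1.0):
    """Symmetric moment tensor for a double-couple point source given
    strike/dip/rake in degrees and scalar moment m0
    (Aki & Richards convention: x = north, y = east, z = down)."""
    f, d, l = (math.radians(a) for a in (strike, dip, rake))
    mxx = -m0 * (math.sin(d) * math.cos(l) * math.sin(2 * f)
                 + math.sin(2 * d) * math.sin(l) * math.sin(f) ** 2)
    mxy = m0 * (math.sin(d) * math.cos(l) * math.cos(2 * f)
                + 0.5 * math.sin(2 * d) * math.sin(l) * math.sin(2 * f))
    mxz = -m0 * (math.cos(d) * math.cos(l) * math.cos(f)
                 + math.cos(2 * d) * math.sin(l) * math.sin(f))
    myy = m0 * (math.sin(d) * math.cos(l) * math.sin(2 * f)
                - math.sin(2 * d) * math.sin(l) * math.cos(f) ** 2)
    myz = -m0 * (math.cos(d) * math.cos(l) * math.sin(f)
                 - math.cos(2 * d) * math.sin(l) * math.cos(f))
    mzz = m0 * math.sin(2 * d) * math.sin(l)
    return [[mxx, mxy, mxz], [mxy, myy, myz], [mxz, myz, mzz]]

m = double_couple(strike=30, dip=60, rake=90)  # a pure reverse mechanism
```

    A double-couple tensor is traceless (no volume change), which is the kind of mathematical consistency property an ontology-backed translation service could validate automatically.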

  3. Neural network based tomographic approach to detect earthquake-related ionospheric anomalies

    Directory of Open Access Journals (Sweden)

    S. Hirooka

    2011-08-01

    Full Text Available A tomographic approach is used to investigate the fine structure of electron density in the ionosphere. In the present paper, the Residual Minimization Training Neural Network (RMTNN) method is selected as the ionospheric tomography with which to investigate the detailed structure that may be associated with earthquakes. The 2007 Southern Sumatra earthquake (M = 8.5) was selected because significant decreases in the Total Electron Content (TEC) have been confirmed by GPS and global ionosphere map (GIM) analyses. The results of the RMTNN approach are consistent with those of TEC approaches. With respect to the analyzed earthquake, we observed significant decreases at heights of 250–400 km, especially at 330 km. However, the height that yields the maximum electron density does not change. In the obtained structures, the regions of decrease are located on the southwest and southeast sides of the Integrated Electron Content (IEC) (altitudes in the range of 400–550 km) and on the southern side of the IEC (altitudes in the range of 250–400 km). The global tendency is that the decreased region expands to the east with increasing altitude and concentrates in the Southern Hemisphere over the epicenter. These results indicate that the RMTNN method is applicable to the estimation of ionospheric electron density.

  4. Brief communication "Monitoring ionospheric variations before earthquakes using the vertical and oblique sounding network over China"

    Directory of Open Access Journals (Sweden)

    Z. Wu

    2011-04-01

    Full Text Available The problem of earthquake prediction has stimulated the search for correlations between seismic activity and ionospheric anomalies. Many observations have shown the existence of anomalies in the critical frequency of the ionospheric F-region, foF2, before earthquake onset. Ionospheric sounding has been conducted routinely for more than 60 years in China by the China Research Institute of Radiowave Propagation (CRIRP), which has developed a very powerful ability to observe the ionosphere. In this paper, we briefly describe the anomalous variation of the foF2 before the Ms8.0 Wenchuan earthquake (which occurred on 12 May 2008 at 14:28 LT; 31.00° N, 103.40° E), a sign of the great interest of Chinese researchers in seismo-ionospheric investigation. Furthermore, we introduce the routine work on seismo-ionospheric anomalies by the ground-based high-resolution ionospheric observation (GBHIO) network, comprising 5 vertical and 20 oblique sounding stations.

  5. Rainfall and earthquake-induced landslide susceptibility assessment using GIS and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Y. Li

    2012-08-01

    Full Text Available A GIS-based method for the assessment of landslide susceptibility in a selected area of Qingchuan County in China is proposed using the back-propagation Artificial Neural Network (ANN) model. The landslide inventory was derived from field investigation and aerial photo interpretation: 473 landslides that occurred before the Wenchuan earthquake (considered rainfall-induced landslides (RIL) in this study) and 885 earthquake-induced landslides (EIL) were recorded in the landslide inventory map. To understand the different impacts of rainfall and earthquake on landslide occurrence, we first compared the variations between landslide spatial distribution and conditioning factors. Then, we compared the weight variation of each conditioning factor derived by adjusting the ANN structure and the factor combinations, respectively. Last, the weight of each factor derived from the best prediction model was applied to the entire study area to produce landslide susceptibility maps.

    Results show that slope gradient has the highest weight for landslide susceptibility mapping for both RIL and EIL. The RIL model built with four different factors (slope gradient, elevation, slope height, and distance to the stream) shows the best success rate of 93%; the EIL model built with five different factors (slope gradient, elevation, slope height, distance to the stream, and distance to the fault) has the best success rate of 98%. Furthermore, when the EIL data were used to verify the RIL model, the success rate was 92%; when the RIL data were used to verify the EIL model, the success rate was 53%.

  6. Simulation of developing human neuronal cell networks.

    Science.gov (United States)

    Lenk, Kerstin; Priwitzer, Barbara; Ylä-Outinen, Laura; Tietz, Lukas H B; Narkilahti, Susanna; Hyttinen, Jari A K

    2016-08-30

    The microelectrode array (MEA) is a widely used technique for studying, for example, the functional properties of neuronal networks derived from human embryonic stem cells (hESC-NN). With hESC-NN, we can investigate the earliest developmental stages of neuronal network formation in the human brain. In this paper, we propose an in silico model of maturing hESC-NNs based on a phenomenological model called INEX. We focus on simulations of the development of bursts in hESC-NNs, which are the main feature of neuronal activation patterns. The model was developed with data from recordings of developing hESC-NNs on MEAs, which showed an increase in neuronal activity during the six investigated measurement time points in both the experimental and simulated data. Our simulations suggest that the maturation process of hESC-NNs, resulting in the formation of bursts, can be explained by the development of synapses. Moreover, spike and burst rates both decreased at the last measurement time point, suggesting a pruning of synapses as the weak ones are removed. To conclude, our model reflects the assumption that the interaction between excitatory and inhibitory neurons during the maturation of a neuronal network, and the spontaneous emergence of bursts, are due to increased connectivity caused by the formation of new synapses.

  7. Morphologic analysis and numerical simulation of the earthquake-induced Jiufengershan debris avalanche, central Taiwan

    Science.gov (United States)

    Lin, Chiu-Fen; Chiang, Yi-Lin; Chang, Kuo-Jen

    2013-04-01

    Landslides pose significant threats to communities and infrastructure in Taiwan. Among possible triggering factors, heavy precipitation and earthquakes are the most important. The 21 September 1999 Chi-Chi earthquake, ML = 7.3 with a depth of eight kilometers, was associated with the reactivation of the Chelungpu thrust fault. The earthquake caused massive landslides; the Jiufengershan landslide is one of the most important events. The Jiufengershan landslide area, located on the western limb of the Taanshan syncline, is a typical dip-slope failure. The main rock formation consists of interbedded sandstone and shale. About 43 million m3 of rock mass moved downslope into the Sezikeng river valley and formed three dammed lakes; the landslide caused 39 deaths and remains a serious threat to life and property. This study is divided into two phases. The first phase uses aerial photographs taken before and after the Chi-Chi earthquake to construct Digital Terrain Models (DTMs) of the different periods; the 2 m resolution LiDAR images taken in 2002 are integrated into this study. The precision and accuracy of the aerial triangulation parameters were estimated against the post-landslide LiDAR DSM, and the quality of the aerial-photo-derived DSM was analyzed accordingly. To calculate the landslide cut-and-fill volume and to simulate the landslide dynamic process, the pre-earthquake DSM is adjusted according to the LiDAR DSM. The calculated slide volume and deposit volume are about 39 and 47 million cubic meters, respectively. The second phase simulates the landslide behavior of the Jiufengershan using the 3D Particle Flow Code (PFC3D) in order to acquire the parameters that play an important role. In this research, we tested and analyzed different parameters, such as wall stiffness, particle parameters, pore water pressure, wall friction coefficient, and particle-element bond parameters. Through using the

  8. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
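    The discrete-event simulation at the heart of frameworks like ROSS can be illustrated with a minimal sequential event loop; ROSS itself adds optimistic parallel scheduling with rollback, and the event kinds below are invented for illustration:

```python
import heapq

def simulate(events, handlers, t_end):
    """Minimal sequential discrete-event loop: pop the earliest event,
    run its handler, and schedule any follow-up events it returns.
    `events` is a list of (time, kind) tuples."""
    pq = list(events)
    heapq.heapify(pq)
    log = []
    while pq:
        t, kind = heapq.heappop(pq)
        if t > t_end:
            break
        log.append((t, kind))
        for new in handlers.get(kind, lambda t: [])(t):
            heapq.heappush(pq, new)
    return log

# a hypothetical packet "send" that triggers an "ack" two time units later
handlers = {"send": lambda t: [(t + 2.0, "ack")], "ack": lambda t: []}
log = simulate([(0.0, "send"), (1.0, "send")], handlers, t_end=10.0)
```

    Time advances only from event to event rather than in fixed steps; this is what conservative and optimistic parallel schemes then distribute across processors while preserving the global time order.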

  9. 3-D cell model simulation of the inland earthquake generation pattern in Southwest Japan during the Nankai earthquake cycles in a layered viscoelastic medium

    Science.gov (United States)

    Shikakura, Y.; Fukahata, Y.; Mitsui, N.; Hirahara, K.

    2010-12-01

    In southwest Japan, there are many inland active faults, such as the Median Tectonic Line and the Neodani, Atotsugawa, and Rokko-Awaji faults. The earthquakes on these faults are mainly generated by the east-west compressive stress due to the Pacific plate subduction. However, because the activity of inland earthquakes increases in the period from 50 years before to 10 years after great interplate earthquakes (Hori & Oike, 1999), earthquake generation on these faults is affected by the interplate earthquakes and by the collision of the Izu volcanic arc due to the Philippine Sea (PHS) plate subduction. To evaluate these effects quantitatively, we model the stress accumulation/release processes at the inland active faults in southwest Japan. For this problem, Pollitz & Sacks (1997), Hyodo & Hirahara (2004), and Hirahara (2007) evaluated the viscoelastic effect of great interplate earthquakes at the PHS plate subduction by examining the Coulomb Failure Function change ΔCFF. We here simulate the earthquake generation pattern at inland active faults in southwest Japan by solving the boundary value problem. The governing equations are the slip response function and the friction constitutive law. The boundary conditions are the east-west compressive stress due to the Pacific plate subduction, the interplate earthquakes and collision of the Izu volcanic arc due to the PHS plate subduction, and the geometry of the plate interfaces and inland active faults. We compute the slip response function in an elastic-viscoelastic stratified medium, employing the quasi-static viscoelastic slip response functions for point sources of Fukahata & Matsu’ura (2006). To obtain accurate slip response functions for rectangular sources effectively, we apply the Gauss-Legendre integration scheme. We use approximate solutions under the assumption that the slip response decays exponentially with time, since it is difficult to calculate viscoelastic slip response functions for all time steps. To approximate quasi
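    The Gauss-Legendre scheme mentioned for integrating point-source responses over rectangular sources can be sketched as a 2D tensor-product quadrature; the integrand here is a toy function, not the study's viscoelastic slip response:

```python
import numpy as np

def gauss_legendre_2d(f, x0, x1, y0, y1, n=8):
    """Integrate f(x, y) over a rectangle with an n-point
    Gauss-Legendre rule per axis, as one might do to integrate a
    point-source response over a rectangular fault patch."""
    xi, wi = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (x1 - x0) * xi + 0.5 * (x1 + x0)    # map nodes to [x0, x1]
    y = 0.5 * (y1 - y0) * xi + 0.5 * (y1 + y0)
    jac = 0.25 * (x1 - x0) * (y1 - y0)            # Jacobian of the mapping
    X, Y = np.meshgrid(x, y)
    return jac * np.einsum("i,j,ji->", wi, wi, f(X, Y))

# exact integral of x*y over [0,2]x[0,3] is (2**2/2)*(3**2/2) = 9
val = gauss_legendre_2d(lambda x, y: x * y, 0, 2, 0, 3)
```

    An n-point rule integrates polynomials of degree 2n-1 exactly per axis, so a small n already gives high accuracy for the smooth response functions involved.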

  10. Assessing Urban Streets Network Vulnerability against Earthquake Using GIS - Case Study: 6TH Zone of Tehran

    Science.gov (United States)

    Rastegar, A.

    2017-09-01

    Great earthquakes cause huge damage to human life. Street network vulnerability causes rescue operations to encounter serious difficulties, especially in the first 72 hours after the incident. Today, the physical expansion and high density of large cities, with narrow access roads, long distances from medical care centers, and locations in areas of high seismic risk, lead to a perilous and unpredictable situation in case of an earthquake. Zone #6 of Tehran, with a population of 229,980 (3.6% of the city population) and an area of 20 km2 (3.2% of the city area), is one of the main municipal zones of Tehran (Iran Center of Statistics, 2006). Major land uses, such as ministries, embassies, universities, general hospitals and medical centers, big financial firms, and so on, manifest the high importance of this region on the local and national scale. In this paper, by employing indexes such as access to medical centers, street inclusion, building and population density, land use, PGA, and building quality, the vulnerability degree of street networks in Zone #6 against earthquakes is calculated by overlaying maps and data in combination with the IHWP method and GIS. This article concludes that buildings alongside streets with high population and building density, low building quality, long distances to rescue centers, and a high level of inclusion show a high rate of vulnerability compared with other buildings. Also, moving from north to south across the zone, the vulnerability increases. Likewise, highways and streets with substantial width and low building and population density hold little vulnerability.
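    The overlay of weighted indexes described above can be sketched as a min-max-normalized weighted sum per city block; this is a simplified stand-in for the IHWP/GIS workflow, and the block data and weights below are invented:

```python
def vulnerability_index(blocks, weights):
    """Weighted linear overlay of min-max-normalized criteria.
    `blocks` maps a block id to raw criterion values; normalization
    puts every factor on a 0-1 scale before weighting."""
    crits = list(weights)
    lo = {c: min(b[c] for b in blocks.values()) for c in crits}
    hi = {c: max(b[c] for b in blocks.values()) for c in crits}

    def norm(c, v):
        return (v - lo[c]) / (hi[c] - lo[c]) if hi[c] > lo[c] else 0.0

    return {bid: sum(weights[c] * norm(c, b[c]) for c in crits)
            for bid, b in blocks.items()}

# hypothetical blocks: higher density and hospital distance => more vulnerable
blocks = {
    "A": {"pop_density": 120, "dist_hospital_km": 0.5},
    "B": {"pop_density": 480, "dist_hospital_km": 3.0},
}
scores = vulnerability_index(blocks, {"pop_density": 0.6,
                                      "dist_hospital_km": 0.4})
```

    In the real workflow each criterion is a raster or street-segment layer and the weights come from the IHWP pairwise comparisons, but the overlay arithmetic is the same per cell.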

  11. Wideband simulation of earthquake ground motion by a spectrum-matching, multiple-pulse technique

    International Nuclear Information System (INIS)

    Gusev, A.; Pavlov, V.

    2006-04-01

    To simulate earthquake ground motion, we combine a multiple-point stochastic earthquake fault model and a suite of Green functions. Conceptually, our source model generalizes the classic one of Haskell (1966). At any time instant, slip occurs over a narrow strip that sweeps the fault area at a (spatially variable) velocity. This behavior defines the seismic signals at lower frequencies (LF) and describes directivity effects. The high-frequency (HF) behavior of the source signal is defined by the local slip history, assumed to be a short segment of pulsed noise. For calculations, this model is discretized as a grid of point subsources. Subsource moment-rate time histories, in their LF part, are smooth pulses whose duration equals the rise time. In their HF part, they are segments of non-Gaussian noise of similar duration. The spectral content of the subsource time histories is adjusted so that the summary far-field signal follows a certain predetermined spectral scaling law. The results of the simulation depend on random seeds and on the particular values of parameters such as: stress drop; the average and dispersion parameter of the rupture velocity; the rupture nucleation point; the slip zone width/rise time; the wavenumber-spectrum parameter defining the final slip function; the degrees of non-Gaussianity of the random slip rate in time and of the random final slip in space; and more. To calculate ground motion at a site, Green functions are calculated for each subsource-site pair, then convolved with the subsource time functions and finally summed over subsources. The original Green function calculator for a layered, weakly inelastic medium is of the discrete-wavenumber kind, with no intrinsic limitations with respect to layer thickness or bandwidth. The simulation package can generate example motions or be used to study uncertainties of the predicted motion.
As a test, realistic analogues of recorded motions in the epicentral zone of the 1994 Northridge, California earthquake were synthesized, and related uncertainties were
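    The convolve-and-sum step described above, Green functions convolved with subsource time functions and summed over subsources, can be sketched with toy 1-D arrays; the Green functions and source time functions here are invented placeholders:

```python
import numpy as np

def sum_subsources(greens, stfs):
    """Site seismogram as the sum over subsources of (Green function
    convolved with that subsource's moment-rate time function).
    `greens` and `stfs` are lists of 1-D sampled traces."""
    n = max(len(g) + len(s) - 1 for g, s in zip(greens, stfs))
    out = np.zeros(n)
    for g, s in zip(greens, stfs):
        c = np.convolve(g, s)     # subsource contribution at the site
        out[:len(c)] += c
    return out

# two subsources: short impulse-like Green functions, boxcar slip rates
greens = [np.array([1.0, 0.5]), np.array([0.8])]
stfs = [np.ones(3), np.ones(2)]
u = sum_subsources(greens, stfs)
```

    In the full method each subsource trace would also carry its rupture-arrival delay before summation; the delays are omitted here to keep the sketch minimal.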

  12. Earthquake Monitoring: SeisComp3 at the Swiss National Seismic Network

    Science.gov (United States)

    Clinton, J. F.; Diehl, T.; Cauzzi, C.; Kaestli, P.

    2011-12-01

    The Swiss Seismological Service (SED) has an ongoing responsibility to improve the seismicity monitoring capability for Switzerland. This is a crucial issue for a country with low background seismicity but where a large M6+ earthquake is expected in the coming decades. With over 30 stations at a spacing of ~25 km, the SED operates one of the densest broadband networks in the world, which is complemented by ~50 real-time strong motion stations. The strong motion network is expected to grow by an additional ~80 stations over the next few years. Furthermore, the backbone of the network is complemented by broadband data from surrounding countries and by temporary sub-networks for local monitoring of microseismicity (e.g. at geothermal sites). The variety of seismic monitoring responsibilities, as well as the anticipated densification of our network, demands highly flexible processing software. We are transitioning all software to the SeisComP3 (SC3) framework. SC3 is a fully featured automated real-time earthquake monitoring software developed by GeoForschungsZentrum Potsdam in collaboration with its commercial partner, gempa GmbH. It is at its core open source, and it is becoming a community-standard software for earthquake detection and waveform processing for regional and global networks across the globe. SC3 was originally developed for regional and global rapid monitoring of potentially tsunamigenic earthquakes. In order to fulfill the requirements of a local network recording moderate seismicity, the SED has tuned configurations and added several modules. In this contribution, we present our SC3 implementation strategy, focusing on the detection and identification of seismicity on different scales. We operate several parallel processing "pipelines" to detect and locate local, regional, and global seismicity. Additional pipelines with lower detection thresholds can be defined to monitor seismicity within dense subnets of the network. To be consistent with existing processing

  13. Motorway Network Simulation Using Bluetooth Data

    Directory of Open Access Journals (Sweden)

    Karakikes Ioannis

    2016-09-01

    This paper describes a systematic calibration process for a Vissim model, based on data derived from BT (Bluetooth) detectors. It also provides instructions on how to calibrate and validate a highway network model based upon a case study, and establishes an example for practitioners interested in designing highway networks with micro-simulation tools. Within this case study, a proper calibration was achieved for 94.5% of all segments. First, an overview of the systematic calibration approach is presented. A description of the given datasets follows. Finally, the model's systematic calibration and validation based on BT data from segments under free-flow conditions is thoroughly explained. The delivered calibrated Vissim model acts as a test bed, which in combination with other analysis tools can be used for potential future exploitation for transportation-related purposes.
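The abstract reports the share of properly calibrated segments but does not name its acceptance measure. A common choice in micro-simulation calibration is the GEH statistic, with GEH < 5 conventionally counted as acceptable; the sketch below assumes that criterion and uses hypothetical simulated/observed hourly flows:

```python
import math

def geh(sim, obs):
    """GEH statistic comparing a simulated and an observed hourly flow."""
    return math.sqrt(2.0 * (sim - obs) ** 2 / (sim + obs))

def share_calibrated(pairs, threshold=5.0):
    """Fraction of segments whose GEH is under the acceptance threshold."""
    ok = sum(1 for sim, obs in pairs if geh(sim, obs) < threshold)
    return ok / len(pairs)

# hypothetical (simulated, observed) flows for four segments, veh/h
flows = [(1020, 1000), (480, 510), (220, 260), (890, 870)]
print(share_calibrated(flows))
```

A per-segment GEH table would then identify which segments pull the overall share below a target such as the 94.5% reported in the paper.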

  14. Spatiotemporal correlations of earthquakes

    International Nuclear Information System (INIS)

    Farkas, J.; Kun, F.

    2007-01-01

    Complete text of publication follows. An earthquake is the result of a sudden release of energy in the Earth's crust that creates seismic waves. At the present technological level, earthquakes of magnitude larger than three can be recorded all over the world. In spite of the apparent randomness of earthquake occurrence, long-term measurements have revealed interesting scaling laws of earthquake characteristics: the rate of aftershocks following major earthquakes has a power-law decay (Omori law); the magnitude distribution of earthquakes exhibits a power-law behavior (Gutenberg-Richter law); furthermore, it has recently been pointed out that epicenters form fractal networks in fault zones (Kagan law). The theoretical explanation of earthquakes is based on plate tectonics: the Earth's crust is broken into plates which slowly move under the action of the flowing magma. Neighboring plates touch each other along ridges (fault zones), where a large amount of energy is stored in deformation. Earthquakes occur when the stored energy exceeds a material-dependent threshold value and is released in a sudden jump of the plate. The Burridge-Knopoff (BK) model of earthquakes represents the Earth's crust as a coupled system of driven oscillators in which nonlinearity enters through a stick-slip frictional instability. Laboratory experiments have revealed that under high pressure the friction of rock interfaces weakens with increasing velocity. In the present project we extend recent theoretical studies of the BK model by taking into account a realistic velocity-weakening friction force between tectonic plates. Varying the strength of weakening, a broad spectrum of interesting phenomena is obtained: the model reproduces the Omori and Gutenberg-Richter laws of earthquakes and, furthermore, provides information on the correlation of earthquake sequences. We showed by computer simulations that the spatial and temporal correlations of consecutive earthquakes are very
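The stick-slip picture behind the BK model can be illustrated with its quasi-static relative, the Olami-Feder-Christensen cellular model. This is an assumed simplification for illustration: it drops the velocity-weakening dynamics the abstract actually studies, but it shows how slow uniform loading plus threshold relaxation produces avalanches ("events") of widely varying size:

```python
import numpy as np

def ofc_step(F, alpha=0.2, Fc=1.0):
    """Drive the lattice uniformly until one site reaches the threshold Fc,
    then relax all over-threshold sites, passing a fraction alpha of the
    released force to each of the four neighbors. Returns the avalanche size."""
    F += Fc - F.max()                      # uniform tectonic loading
    size = 0
    active = np.argwhere(F >= Fc)
    while active.size:
        for i, j in active:
            f, F[i, j] = F[i, j], 0.0      # site slips and resets
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < F.shape[0] and 0 <= nj < F.shape[1]:
                    F[ni, nj] += alpha * f # stress transfer to neighbors
        active = np.argwhere(F >= Fc)
    return size

rng = np.random.default_rng(0)
F = rng.uniform(0.0, 1.0, (16, 16))        # random initial stress field
sizes = [ofc_step(F) for _ in range(500)]
print(max(sizes), min(sizes))
```

With alpha < 0.25 the redistribution is dissipative, so every avalanche terminates; a histogram of `sizes` gives the Gutenberg-Richter-like event-size statistics mentioned in the abstract.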

  15. Mobile-ip Aeronautical Network Simulation Study

    Science.gov (United States)

    Ivancic, William D.; Tran, Diepchi T.

    2001-01-01

    NASA is interested in applying mobile Internet protocol (mobile-ip) technologies to its space and aeronautics programs. In particular, mobile-ip will play a major role in the Advanced Aeronautic Transportation Technology (AATT), the Weather Information Communication (WINCOMM), and the Small Aircraft Transportation System (SATS) aeronautics programs. This report presents the results of a simulation study of mobile-ip for an aeronautical network. The study was performed to determine the performance of the transmission control protocol (TCP) in a mobile-ip environment and to gain an understanding of how long delays, handoffs, and noisy channels affect mobile-ip performance.

  16. Modeling Obstruction and Restoration of Urban Commutation Networks in the Wake of a Devastating Earthquake in Tokyo

    Directory of Open Access Journals (Sweden)

    Toshihiro Osaragi

    2015-07-01

    In the aftermath of a devastating earthquake, public transportation is presumed paralyzed and thus unavailable; large numbers of people are expected to experience difficulty in commuting. In recent years, implementation of district continuity plans (DCPs) and business continuity plans (BCPs) has become a major concern for local governments and private firms, respectively. In this paper, we propose a pair of simulation models seeking to examine business commutation networks in terms of their possible obstruction and eventual restoration. The first of these models commuting intentions by analyzing individual daily commutes. The second offers a mobility model of commuters' physical endurance for travel alternatives on foot or by bicycle. Next, we proceed to gauge the number of commuters likely to experience difficulty and adjudge their spatial distribution while taking into account such attributes as gender and employment. Lastly, we attempt to assess rates and patterns in the reduction of commutation constraints based on simulations that assume a restoration of rail infrastructure or its equivalent.

  17. Identification and simulation of strong earthquake ground motion by using pattern recognition technique

    International Nuclear Information System (INIS)

    Suzuki, K.

    1981-01-01

    This report deals with a schematic investigation concerning the identification of nonstationary characteristics of strong earthquake acceleration motions and their simulation techniques for practical use. A pattern recognition technique is introduced in order to identify time- and frequency-dependent ground motion characteristics. First, the running power spectral density (RPSD) function is estimated by dividing the whole earthquake duration into certain 'stationary' segments. This RPSD can be described as a two-dimensional pattern image on the time-frequency domain. Second, the RPSD patterns thus obtained are classified into several representative groups based on (1) the number of dominant peaks, (2) peak shape and (3) the spatial relation between the most intensive peak and the second one. Then the RPSD pattern corresponding to a specific group is artificially simulated by using a 'peak function' which determines the evolutionary feature at an arbitrary point in the time-frequency plane. Using this function, eight typical artificial standard RPSD patterns are finally proposed. Identification can be performed by the Complex Threshold Method, which is generally used in the field of radiographic technology. (orig./WL)
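The RPSD construction described above (segment the record, estimate a PSD per quasi-stationary segment, view the result as a time-frequency image) can be sketched as follows. Window length, overlap, and the synthetic two-regime record are assumptions, not values from the report:

```python
import numpy as np

def running_psd(acc, dt, seg_len, overlap=0.5):
    """Running PSD: a windowed periodogram for successive quasi-stationary
    segments, returned as a (time x frequency) image."""
    n = int(seg_len / dt)
    step = max(1, int(n * (1 - overlap)))
    win = np.hanning(n)
    rows, times = [], []
    for start in range(0, len(acc) - n + 1, step):
        seg = acc[start:start + n] * win
        rows.append(np.abs(np.fft.rfft(seg)) ** 2 * dt / (win ** 2).sum())
        times.append((start + n / 2) * dt)  # segment center time
    return np.array(times), np.fft.rfftfreq(n, dt), np.array(rows)

# synthetic nonstationary record: 1 Hz early, weaker 5 Hz content later
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = np.where(t < 10.0, np.sin(2 * np.pi * 1.0 * t), 0.5 * np.sin(2 * np.pi * 5.0 * t))
times, freqs, P = running_psd(acc, dt, seg_len=2.0)
early, late = freqs[np.argmax(P[0])], freqs[np.argmax(P[-1])]
print(early, late)
```

The dominant frequency of the first segment differs from that of the last, which is exactly the time-frequency pattern structure (number and position of peaks) that the report classifies.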

  18. 3D dynamic rupture simulation and local tomography studies following the 2010 Haiti earthquake

    Science.gov (United States)

    Douilly, Roby

    The 2010 M7.0 Haiti earthquake was the first major earthquake in southern Haiti in 250 years. As this event could represent the beginning of a new period of active seismicity in the region, and in consideration of how vulnerable the population is to earthquake damage, it is important to understand the nature of this event and how it has influenced seismic hazards in the region. Most significantly, the 2010 earthquake occurred on the secondary Leogâne thrust fault (two fault segments), not the Enriquillo Fault, the major strike-slip fault in the region, despite it being only a few kilometers away. We first use a finite element model to simulate rupture along the Leogâne fault. We varied friction and background stress to investigate the conditions that best explain observed surface deformations and why the rupture did not jump to the nearby Enriquillo fault. Our model successfully replicated rupture propagation along the two segments of the Leogâne fault, and indicated that a significant stress increase occurred on the top and to the west of the Enriquillo fault. We also investigated the potential ground shaking level in this region if a rupture similar to the Mw 7.0 2010 Haiti earthquake were to occur on the Enriquillo fault. We used a finite element method and assumptions on regional stress to simulate low-frequency dynamic rupture propagation for the segment of the Enriquillo fault closer to the capital. The high-frequency ground motion components were calculated using the specific barrier model, and the hybrid synthetics were obtained by combining the low frequencies (< 1 Hz) from the dynamic rupture simulation with the high frequencies (> 1 Hz) from the stochastic simulation using matched filtering at a crossover frequency of 1 Hz. The average horizontal peak ground acceleration, computed at several sites of interest through Port-au-Prince (the capital), has a value of 0.35g. Finally, we investigated the 3D local tomography of this region. We considered 897 high-quality records from the earthquake catalog as recorded by

  19. Simulation of strong ground motion parameters of the 1 June 2013 Gulf of Suez earthquake, Egypt

    Science.gov (United States)

    Toni, Mostafa

    2017-06-01

    This article aims to simulate the ground motion parameters of the moderate-magnitude (ML 5.1) June 1, 2013 Gulf of Suez earthquake, which represents the largest instrumental earthquake recorded in the middle part of the Gulf of Suez up to now. This event was felt in all cities located on both sides of the Gulf of Suez, with minor damage to property near the epicenter; however, no casualties were observed. The stochastic technique with a site-dependent spectral model is used to simulate the strong ground motion parameters of this earthquake in the cities located on the western side of the Gulf of Suez and the northern Red Sea, namely Suez, Ain Sokhna, Zafarana, Ras Gharib, and Hurghada. The presence of many tourist resorts and the increase in land use planning in these cities represent the motivation of the current study. The simulated parameters comprise the Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV), and Peak Ground Displacement (PGD), in addition to the Pseudo-Spectral Acceleration (PSA). The model developed for ground motion simulation is validated using the recordings of three accelerographs installed around the epicenter of the investigated earthquake. Depending on the site effect determined in the investigated areas using geotechnical data (e.g., shear wave velocities and microtremor recordings), the investigated areas are classified into two zones (A and B). Zone A is characterized by higher site amplification than Zone B. The ground motion parameters are simulated for each zone in the considered areas. The results reveal that the highest values of PGA, PGV, and PGD are observed at Ras Gharib city (epicentral distance ∼ 11 km) as 67 cm/s², 2.53 cm/s, and 0.45 cm respectively for Zone A, and as 26.5 cm/s², 1.0 cm/s, and 0.2 cm respectively for Zone B, while the lowest values of PGA, PGV, and PGD are observed at Suez city (epicentral distance ∼ 190 km) as 3.0 cm/s², 0.2 cm/s, and 0.05 cm respectively for Zone A
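PGA, PGV and PGD as reported above are simply the peak absolute values of the acceleration record and its successive time integrals. A sketch with a hypothetical single-frequency pulse (not the study's simulated motions):

```python
import numpy as np

def peak_ground_params(acc, dt):
    """PGA, PGV, PGD via cumulative trapezoidal integration of acceleration."""
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2.0) * dt))
    disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) / 2.0) * dt))
    return np.abs(acc).max(), np.abs(vel).max(), np.abs(disp).max()

dt = 0.01
t = np.arange(0.0, 4.0, dt)
acc = 67.0 * np.cos(2.0 * np.pi * 2.0 * t)   # hypothetical 2 Hz record, cm/s^2
pga, pgv, pgd = peak_ground_params(acc, dt)
print(round(pga, 1), round(pgv, 2), round(pgd, 2))
```

For a pure cosine of amplitude A and angular frequency w, the analytic peaks are A, A/w and 2A/w², which the trapezoidal integration reproduces closely. Real records additionally need baseline correction before integration.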

  20. The Effects of Off-Fault Plasticity in Earthquake Cycle Simulations

    Science.gov (United States)

    Erickson, B. A.; Dunham, E. M.

    2012-12-01

    Field observations of damage zones around faults reveal regions of fractured or pulverized rocks on the order of several hundred meters surrounding a highly damaged fault core. It has been postulated that these damage zones are the result of fracturing and healing within the fault zone over many years of seismogenic cycling. In dynamic rupture simulations which account for inelastic deformation, the influence of plasticity has been shown to significantly alter rupture propagation speed and the residual stress field left near the fault. Plastic strain near the Earth's surface has also been shown to account for a fraction of the inferred shallow slip deficit. We are developing an efficient numerical method to simulate full earthquake cycles of multiple events with rate-and-state friction laws and off-fault plasticity. Although the initial stress state prior to an earthquake is not well understood, our method evolves the system through the interseismic period, thereby generating self-consistent initial conditions prior to rupture. Large time steps can be taken during the interseismic period, while much smaller time steps are required to fully resolve quasi-dynamic rupture, for which we use the radiation damping approximation to the inertial term for computational efficiency. So far our cycle simulations have been done assuming a linear elastic medium. We have concurrently begun developing methods for allowing plastic deformation in our cycle simulations, where the stress is constrained by a Drucker-Prager yield criterion. The idea is to simulate multiple events which allow for an inelastic response, in order to understand how plasticity alters the rupture process during each event in the cycle. We will use this model to see what fraction of coseismic strain is accommodated by inelastic deformation throughout the entire earthquake cycle, from the interseismic period through the mainshock. Modeling earthquake cycles with plasticity will also allow us to study how an
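The quasi-dynamic machinery the abstract describes (rate-and-state friction, radiation-damping approximation, adaptive time steps across the cycle) can be sketched in its simplest form: a single spring-slider rather than a continuum fault, with the aging law for state evolution and a per-step Newton solve for slip rate. All parameter values below are illustrative assumptions, and off-fault plasticity is not included:

```python
import numpy as np

# Spring-slider parameters (MPa, m, s); illustrative assumptions only
sigma, a, b = 100.0, 0.010, 0.020   # normal stress; rate-and-state a and b (b > a)
Dc, V0, f0 = 1e-5, 1e-6, 0.6        # state distance, reference velocity/friction
Vpl, k = 1e-9, 5e4                  # plate rate; stiffness below sigma*(b-a)/Dc
eta = 5.0                           # radiation-damping coefficient ~ G/(2*cs)

def solve_velocity(tau, theta, V):
    """Newton solve (in log V) of tau = sigma*f(V, theta) + eta*V."""
    x = np.log(V)
    for _ in range(60):
        g = tau - sigma * (f0 + a * x - a * np.log(V0)
                           + b * np.log(V0 * theta / Dc)) - eta * np.exp(x)
        dg = -sigma * a - eta * np.exp(x)
        step = g / dg
        x -= step
        if abs(step) < 1e-12:
            break
    return np.exp(x)

# start well above steady state so a slip event nucleates immediately
theta = 1e-3 * Dc / Vpl
tau = sigma * (f0 + a * np.log(Vpl / V0) + b * np.log(V0 * (Dc / Vpl) / Dc)) + eta * Vpl
V, vmax, events, above = Vpl, 0.0, 0, False
for _ in range(20000):
    V = solve_velocity(tau, theta, V)
    vmax = max(vmax, V)
    if V > 1e-4 and not above:       # count upward crossings of a slip-rate threshold
        events += 1
    above = V > 1e-4
    dt = min(0.2 * Dc / V, 2000.0)   # small steps coseismically, large interseismically
    theta = Dc / V + (theta - Dc / V) * np.exp(-V * dt / Dc)  # exact aging-law update
    tau += k * (Vpl - V) * dt        # elastic loading minus slip
print(events, f"{vmax:.1e}")
```

Because k is below the critical stiffness, the slider settles into stick-slip cycles: slow loading with large dt, then fast slip episodes capped by the radiation-damping term, mirroring the adaptive time stepping described in the abstract.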

  1. Convolutional Neural Networks for Earthquake Detection and Location of Seismicity in Central Oklahoma

    Science.gov (United States)

    Perol, T.; Gharbi, M.; Denolle, M.

    2016-12-01

    Induced seismicity is characterized by localized activity of small-scale and moderate-magnitude earthquakes. Poor instrumental coverage limits the accuracy of traditional techniques for earthquake detection and localization. Currently, the most effective approach to detect new (and smaller) events is the so-called template matching method. It matches events' waveforms against previously seen waveform templates, which restricts the search to events that are collocated with the cataloged events. We propose an alternative method, which we call ConvNetQuake, that leverages recent advances in convolutional neural networks for pattern recognition and classification. Once trained on a dataset of 3-component seismograms, ConvNetQuake learns a bank of finite impulse response filters that can discriminate seismic events from noise. First, we compare our algorithm to template matching on synthetic data. We generate synthetic waveforms by adding randomly scaled copies of a single 3-component template at random temporal offsets over a Gaussian noise floor. While the accuracy of ConvNetQuake is slightly lower than that of template matching, it has the advantage of a more compact non-linear representation that can detect new events that were not in the training set. Second, we cluster the Guthrie earthquakes using a Multivariate Gaussian Mixture Model (MGMM) based on the Oklahoma Geological Survey (OGS) catalog and sample a few events from each cluster. We proceed as before and construct synthetic seismograms with the additional information of the events' locations. We now train our algorithm to discriminate events from the noise and, jointly, to estimate the probability that the event belongs to a particular cluster. Using the MGMM, we produce maps of the continuous probability distribution of event location.
Finally, we apply ConvNetQuake to the Guthrie sequence by training it on data from February 15, 2014 to August 31, 2014 using the known cataloged seismicity provided
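The synthetic-data construction described above (randomly scaled copies of a single 3-component template at random temporal offsets over a Gaussian noise floor) is straightforward to sketch; the tapered-sinusoid template and the sizes here are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_synthetic(template, n_events, n_samples, snr=2.0):
    """Insert randomly scaled copies of a 3-component template at random
    offsets over a Gaussian noise floor; return trace and event start indices."""
    trace = rng.normal(0.0, 1.0, (3, n_samples))
    starts = np.sort(rng.choice(n_samples - template.shape[1], n_events, replace=False))
    for s in starts:
        trace[:, s:s + template.shape[1]] += rng.uniform(0.5, 2.0) * snr * template
    return trace, starts

# hypothetical 3-component "event" template: tapered sinusoids
t = np.linspace(0.0, 1.0, 100)
template = np.vstack([np.sin(2 * np.pi * f * t) * np.hanning(100) for f in (5, 7, 9)])
trace, starts = make_synthetic(template, n_events=4, n_samples=6000)
print(trace.shape, len(starts))
```

The returned `starts` provide ground-truth labels, so a detector's output windows can be scored against the known insertion times.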

  2. Numerical simulation of co-seismic deformation of the 2011 Japan Mw 9.0 earthquake

    Directory of Open Access Journals (Sweden)

    Zhang Keliang

    2011-08-01

    Co-seismic displacements associated with the Mw 9.0 earthquake of March 11, 2011 in Japan are numerically simulated on the basis of a finite-fault dislocation model with the PSGRN/PSCMP software. Compared with the inland GPS observations, 90% of the computed eastward, northward and vertical displacements have residuals of less than 0.10 m, suggesting that the simulated results can, to a certain extent, be used to demonstrate the co-seismic deformation in the near field. In this model, the maximum eastward displacement increases from 6 m along the coast to 30 m near the epicenter, where the maximum southward displacement is 13 m. The three-dimensional display shows that the vertical displacement reaches a maximum uplift of 14.3 m, which is comparable to the tsunami height in the near-trench region. The maximum subsidence is 5.3 m.
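The validation statistic used above (the share of component residuals under 0.10 m) reduces to a one-line comparison. The GPS-like values below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def residual_share(simulated, observed, tol=0.10):
    """Fraction of component-wise co-seismic displacement residuals below tol (m)."""
    return (np.abs(simulated - observed) < tol).mean()

rng = np.random.default_rng(1)
obs = rng.normal(0.0, 1.0, (300, 3))         # hypothetical GPS E/N/U displacements (m)
sim = obs + rng.normal(0.0, 0.04, (300, 3))  # a model reproducing them to ~4 cm
print(round(residual_share(sim, obs), 2))
```

A share near 0.9, as reported in the abstract, indicates near-field agreement at the decimetre level without requiring every station to fit.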

  3. The ShakeOut earthquake scenario: Verification of three simulation sets

    Science.gov (United States)

    Bielak, J.; Graves, R.W.; Olsen, K.B.; Taborda, R.; Ramirez-Guzman, L.; Day, S.M.; Ely, G.P.; Roten, D.; Jordan, T.H.; Maechling, P.J.; Urbanic, J.; Cui, Y.; Juve, G.

    2010-01-01

    This paper presents a verification of three simulations of the ShakeOut scenario, an Mw 7.8 earthquake on a portion of the San Andreas fault in southern California, conducted by three different groups at the Southern California Earthquake Center using the SCEC Community Velocity Model for this region. We conducted two simulations using the finite difference method, and one using the finite element method, and performed qualitative and quantitative comparisons between the corresponding results. The results are in good agreement with each other; only small differences occur, both in amplitude and phase, between the various synthetics at ten observation points located near and away from the fault, as far as 150 km away. Using an available goodness-of-fit criterion, all the comparisons scored above 8, with most above 9.2. This score would be regarded as excellent if the measurements were between recorded and synthetic seismograms. We also report results of comparisons based on time-frequency misfit criteria. Results from these two criteria can be used for calibrating the two methods for comparing seismograms. In those cases in which noticeable discrepancies occurred between the seismograms generated by the three groups, we found that they were the product of inherent characteristics of the various numerical methods used and their implementations. In particular, we found that the major source of discrepancy lies in the difference between mesh and grid representations of the same material model. Overall, however, even the largest differences in the synthetic seismograms are small. Thus, given the complexity of the simulations used in this verification, it appears that the three schemes are consistent, reliable and sufficiently accurate and robust for use in future large-scale simulations. © 2009 The Authors. Journal compilation © 2009 RAS.

  4. Quantifying capability of a local seismic network in terms of locations and focal mechanism solutions of weak earthquakes

    Czech Academy of Sciences Publication Activity Database

    Fojtíková, Lucia; Kristeková, M.; Málek, Jiří; Sokos, E.; Csicsay, K.; Zahradník, J.

    2016-01-01

    Roč. 20, č. 1 (2016), 93-106 ISSN 1383-4649 R&D Projects: GA ČR GAP210/12/2336 Institutional support: RVO:67985891 Keywords: Focal-mechanism uncertainty * Little Carpathians * Relative location uncertainty * Seismic network * Uncertainty mapping * Waveform inversion * Weak earthquakes Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.089, year: 2016

  5. Fast 3D seismic wave simulations of 24 August 2016 Mw 6.0 central Italy earthquake for visual communication

    Directory of Open Access Journals (Sweden)

    Emanuele Casarotti

    2016-12-01

    We present here the first application of a fast-reacting framework for 3D simulations of seismic wave propagation generated by earthquakes of magnitude Mw ≥ 5 in the Italian region. The driving motivation is to offer a visualization of the natural phenomenon to the general public, but also to provide preliminary modeling to experts and civil protection operators. We report here a description of this framework during the emergency of the 24 August 2016 Mw 6.0 central Italy earthquake, a discussion of the accuracy of the simulation for this seismic event, and a preliminary critical analysis of the visualization structure and of the reaction of the public.

  6. 3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty

    Science.gov (United States)

    Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.

    2017-01-01

    Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large, megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, and thus there is no direct measure on the intensity of ground shaking or specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast. 
Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in

  7. Study on tsunami due to offshore earthquakes for Korea coast. Literature survey and numerical simulation on earthquake and tsunami in the Japan Sea and the East China Sea

    International Nuclear Information System (INIS)

    Matsuyama, Masafumi; Aoyagi, Yasuhira; Inoue, Daiei; Choi, Weon-Hack; Kang, Keum-Seok

    2008-01-01

    In Korea, there has been concern about tsunami risks for nuclear power plants since the 1983 Nihonkai-Chubu earthquake tsunami, whose maximum run-up height reached 4 m just north of the Ulchin nuclear power plant site. The east coast of Korea was also attacked by a tsunami a few meters high generated by the 1993 Hokkaido Nansei-Oki earthquake. Both source areas lay in the region from west of Hokkaido to the eastern margin of the Japan Sea, which retains further tsunami potential. It is therefore necessary to study tsunami risks for the coast of Korea by means of geological investigation and numerical simulation. Historical records of earthquakes and tsunamis in the Japan Sea were re-compiled to evaluate the tsunami potential, and a database of marine active faults in the Japan Sea was compiled to determine the regional tsunami potential. Many well-developed reverse faults are found in the area from west of Hokkaido to the eastern margin of the Japan Sea. The authors have found no historical earthquake in the East China Sea which caused a tsunami observed on the coast of Korea. Five fault models were therefore determined on the basis of the analysis of historical records and recent research results on fault parameters and tsunamis. Tsunami heights were estimated by numerical simulation using nonlinear dispersive wave theory. The results of the simulations indicate that the tsunami heights in these cases are less than 0.25 m along the coast of Korea, and that the tsunami risk from these assumed faults does not lead to severe impact. It is concluded that tsunamis occurring in the area from west of Hokkaido to the eastern margin of the Japan Sea consequently have the most significant impact on Korea. (author)

  8. OBJECT-ORIENTED ANALYSIS OF SATELLITE IMAGES USING ARTIFICIAL NEURAL NETWORKS FOR POST-EARTHQUAKE BUILDINGS CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    N. Khodaverdi zahraee

    2017-09-01

    Earthquakes are among the most devastating natural events threatening human life throughout history. After an earthquake, information about the damaged area and the amount and type of damage can be a great help to disaster managers in relief and reconstruction. It is very important that these measures be taken immediately after the earthquake, because any negligence can greatly increase the losses. The purpose of this paper is to propose and implement an automatic approach for mapping destroyed buildings after an earthquake using pre- and post-event high-resolution satellite images. In the proposed method, after preprocessing, segmentation of both images is performed using a multi-resolution segmentation technique. Then, the segmentation results are intersected in ArcGIS to obtain equal image objects on both images. After that, appropriate textural features, which better discriminate between changed and unchanged areas, are calculated for all the image objects. Finally, subtracting the extracted textural features of the pre- and post-event images, the obtained values are applied as an input feature vector to an artificial neural network that classifies the area into two classes, changed and unchanged. The proposed method was evaluated using WorldView-2 satellite images acquired before and after the 2010 Haiti earthquake. The reported overall accuracy of 93% proved the ability of the proposed method for post-earthquake building change detection.

  9. Object-Oriented Analysis of Satellite Images Using Artificial Neural Networks for Post-Earthquake Buildings Change Detection

    Science.gov (United States)

    Khodaverdi zahraee, N.; Rastiveis, H.

    2017-09-01

    Earthquakes are among the most devastating natural events threatening human life throughout history. After an earthquake, information about the damaged area and the amount and type of damage can be a great help to disaster managers in relief and reconstruction. It is very important that these measures be taken immediately after the earthquake, because any negligence can greatly increase the losses. The purpose of this paper is to propose and implement an automatic approach for mapping destroyed buildings after an earthquake using pre- and post-event high-resolution satellite images. In the proposed method, after preprocessing, segmentation of both images is performed using a multi-resolution segmentation technique. Then, the segmentation results are intersected in ArcGIS to obtain equal image objects on both images. After that, appropriate textural features, which better discriminate between changed and unchanged areas, are calculated for all the image objects. Finally, subtracting the extracted textural features of the pre- and post-event images, the obtained values are applied as an input feature vector to an artificial neural network that classifies the area into two classes, changed and unchanged. The proposed method was evaluated using WorldView-2 satellite images acquired before and after the 2010 Haiti earthquake. The reported overall accuracy of 93% proved the ability of the proposed method for post-earthquake building change detection.

  10. Universal law for waiting internal time in seismicity and its implication to earthquake network

    Science.gov (United States)

    Abe, Sumiyoshi; Suzuki, Norikazu

    2012-02-01

    In their paper (Europhys. Lett., 71 (2005) 1036), Carbone, Sorriso-Valvo, Harabaglia and Guerra showed that the "unified scaling law" for conventional waiting times of earthquakes claimed by Bak et al. (Phys. Rev. Lett., 88 (2002) 178501) is actually not universal. Here, instead of the conventional time, the concept of the internal time termed the event time is considered for seismicity. It is shown that, in contrast to the conventional waiting time, the waiting event time obeys a power law. This implies the existence of temporal long-range correlations in terms of the event time with no sharp decay of the crossover type. The discovered power-law waiting event-time distribution turns out to be universal in the sense that it takes the same form for seismicities in California, Japan and Iran. In particular, the parameters contained in the distribution take common values in all these geographical regions. An implication of this result for the procedure of constructing earthquake networks is discussed.
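Claims of a common power-law form can be probed with the standard maximum-likelihood (Hill) estimator for the exponent. The sample below is synthetic, drawn by inverse-CDF sampling, not real waiting event times; the estimator itself is the generic one, not necessarily the fitting procedure used in the paper:

```python
import numpy as np

def powerlaw_mle_exponent(x, xmin):
    """Hill / maximum-likelihood estimate of alpha for p(x) ~ x^(-alpha), x >= xmin."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

rng = np.random.default_rng(7)
alpha_true, xmin = 2.5, 1.0
u = rng.uniform(1e-12, 1.0, size=50000)
samples = xmin * u ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF power-law sampling
print(round(powerlaw_mle_exponent(samples, xmin), 2))
```

Applying the same estimator to waiting event times from different regional catalogs and comparing the fitted exponents is one concrete way to test the universality asserted in the abstract.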

  11. Detecting Earthquakes over a Seismic Network using Single-Station Similarity Measures

    Science.gov (United States)

    Bergen, Karianne J.; Beroza, Gregory C.

    2018-03-01

    New blind waveform-similarity-based detection methods, such as Fingerprint and Similarity Thresholding (FAST), have shown promise for detecting weak signals in long-duration, continuous waveform data. While blind detectors are capable of identifying similar or repeating waveforms without templates, they can also be susceptible to false detections due to local correlated noise. In this work, we present a set of three new methods that allow us to extend single-station similarity-based detection over a seismic network: event-pair extraction, pairwise pseudo-association, and event resolution together form a post-processing pipeline that combines single-station similarity measures (e.g. the FAST sparse similarity matrix) from each station in a network into a list of candidate events. The core technique, pairwise pseudo-association, leverages the pairwise structure of event detections in its network detection model, which allows it to identify events observed at multiple stations in the network without modeling the expected move-out. Though our approach is general, we apply it to extend FAST over a sparse seismic network. We demonstrate that our network-based extension of FAST is both sensitive and maintains a low false detection rate. As a test case, we apply our approach to two weeks of continuous waveform data from five stations during the foreshock sequence prior to the 2014 Mw 8.2 Iquique earthquake. Our method identifies nearly five times as many events as the local seismicity catalog (including 95% of the catalog events), and less than 1% of these candidate events are false detections.
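The core idea of combining single-station detections without move-out modeling can be sketched with a simple time-clustering rule: declare a candidate event whenever detections from at least two distinct stations fall within a coarse time window of each other. The grouping rule, window, and station names below are illustrative assumptions, not the paper's actual pseudo-association algorithm:

```python
def pseudo_associate(station_detections, window=5.0, min_stations=2):
    """Group single-station detection times into candidate events: a candidate
    is declared when detections from at least min_stations distinct stations
    fall within `window` seconds of each other (no move-out modeling)."""
    picks = sorted((t, s) for s, ts in station_detections.items() for t in ts)
    events, group = [], []
    for t, s in picks:
        if group and t - group[0][0] > window:      # close the current group
            if len({s2 for _, s2 in group}) >= min_stations:
                events.append(group[0][0])
            group = []
        group.append((t, s))
    if group and len({s2 for _, s2 in group}) >= min_stations:
        events.append(group[0][0])
    return events

detections = {
    "ST01": [12.1, 80.4, 200.2],   # hypothetical per-station detection times (s)
    "ST02": [13.0, 201.1],
    "ST03": [80.9, 199.8, 350.0],
}
events = pseudo_associate(detections)
print(events)
```

The single-station pick at 350.0 s is discarded, illustrating how requiring multi-station support suppresses false detections from locally correlated noise.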

  12. Simulation of artificial earthquake records compatible with site specific response spectra using time series analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Fadavi Amiri

    2017-11-01

    Time history analysis of infrastructure such as dams, bridges and nuclear power plants is a fundamental part of the design process, but sufficient and suitable site-specific earthquake records for such analyses are rarely available; therefore, the generation of artificial accelerograms is required for research in this area. Using time series analysis, wavelet transforms, artificial neural networks and a genetic algorithm, a new method is introduced to produce artificial accelerograms compatible with response spectra for a specified site condition. In the proposed method, first, some recorded accelerograms are selected based on the soil condition at the recording station. The soils at these stations are divided into two groups, soil and rock, according to their measured shear wave velocity. These accelerograms are then analyzed using the wavelet transform. Next, the ability of artificial neural networks to produce a reverse signal from response spectra is used to produce wavelet coefficients. Furthermore, a genetic algorithm is employed to optimize the network weight and bias matrices by searching a wide range of values, preventing the neural network from converging on local optima. Finally, site-specific accelerograms are produced. In this paper a number of accelerograms recorded in Iran are employed to test the neural network performance and to demonstrate the effectiveness of the method. It is shown that combining time series analysis, a genetic algorithm, a neural network and the wavelet transform increases the capabilities of the algorithm and improves its speed and accuracy in generating accelerograms compatible with site-specific response spectra for different site conditions.
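Spectral compatibility rests on being able to compute the response spectrum of a candidate accelerogram. A minimal single-degree-of-freedom pseudo-spectral-acceleration routine, using the Newmark average-acceleration scheme (an assumed standard choice, not the paper's wavelet/neural-network machinery), looks like this:

```python
import numpy as np

def sdof_psa(acc, dt, T, zeta=0.05):
    """Pseudo-spectral acceleration of a damped SDOF oscillator for
    u'' + 2*zeta*w*u' + w^2*u = -acc, by the unconditionally stable
    Newmark average-acceleration scheme. Returns w^2 * max|u|."""
    w = 2.0 * np.pi / T
    u = v = 0.0
    a = -acc[0] - 2.0 * zeta * w * v - w * w * u
    umax = 0.0
    denom = 1.0 + zeta * w * dt + (w * dt) ** 2 / 4.0
    for ag1 in acc[1:]:
        rhs = -ag1 - 2.0 * zeta * w * (v + dt / 2.0 * a) \
              - w * w * (u + dt * v + dt * dt / 4.0 * a)
        a1 = rhs / denom
        u += dt * v + dt * dt / 4.0 * (a + a1)
        v += dt / 2.0 * (a + a1)
        a = a1
        umax = max(umax, abs(u))
    return w * w * umax

dt = 0.001
t = np.arange(0.0, 4.0, dt)
acc = np.sin(2.0 * np.pi * 1.0 * t)     # 1 Hz input with peak acceleration 1.0
stiff = sdof_psa(acc, dt, T=0.05)       # very stiff oscillator: PSA approaches PGA
resonant = sdof_psa(acc, dt, T=1.0)     # resonant oscillator: strong amplification
print(round(stiff, 2), round(resonant, 1))
```

Evaluating `sdof_psa` over a grid of periods yields the response spectrum against which a generated accelerogram's compatibility with the target spectrum is judged.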

  13. Harris Simulator Design Description for Adaptive Distributed Network Management System

    National Research Council Canada - National Science Library

    1986-01-01

    ... (ADNMS), Naval Research Laboratory (NRL). The document describes the Harris Simulator used to support the development and test of a first generation network management algorithm for a typical SDI communications network...

  14. The design of a network emulation and simulation laboratory

    CSIR Research Space (South Africa)

    Von Solms, S

    2015-07-01

    Full Text Available The development of the Network Emulation and Simulation Laboratory is motivated by the drive to contribute to the enhancement of the security and resilience of South Africa's critical information infrastructure. The goal of the Network Emulation...

  15. QuakeUp: An advanced tool for a network-based Earthquake Early Warning system

    Science.gov (United States)

    Zollo, Aldo; Colombelli, Simona; Caruso, Alessandro; Elia, Luca; Brondi, Piero; Emolo, Antonio; Festa, Gaetano; Martino, Claudio; Picozzi, Matteo

    2017-04-01

Currently developed and operational regional Earthquake Early Warning systems are grounded on the assumption of a point-like earthquake source model and on 1-D ground motion prediction equations to estimate the earthquake impact. Here we propose a new network-based method which allows an alert to be issued based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels, with no assumption about the earthquake rupture extent or the spatial variability of ground motion. The platform includes the most advanced techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The new software platform (QuakeUp) is under development at the Seismological Laboratory (RISSC-Lab) of the Department of Physics at the University of Naples Federico II, in collaboration with the academic spin-off company RISS s.r.l., recently gemmated from the research group. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. The signal quality is preliminarily assessed by checking the signal-to-noise ratio in acceleration, velocity and displacement and through dedicated filtering algorithms. For stations providing high-quality data, the characteristic P-wave period (τ_c) and the P-wave displacement, velocity and acceleration amplitudes (P_d, P_v and P_a) are jointly measured on a progressively expanding P-wave time window. The evolutionary measurements of the early P-wave amplitude and characteristic period at stations around the source allow prediction of the geometry and extent of the PDZ, and also of the lower shaking intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (I_MM) and by mapping the measured and

  16. Finite Element Simulations of Kaikoura, NZ Earthquake using DInSAR and High-Resolution DSMs

    Science.gov (United States)

    Barba, M.; Willis, M. J.; Tiampo, K. F.; Glasscoe, M. T.; Clark, M. K.; Zekkos, D.; Stahl, T. A.; Massey, C. I.

    2017-12-01

Three-dimensional displacements from the Kaikoura, NZ, earthquake in November 2016 are imaged here using Differential Interferometric Synthetic Aperture Radar (DInSAR), high-resolution Digital Surface Model (DSM) differencing, and optical pixel tracking. Full-resolution co- and post-seismic interferograms of Sentinel-1A/B images are constructed using the JPL ISCE software. The OSU SETSM software is used to produce repeat 0.5 m posting DSMs from commercial satellite imagery, which are supplemented with UAV-derived DSMs over the Kaikoura fault rupture on the eastern South Island, NZ. DInSAR provides long-wavelength motions, while DSM differencing and optical pixel tracking provide both horizontal and vertical near-fault motions, improving the modeling of shallow rupture dynamics. The JPL GeoFEST software is used to perform finite element modeling of the fault segments and slip distributions and, in turn, the associated asperity distribution. The asperity profile is then used to simulate event rupture, the spatial distribution of stress drop, and the associated stress changes. Finite element modeling of slope stability is accomplished using the ultra-high-resolution UAV-derived DSMs to examine the evolution of post-earthquake topography, landslide dynamics and volumes. Results include new insights into the shallow dynamics of fault slip and partitioning, estimates of stress change, and improved understanding of their relationship with the associated seismicity, deformation, and triggered cascading hazards.

  17. A Network Simulation Tool for Task Scheduling

    Directory of Open Access Journals (Sweden)

    Ondřej Votava

    2012-01-01

Full Text Available Distributed computing may be looked at from many points of view. Task scheduling is the viewpoint in which a distributed application can be described as a Directed Acyclic Graph, where every node of the graph is executed independently. There are, however, data dependencies, and the nodes have to be executed in a specified order; hence the parallelism of the execution is limited. The scheduling problem is difficult, and therefore heuristics are used. However, many inaccuracies are caused by the model of the system in which the heuristics are tested. In this paper we present a tool for simulating the execution of a distributed application on a “real” computer network, and show how the execution is influenced compared to the model.
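As a sketch of the kind of idealized model such simulators are meant to correct, here is a minimal list scheduler for a DAG on identical processors; it ignores communication costs entirely, which is precisely the simplification the paper argues causes inaccuracies. The task names and durations are illustrative.

```python
def list_schedule(tasks, deps, n_proc=2):
    """Greedy list scheduler for a DAG of tasks on identical processors.

    tasks: dict task name -> duration
    deps : dict task name -> list of prerequisite task names
    Each ready task starts once its prerequisites have finished and a
    processor is free; communication costs are ignored (the idealization
    a network simulator would replace with measured delays).
    Returns (finish_times, makespan)."""
    finish, proc_free = {}, [0.0] * n_proc
    remaining = set(tasks)
    while remaining:
        # tasks whose prerequisites have all been scheduled
        ready = [t for t in remaining
                 if all(p in finish for p in deps.get(t, []))]
        # pick the ready task whose prerequisites finish earliest
        t = min(ready, key=lambda t: max(
            (finish[p] for p in deps.get(t, [])), default=0.0))
        est = max((finish[p] for p in deps.get(t, [])), default=0.0)
        i = min(range(n_proc), key=proc_free.__getitem__)
        finish[t] = max(est, proc_free[i]) + tasks[t]
        proc_free[i] = finish[t]
        remaining.remove(t)
    return finish, max(finish.values())
```

On a diamond graph A→(B,C)→D with two processors, B and C run in parallel after A, and D waits for the slower of the two.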

  18. Ground-Motion Simulations of the 2008 Ms8.0 Wenchuan, China, Earthquake Using Empirical Green's Function Method

    Science.gov (United States)

    Zhang, W.; Zhang, Y.; Yao, X.

    2010-12-01

On May 12, 2008, a huge earthquake with magnitude Ms8.0 occurred in Wenchuan, Sichuan Province of China. This event was the most devastating earthquake in the mainland of China since the 1976 M7.8 Tangshan earthquake. It resulted in tremendous losses of life and property; about 90,000 people were killed. Because it occurred in a mountainous area, this great earthquake and the thousands of aftershocks that followed also caused many other geological disasters, such as landslides, mud-rock flows and “quake lakes” formed by landslide dams. This earthquake occurred along the Longmenshan fault, as the result of motion on a northeast-striking reverse (thrust) fault on the northwestern margin of the Sichuan Basin. The earthquake's epicenter and focal mechanism are consistent with it having occurred as the result of movement on the Longmenshan fault or a tectonically related fault. The earthquake reflects tectonic stresses resulting from the convergence of crustal material slowly moving from the high Tibetan Plateau, to the west, against the strong crust underlying the Sichuan Basin and southeastern China. In this study, we simulate the near-field strong ground motions of this great event based on the empirical Green's function (EGF) method. Referring to the published inversion source models, we first assume that there are three asperities on the rupture area and choose three different small events as the EGFs. Then, we identify the parameters of the source model using a genetic algorithm (GA). We calculate the synthetic waveforms based on the obtained source model and compare them with the observed records. Our result shows that most of the synthetic waveforms agree very well with the observed ones, which demonstrates the validity and stability of the method. Finally, we forward-model the near-field strong ground motions near the source region and try to explain the damage distribution caused by the great earthquake.
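The core of the EGF approach, building a large event's motion by summing delayed, scaled copies of a small event's record, can be caricatured in a few lines. This is a toy sketch only: a real Irikura-type summation includes slip-velocity corrections and magnitude-based scaling laws not shown here, and every parameter below is illustrative.

```python
def egf_synthesis(g, dt, n=3, tau=0.8, c=1.0):
    """Toy empirical Green's function summation: the mainshock motion is
    approximated by superposing n*n delayed, scaled copies of a small
    event record g (sample spacing dt), mimicking rupture propagating
    across an n x n grid of subfaults with a crude per-subfault delay."""
    out = [0.0] * (len(g) + int(n * n * tau / dt))
    for i in range(n):
        for j in range(n):
            delay = int(((i + j) * tau / 2.0) / dt)  # crude rupture delay
            for k, val in enumerate(g):
                out[delay + k] += c * val            # scaled, shifted copy
    return out
```

Because each copy contributes the small event's integral once, the summed record carries n² times the small event's "moment", which is the bookkeeping the real scaling relations formalize.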

  19. Seismic and Tsunami Waveform Simulation based on Dynamic Rupture Scenarios: Anticipated Nankai-Tonankai Earthquakes, Southwest Japan

    Science.gov (United States)

    Saito, T.; Fukuyama, E.; Kim, S.

    2016-12-01

Rupture scenarios of anticipated huge earthquakes, based on earthquake physics and observational records, should be useful for the hazard evaluation of future disastrous earthquakes. Hok et al. (2011, JGR) proposed possible dynamic rupture scenarios of the anticipated Nankai-Tonankai huge earthquakes, southwest Japan, using estimated slip deficit distribution and an appropriate fault friction law. These scenarios are quite useful for studying the details of the wave propagation as well as potential earthquake and tsunami hazard (e.g. Kim et al. 2016, EPS). The objective of this study is to synthesize seismic and tsunami waveforms of the anticipated huge earthquakes, which could be useful for future hazard assessment. We propose a method of synthesizing the waveforms, in particular in the offshore focal area where seismic waves, ocean acoustic waves, and tsunamis exist simultaneously, which makes the wavefield very complicated. We calculated the seismic and tsunami waveforms caused by a dynamic rupture of huge earthquakes (Mw 8.5) in southwestern Japan. There are two kinds of tsunami observations: ocean bottom pressure gauges detect tsunami as pressure change at the sea bottom, and GPS tsunami gauges measure tsunami as vertical displacement at the sea surface. Our simulation results indicated that both tsunami records are significantly contaminated by seismic waves for a few minutes after the earthquake occurrence. The tsunami and seismic waves have different excitation mechanisms: seismic wave excitation strongly depends on the time scale of the rupture (moment rate), while tsunami excitation is determined by the static parameters (fault geometry and seismic moment). Therefore, for a reliable tsunami prediction, it is important to analyze observed tsunami records excluding the seismic waves that behave like tsunami near the source area.

  20. BioNessie - a grid enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X.; Jiang, J.; Ajayi, O.; Gu, X.; Gilbert, D.; Sinnott, R.O.

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scale simulations.

  1. Information diversity in structure and dynamics of simulated neuronal networks.

    Science.gov (United States)

    Mäki-Marttunen, Tuomo; Aćimović, Jugoslava; Nykter, Matti; Kesseli, Juha; Ruohonen, Keijo; Yli-Harja, Olli; Linne, Marja-Leena

    2011-01-01

Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and a similar analysis is done on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviors are analyzed. For the simulation of the network activity the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and somewhere in between in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
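The dynamics side of such a study can be reproduced in miniature. Below is a self-contained Izhikevich network with random connectivity and noisy drive, using regular-spiking parameters; the paper additionally uses Tsodyks dynamical synapses and NETMORPH-generated structure, which this sketch omits, and all sizes and weights are illustrative.

```python
import random

def simulate_izhikevich(n=50, steps=1000, dt=1.0, p_conn=0.1, w=5.0, seed=1):
    """Minimal Izhikevich spiking network: v' = 0.04v^2 + 5v + 140 - u + I,
    u' = a(bv - u); on v >= 30 mV the neuron spikes, v -> c, u -> u + d.
    conn[j] lists the postsynaptic targets of neuron j.  Returns a spike
    raster as a list of (step, neuron) tuples."""
    rng = random.Random(seed)
    a, b, c, d = 0.02, 0.2, -65.0, 8.0          # regular-spiking cell
    v = [-65.0] * n
    u = [b * x for x in v]
    conn = [[j for j in range(n) if j != i and rng.random() < p_conn]
            for i in range(n)]
    spikes, fired_prev = [], []
    for t in range(steps):
        # noisy thalamic drive plus input from last step's spikes
        I = [5.0 * rng.gauss(0.0, 1.0) for _ in range(n)]
        for j in fired_prev:
            for i in conn[j]:
                I[i] += w
        fired_prev = []
        for i in range(n):
            v[i] += dt * (0.04 * v[i] ** 2 + 5.0 * v[i] + 140.0 - u[i] + I[i])
            u[i] += dt * a * (b * v[i] - u[i])
            if v[i] >= 30.0:                     # spike: record and reset
                spikes.append((t, i))
                v[i] = c
                u[i] += d
                fired_prev.append(i)
    return spikes
```

Burst statistics of the kind analyzed in the paper (burst frequency, intraburst spike counts) would then be computed from the returned raster.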

  2. Earthquake-induced landslide-susceptibility mapping using an artificial neural network

    Directory of Open Access Journals (Sweden)

    S. Lee

    2006-01-01

Full Text Available The purpose of this study was to apply and verify landslide-susceptibility analysis techniques using an artificial neural network and a Geographic Information System (GIS), applied to Baguio City, Philippines. The 16 July 1990 earthquake-induced landslides were studied. Landslide locations were identified from interpretation of aerial photographs and field surveys, and a spatial database was constructed from topographic maps, geology, land cover and terrain mapping units. Factors that influence landslide occurrence, such as slope, aspect, curvature and distance from drainage, were calculated from the topographic database. Lithology and distance from faults were derived from the geology database. Land cover was identified from the topographic database. Terrain map units were interpreted from aerial photographs. These factors were used with an artificial neural network to analyze landslide susceptibility. Each factor weight was determined by a back-propagation exercise. Landslide-susceptibility indices were calculated using the back-propagation weights, and susceptibility maps were constructed from GIS data. The susceptibility map was compared with known landslide locations and verified. The demonstrated prediction accuracy was 93.20%.

  3. Japan Data Exchange Network JDXnet and Cloud-type Data Relay Server for Earthquake Observation Data

    Science.gov (United States)

    Takano, K.; Urabe, T.; Tsuruoka, H.; Nakagawa, S.

    2015-12-01

In Japan, high-sensitivity seismic observation and broad-band seismic observation are carried out by several organizations, such as the Japan Meteorological Agency (JMA), the National Research Institute for Earth Science and Disaster Prevention (NIED), nine national universities, and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). The total number of observation stations is about 1400. The total volume of seismic waveform data collected from all these stations is about 1 MByte per second (about 8 to 10 Mbps) using the WIN system (Urabe 1991). JDXnet is the Japan Data eXchange network for earthquake observation data. JDXnet was started in 2007 through the cooperation of researchers at each organization. All the seismic waveform data are available at all organizations in real time. The core of JDXnet is broadcast-type real-time data exchange using the nationwide L2-VPN service offered by JGN-X of NICT and SINET4 of NII. Before the Tohoku earthquake, the nine national universities collected seismic data at their own data centers and then exchanged it with other universities and institutions via JDXnet. In this arrangement, however, if a university's center stopped, none of that university's data could be used even though some of its observation stations were still alive. Because of this problem, we have prepared a data relay server in the data center of SINET4, i.e. a cloud center. This data relay server collects data directly from the observation stations of the universities and delivers the data to all universities and institutions via JDXnet. By using the relay server on the cloud center, even if some universities are affected by a large disaster, data from the surviving stations are not lost. If researchers set up seismometers and send data to the relay server, the data become available to all researchers. This mechanism promotes joint use of the seismometers and joint research activities among researchers nationwide.

  4. Simulation Of Networking Protocols On Software Emulated Network Stack

    Directory of Open Access Journals (Sweden)

    Hrushikesh Nimkar

    2015-08-01

Full Text Available With the increasing number and complexity of network-based applications, the need for easy configuration, development and integration of network applications has taken high precedence. Trivial activities such as configuration can be carried out efficiently if network services are software-based rather than hardware-based. The project aims at enabling network engineers to easily include network functionalities in their configuration and define their own network stack without using the kernel network stack. With this in mind, we have implemented two functionalities: UPnP and mDNS. The multicast Domain Name System (mDNS) resolves host names to IP addresses within small ad-hoc networks, without the need for a special DNS server and its configuration. The mDNS application provides every host with the functionality to register itself with the router, make a multicast DNS request, and resolve it. To make adding network devices and networked programs to a network as easy as plugging a piece of hardware into a PC, we make use of UPnP. The devices and programs find out about the network setup and other networked devices and programs through discovery and advertisement of services, and configure themselves accordingly. The UPnP application provides every host with the functionality of discovering services of other hosts and serving requests on demand. To implement these applications we have used the snabbswitch framework, which is an open source virtualized Ethernet networking stack.

  5. Websim3d: A Web-based System for Generation, Storage and Dissemination of Earthquake Ground Motion Simulations.

    Science.gov (United States)

    Olsen, K. B.

    2003-12-01

Synthetic time histories from large-scale 3D ground motion simulations generally constitute large 'data' sets which typically require hundreds of Mbytes or Gbytes of storage capacity. For the same reason, getting access to a researcher's simulation output, for example for an earthquake engineer to perform site analysis or a seismologist to perform seismic hazard analysis, can be a tedious procedure. To circumvent this problem we have developed a web-based ``community model'' (websim3D) for the generation, storage, and dissemination of ground motion simulation results. Websim3D allows user-friendly and fast access to view and download such simulation results for an earthquake-prone area. The user selects an earthquake scenario from a map of the region, which brings up a map of the area where simulation data are available. By clicking on an arbitrary site location, synthetic seismograms and/or soil parameters for the site can be displayed at fixed or variable scaling and/or downloaded. Websim3D relies on PHP scripts for the dynamic plots of synthetic seismograms and soil profiles. Although the approach is not limited to a specific area, we illustrate the community model with simulation results from the Los Angeles basin, Wellington (New Zealand), and Mexico.

  6. Accelerator and feedback control simulation using neural networks

    International Nuclear Information System (INIS)

    Nguyen, D.; Lee, M.; Sass, R.; Shoaee, H.

    1991-05-01

Unlike present constant-model feedback systems, neural networks can adapt as the dynamics of the process change with time. Using a process model, the ''Accelerator'' network is first trained to simulate the dynamics of the beam for a given beam line. This ''Accelerator'' network is then used to train a second ''Controller'' network which performs the control function. In simulation, the networks are used to adjust corrector magnets to control the launch angle and position of the beam, keeping it on the desired trajectory when the incoming beam is perturbed. 4 refs., 3 figs

  7. Simulation and Evaluation of Ethernet Passive Optical Network

    Directory of Open Access Journals (Sweden)

    Salah A. Jaro Alabady

    2013-05-01

Full Text Available This paper presents the simulation and evaluation of an IEEE 802.3ah-based Ethernet Passive Optical Network (EPON) system using the OPTISM 3.6 simulation program. The simulation program is used to build a typical Ethernet passive optical network and to evaluate the network performance when using the (1580, 1625) nm wavelengths instead of the (1310, 1490) nm wavelengths used in the Optical Line Terminal (OLT) and Optical Network Units (ONUs) of the EPON system architecture, at different bit rates and different fiber lengths. The results showed enhanced network performance: an increased number of nodes (subscribers) connected to the network, increased transmission distance, reduced received power, and a reduced Bit Error Rate (BER).

  8. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.

  9. The Community Seismic Network and Quake-Catcher Network: Monitoring building response to earthquakes through community instrumentation

    Science.gov (United States)

    Cheng, M.; Kohler, M. D.; Heaton, T. H.; Clayton, R. W.; Chandy, M.; Cochran, E.; Lawrence, J. F.

    2013-12-01

    The Community Seismic Network (CSN) and Quake-Catcher Network (QCN) are dense networks of low-cost ($50) accelerometers that are deployed by community volunteers in their homes in California. In addition, many accelerometers are installed in public spaces associated with civic services, publicly-operated utilities, university campuses, and high-rise buildings. Both CSN and QCN consist of observation-based structural monitoring which is carried out using records from one to tens of stations in a single building. We have deployed about 150 accelerometers in a number of buildings ranging between five and 23 stories in the Los Angeles region. In addition to a USB-connected device which connects to the host's computer, we have developed a stand-alone sensor-plug-computer device that directly connects to the internet via Ethernet or WiFi. In the case of CSN, the sensors report data to the Google App Engine cloud computing service consisting of data centers geographically distributed across the continent. This robust infrastructure provides parallelism and redundancy during times of disaster that could affect hardware. The QCN sensors, however, are connected to netbooks with continuous data streaming in real-time via the distributed computing Berkeley Open Infrastructure for Network Computing software program to a server at Stanford University. In both networks, continuous and triggered data streams use a STA/LTA scheme to determine the occurrence of significant ground accelerations. Waveform data, as well as derived parameters such as peak ground acceleration, are then sent to the associated archives. Visualization models of the instrumented buildings' dynamic linear response have been constructed using Google SketchUp and MATLAB. When data are available from a limited number of accelerometers installed in high rises, the buildings are represented as simple shear beam or prismatic Timoshenko beam models with soil-structure interaction. 
Small-magnitude earthquake records
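The STA/LTA trigger mentioned above is simple enough to sketch directly; the window lengths and threshold below are typical values, not necessarily those used by CSN or QCN.

```python
def sta_lta_trigger(x, dt, sta_win=1.0, lta_win=10.0, threshold=4.0):
    """Classic short-term-average / long-term-average trigger on |x|.

    x       : samples (e.g. ground acceleration), spacing dt seconds
    sta_win : short window length (s), tracks the incoming signal
    lta_win : long window length (s), tracks the background noise level
    Returns the sample indices where the STA/LTA ratio first crosses
    the threshold (one index per onset, not per sample above it)."""
    ns, nl = int(sta_win / dt), int(lta_win / dt)
    triggers, prev = [], 0.0
    for i in range(nl, len(x)):
        sta = sum(abs(v) for v in x[i - ns:i]) / ns
        lta = sum(abs(v) for v in x[i - nl:i]) / nl
        ratio = sta / lta if lta > 0.0 else 0.0
        if ratio >= threshold and prev < threshold:   # rising edge only
            triggers.append(i)
        prev = ratio
    return triggers
```

The recomputed window sums make this O(n·nl); a production trigger would update both averages recursively so each sample costs O(1).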

  10. Use of Ground Motion Simulations of a Historical Earthquake for the Assessment of Past and Future Urban Risks

    Science.gov (United States)

    Kentel, E.; Çelik, A.; karimzadeh Naghshineh, S.; Askan, A.

    2017-12-01

Erzincan city, located in the eastern part of Turkey at the junction of three active faults, is one of the most hazardous regions in the world. In addition to several historical events, this city has experienced one of the largest earthquakes of the last century: the 27 December 1939 (Ms=8.0) event. With limited knowledge of the tectonic structure at the time, the city center was relocated almost 5 km to the north after the 1939 earthquake, in fact closer to the existing major strike-slip fault. This decision, coupled with poor construction technologies, led to severe damage during a later event that occurred on 13 March 1992 (Mw=6.6). The 1939 earthquake occurred in the pre-instrumental era in the region, with no local seismograms available, whereas the 1992 event was recorded by only 3 nearby stations. There are empirical isoseismal maps from both events, indirectly indicating the spatial distribution of the damage. In this study, we focus on this region and present a multidisciplinary approach to discuss the different components of uncertainty involved in the assessment and mitigation of seismic risk in urban areas. As an initial attempt, ground motion simulation of the 1939 event is performed to obtain the anticipated ground motions and shaking intensities. Using these quantified results along with the spatial distribution of the observed damage, the relocation decision is assessed and suggestions are provided for future large earthquakes to minimize potential earthquake risks.

  11. Toward Designing a Quantum Key Distribution Network Simulation Model

    OpenAIRE

    Miralem Mehic; Peppino Fazio; Miroslav Voznak; Erik Chromy

    2016-01-01

    As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we described the design of simplified simulation environment of the quantum key distribution network with multiple links and nodes. In such simulation environment, we analyzed several ...
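At the heart of any QKD network simulation is the per-link key generation. A toy BB84 sifting round over an ideal channel (no eavesdropper, no noise) is the simplest possible link model; this is an illustrative sketch, not the simulation environment described in the paper.

```python
import random

def bb84_sift(n_bits=1000, seed=7):
    """Toy BB84 key sifting over an ideal channel.

    Alice sends random bits encoded in randomly chosen bases (0 or 1);
    Bob measures in his own random bases.  Where the bases match, Bob
    reads Alice's bit exactly; where they differ, his result is random
    and the position is discarded during sifting, so on average about
    half of the transmitted bits survive into the shared key."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    alice_key = [b for b, ab, bb in
                 zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key = [b for b, ab, bb in
               zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return alice_key, bob_key
```

A network-level simulator of the kind the paper designs would layer link losses, error estimation, and key-rate accounting over many such links and route keys between nodes.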

  12. Far-field tsunami of 2017 Mw 8.1 Tehuantepec, Mexico earthquake recorded by Chilean tide gauge network: Implications for tsunami warning systems

    Science.gov (United States)

    González-Carrasco, J. F.; Benavente, R. F.; Zelaya, C.; Núñez, C.; Gonzalez, G.

    2017-12-01

The 2017 Mw 8.1 Tehuantepec earthquake generated a moderate tsunami, which was registered by the near-field tide gauge network, activating a tsunami threat state for Mexico issued by the PTWC. In the case of Chile, the forecast tsunami waves indicated amplitudes of less than 0.3 meters above tide level, advising an informative state of threat without activation of evacuation procedures. Nevertheless, during sea level monitoring of the network we detected wave amplitudes (> 0.3 m) indicating a possible change of threat state. Finally, the NTWS maintained the informative level of threat based on mathematical filtering analysis of the sea level records. After the 2010 Mw 8.8 Maule earthquake, the Chilean National Tsunami Warning System (NTWS) increased its observational capabilities to improve early response. The most important operational efforts have focused on strengthening the tide gauge network in the national area of responsibility. Furthermore, technological initiatives such as the Integrated Tsunami Prediction and Warning System (SIPAT) have segmented the area of responsibility into blocks to focus early warning and evacuation procedures on the most affected coastal areas, while maintaining an informative state for areas distant from the near-field earthquake. In the case of far-field events, the NTWS follows the recommendations proposed by the Pacific Tsunami Warning Center (PTWC), including comprehensive monitoring of sea level records from tide gauges and DART (Deep-ocean Assessment and Reporting of Tsunamis) buoys, to evaluate the state of tsunami threat in the area of responsibility. The main objective of this work is to analyze the first-order physical processes involved in the far-field propagation and coastal impact of the tsunami, including implications for decision-making by the NTWS. To explore our main question, we construct a finite-fault model of the 2017 Mw 8.1 Tehuantepec earthquake. We employ the rupture model to simulate a transoceanic tsunami modeled with Neowave2D. We generate synthetic time series at

  13. Simulation and analysis of a meshed district heating network

    International Nuclear Information System (INIS)

    Vesterlund, Mattias; Toffolo, Andrea; Dahl, Jan

    2016-01-01

Highlights: • A method for the detailed simulation of meshed district heating networks is described. • The simulation models developed in Simulink are fully modular. • The complex meshed network in Kiruna (Sweden) is used as a case study. • The simulation results are validated against experimental data. • The patterns of mass and heat flows are visualized and analysed. - Abstract: The flow distribution in a district heating network is no longer obvious once the system design has grown and its complexity increased. As a consequence, the network owner, often the local energy company, needs a simulation program to analyse network behaviour and expand its understanding of the operation of the district heating system. In this paper, a simulation tool developed in MATLAB/Simulink is applied to analyse the flow distribution in the district heating network of the town of Kiruna (Sweden). The network in Kiruna has been developing since the 1960s and is today a complex network with a meshed structure, i.e. it is formed by a set of loops from which secondary branches depart. The simulation tool is part of a methodology that has been developed specifically to analyse the flow pattern in such networks without altering their physical structure, and it is expected to be a valuable tool for the redesign of the network in the forthcoming relocation of some of the urban districts. The results for the current network configuration show that only a few pipes in the network exceed the heat flow levels recommended by pipe manufacturers. The largest drops in pressure and temperature from the heat production site to the nodes serving the main consumer areas are within 1.2 bar and 9 °C on the days of highest demand.
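Flow distribution in a meshed (looped) network of this kind is classically balanced with the Hardy Cross method, which the following sketch illustrates; it is not the MATLAB/Simulink tool from the paper, and the pipe resistances and quadratic head-loss law are illustrative assumptions.

```python
def hardy_cross(loops, resistances, flows, n_iter=50):
    """Hardy Cross loop balancing for a meshed pipe network.

    loops       : list of loops; each loop is a list of (pipe_id, sign),
                  sign +1 if the assumed flow direction follows the loop
                  orientation, -1 otherwise
    resistances : dict pipe_id -> r, with head loss h = r * q * |q|
    flows       : dict pipe_id -> initial flow guess satisfying node
                  continuity (the loop corrections preserve it)
    Repeatedly applies dq = -sum(r q|q|) / sum(2 r |q|) around each loop
    until the head losses balance.  Returns the corrected flows."""
    for _ in range(n_iter):
        for loop in loops:
            num = sum(s * resistances[p] * flows[p] * abs(flows[p])
                      for p, s in loop)
            den = sum(2.0 * resistances[p] * abs(flows[p])
                      for p, _ in loop)
            if den == 0.0:
                continue
            dq = -num / den
            for p, s in loop:
                flows[p] += s * dq
    return flows
```

For two parallel pipes sharing a total flow, the method converges to the split where both paths dissipate equal head, while the total flow through the pair stays fixed.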

  14. WCDMA Mobile Radio Network Simulator with Hybrid Link Adaptation

    Directory of Open Access Journals (Sweden)

    Vladimir Wieser

    2005-01-01

Full Text Available The main aim of this article is to describe a mobile radio network model which is used to simulate realistic conditions in a mobile radio network and supports several link adaptation algorithms. The algorithms were designed to increase the efficiency of data transmission between user equipment and the base station (uplink). The most important property of the model is its ability to simulate several radio cells (base stations) and their mutual interactions. The model is built on the basic principles of a UMTS network and takes into account the parameters of real mobile radio networks.

  15. Graphical user interface for wireless sensor networks simulator

    Science.gov (United States)

    Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy

    2008-01-01

    Wireless Sensor Networks (WSN) are currently a very popular area of development. They are suited to many applications, from military through environment monitoring, healthcare, home automation, and others. Those networks, when working in a dynamic, ad-hoc model, need effective protocols which must differ from common computer network algorithms. Research on those protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such large networks take considerable effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator dedicated to WSN studies, especially the evaluation of routing and data-link protocols.

  16. A Flexible System for Simulating Aeronautical Telecommunication Network

    Science.gov (United States)

    Maly, Kurt; Overstreet, C. M.; Andey, R.

    1998-01-01

    At Old Dominion University, we have built an Aeronautical Telecommunication Network (ATN) simulator, with NASA as the funding provider. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of it. We then describe our simulator architecture, highlighting some of the design specifications, scheduling algorithms, and user interface. Finally, we provide the results of performance studies on this simulator.

  17. S-net : Construction of large scale seafloor observatory network for tsunamis and earthquakes along the Japan Trench

    Science.gov (United States)

    Mochizuki, M.; Uehira, K.; Kanazawa, T.; Shiomi, K.; Kunugi, T.; Aoi, S.; Matsumoto, T.; Sekiguchi, S.; Yamamoto, N.; Takahashi, N.; Nakamura, T.; Shinohara, M.; Yamada, T.

    2017-12-01

    NIED launched a project to construct a seafloor observatory network for tsunamis and earthquakes after the occurrence of the 2011 Tohoku Earthquake, in order to enhance the reliability of tsunami and earthquake early warnings. The observatory network was named "S-net". The S-net project has been financially supported by MEXT. The S-net consists of 150 seafloor observatories which are connected in line by submarine optical cables, with a total cable length of about 5,500 km. The S-net covers the focal region of the 2011 Tohoku Earthquake and its vicinity. Each observatory is equipped with two high-sensitivity pressure gauges serving as tsunami meters and four sets of three-component seismometers. The S-net is composed of six segment networks. Five of the six segments had already been installed, and installation of the last segment, covering the outer-rise area, was finished by the end of FY2016. The outer-rise segment differs from the other five segments of the S-net in two respects: deep water and long cable distance. Most of the 25 observatories on the outer-rise segment are located at depths greater than 6,000 m WD. In particular, three observatories sit on seafloor deeper than about 7,000 m WD and are therefore equipped with pressure gauges rated for use even at 8,000 m WD. The total length of the submarine cables of the outer-rise segment is about twice that of the other segments. The longer the cable system, the higher the supply voltage needed; thus the observatories on the outer-rise segment are designed to withstand higher voltages.
For the outer-rise segment cable, we employ a low-loss dispersion-managed line formed by combining multiple optical fibers, in order to achieve long-distance, high-speed, large-capacity data transmission. With the installation of the outer-rise segment finished, full-scale operation of the S-net has started.

  18. Observations Using the Taipei Basin Broadband Downhole Seismic Network: The 26 December 2006, Pingtung Earthquake Doublet, Taiwan

    Directory of Open Access Journals (Sweden)

    Win-Gee Huang

    2008-01-01

    Full Text Available To monitor fault activity in the Taipei area, a new broadband downhole seismic network comprising three stations was established in the Taipei Basin over a period of three years, 2005 - 2007. The network geometry is a triangle with a station spacing of about 12 km, covering the entire Taipei Basin. Each station has two boreholes of different depths, the deepest being 150 m, containing modern instruments including a low-gain broadband seismometer. We report our first experience with the installation and operation of the broadband downhole seismic network in the Taipei Basin. Some representative records from the Pingtung earthquake doublet of December 2006 are shown here. Ground displacement during the Pingtung earthquake doublet can be recovered from the velocity records without the baseline corrections that are required for the acceleration records. Our network offers excellent data for accurate and effective characterization of seismic motion in the study area. Seismic data from this network will be shared with other research institutions in Taiwan and abroad for further research.

  19. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    Science.gov (United States)

    Yu, E.; Bhaskaran, A.; Chen, S. L.; Andrews, J. R.; Thomas, V. I.; Hauksson, E.; Clayton, R. W.

    2016-12-01

    The Southern California Earthquake Data Center (SCEDC) archives continuous and triggered data from nearly 9429 data channels from 513 Southern California Seismic Network recorded stations. The SCEDC provides public access to these earthquake parametric and waveform data through web services, its website http://scedc.caltech.edu, and through client applications such as STP. This poster will describe the most recent significant developments at the SCEDC. The SCEDC now provides web services to access its holdings. Event Parametric Data (FDSN Compliant): http://service.scedc.caltech.edu/fdsnws/event/1/ Station Metadata (FDSN Compliant): http://service.scedc.caltech.edu/fdsnws/station/1/ Waveforms (FDSN Compliant): http://service.scedc.caltech.edu/fdsnws/dataselect/1/ Event Windowed Waveforms, phases: http://service.scedc.caltech.edu/webstp/ In an effort to assist researchers accessing catalogs from multiple seismic networks, the SCEDC has entered its earthquake parametric catalog into the ANSS Common Catalog (ComCat); origin, phase, and magnitude information have been loaded. The SCEDC data holdings now include a double-difference catalog (Hauksson et al. 2011) spanning 1981 through 2015, available via STP, and a focal mechanism catalog (Yang et al. 2011). As part of a NASA/AIST project in collaboration with JPL and SIO, the SCEDC now archives and distributes real-time 1 Hz streams of GPS displacement solutions from the California Real Time Network. The SCEDC has implemented the Continuous Wave Buffer (CWB) to manage its waveform archive and allow users to access continuous data within seconds of real time; this software was developed at, and is currently in use by, the NEIC. SCEDC has moved its website (http://scedc.caltech.edu) to the cloud: the Recent Earthquake Map and static web pages are now hosted by Amazon Web Services. This enables the web site to serve a large number of users without competing for resources needed by SCSN/SCEDC mission-critical operations.
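
As a sketch of how such an FDSN-compliant event service is queried, the snippet below only constructs the request URL; the parameter names follow the FDSN event web-service specification, and the date and magnitude values are arbitrary examples:

```python
from urllib.parse import urlencode

# Build an FDSN event query URL against the SCEDC web service listed above.
def scedc_event_url(start, end, min_mag, fmt="text"):
    base = "http://service.scedc.caltech.edu/fdsnws/event/1/query"
    params = {"starttime": start, "endtime": end,
              "minmagnitude": min_mag, "format": fmt}
    return base + "?" + urlencode(params)
```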

  20. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments) which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads; and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
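
The appointment idea can be made concrete with a small sketch: because service is FCFS and service times can be pre-sampled, the departure times of jobs already present are fully determined, so a processor can promise its neighbors that no new departure will occur before its backlog clears plus one minimal service time. The function and its arguments are an illustrative reconstruction, not the paper's implementation:

```python
# "Appointment" lookahead for an FCFS server: no job that has not yet arrived
# can depart before the current backlog clears plus one minimal service time.

def fcfs_lookahead(now, in_service_remaining, queued_service_times, min_service):
    """Earliest possible departure time of any job that has NOT yet arrived."""
    backlog_clears = now + in_service_remaining + sum(queued_service_times)
    return backlog_clears + min_service
```

A neighbor receiving this bound can safely simulate ahead to that time without risk of a causality violation.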

  1. IMPROVEMENT SUPPORT RESEARCH OF LOCAL DISASTER PREVENTION POWER USING THE FIRE SPREADING SIMULATION SYSTEM IN CASE OF A BIG EARTHQUAKE

    Science.gov (United States)

    Futagami, Toru; Omoto, Shohei; Hamamoto, Kenichirou

    This research describes risk communication toward improving local disaster prevention capability in Gobusho town in Marugame city, the only high-density urban area in Kagawa Prefecture. Specifically, the key persons of the area and the authors report practice-oriented research on improving local disaster prevention capability through the area's PDCA cycle, including the formation of local voluntary disaster management organizations and the implementation of emergency drills, applying a fire-spreading simulation system for a large earthquake. The fire-spreading simulation system that the authors are developing is also described in terms of the role it has played, and the issues that remain, as a support system for the business continuity planning (BCP) of the local community.

  2. Visualization of strong ground motion calculated from the numerical simulation of the Hyogo-ken Nanbu earthquake; Suchi simulation de miru Hyogoken nanbu jishin no kyoshindo

    Energy Technology Data Exchange (ETDEWEB)

    Furumura, T. [Hokkaido Univ. of Education, Sapporo (Japan); Koketsu, K. [The University of Tokyo, Tokyo (Japan). Earthquake Research Institute

    1996-10-01

    The Hyogo-ken Nanbu earthquake, with its focus beneath the Akashi Strait, caused enormous damage in and around Awaji Island and Kobe City in 1995. It is clear that the basement structure, which deepens steeply at Kobe City from the Rokko Mountains towards the coast, and the focus beneath it were closely related to the local generation of strong ground motion. The generation process of the strong ground motion was discussed using 2D and 3D numerical simulation methods. The 3D pseudospectral method was used for the calculation. A volume of 51.2 km × 25.6 km × 25.6 km was selected and discretized with a lattice interval of 200 m. Consequently, it was found that the steeply deepening basement structure, the soft and weak geological deposits lying thickly on the basement, and earthquake faults running under the boundary between base rock and sediments contributed greatly to the generation of strong ground motion. Numerical simulation can be expected to predict the strong ground motion from shallow earthquakes. 9 refs., 7 figs.
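
The core operator of the pseudospectral method mentioned above is a spatial derivative evaluated in the wavenumber domain; below is a 1-D sketch on an assumed periodic grid (the study itself works on a 3-D lattice):

```python
import numpy as np

# Pseudospectral spatial derivative: transform to the wavenumber domain,
# multiply by ik, and transform back. Accuracy is spectral for smooth,
# periodic fields, which is why the method suits wave-propagation grids.

def spectral_derivative(f, dx):
    n = len(f)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
```

On a 64-point grid the derivative of sin(x) matches cos(x) to near machine precision, far beyond what a 200 m finite-difference stencil of comparable cost achieves.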

  3. Broadband Strong Ground Motion Simulation For a Potential Mw 7.1 Earthquake on The Enriquillo Fault in Haiti

    Science.gov (United States)

    Douilly, R.; Mavroeidis, G. P.; Calais, E.

    2015-12-01

    The devastating 2010 Haiti earthquake showed the need for greater vigilance toward mitigation of future earthquakes in the region. Previous studies have shown that this earthquake did not occur on the Enriquillo Fault, the main plate-boundary fault running through the heavily populated Port-au-Prince region, but on the nearby and previously unknown Léogâne transpressional fault. Slip on that fault increased stresses on the Enriquillo Fault, mostly in the region closest to Port-au-Prince, the most populated city of the country. Here we investigate the ground shaking level in this region if a rupture similar to the Mw 7.0 2010 Haiti earthquake occurred on the Enriquillo fault. We use a finite element method and assumptions on regional stress to simulate low-frequency dynamic rupture propagation for a 53 km long segment. We introduce heterogeneity by creating two slip patches with shear traction 10% greater than the initial shear traction on the fault. The final slip distribution is similar in distribution and magnitude to previous finite fault inversions for the 2010 Haiti earthquake. The high-frequency ground motion components are calculated using the specific barrier model, and the hybrid synthetics are obtained by combining the low frequencies (f < 1 Hz) from the dynamic rupture simulation with the high frequencies (f > 1 Hz) from the stochastic simulation using matched filtering at a crossover frequency of 1 Hz. The average horizontal peak ground acceleration, computed at several sites of interest through Port-au-Prince, has a value of 0.35g. We also compute response spectra at those sites and compare them to the spectra from the microzonation study.
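
The hybrid combination step can be sketched as complementary filtering in the frequency domain; the sharp FFT-domain split below is a simplified stand-in for the matched filtering used in the study, and the input traces are assumed to share the same length and sampling interval:

```python
import numpy as np

# Combine the low-frequency content (f < fc) of a deterministic synthetic with
# the high-frequency content (f >= fc) of a stochastic synthetic. A hard
# spectral split is used here instead of matched filter tapers for brevity.

def hybrid_broadband(low_syn, high_syn, dt, fc=1.0):
    n = len(low_syn)
    f = np.abs(np.fft.fftfreq(n, d=dt))
    L = np.fft.fft(low_syn)
    H = np.fft.fft(high_syn)
    return np.real(np.fft.ifft(np.where(f < fc, L, H)))
```

If the low-frequency trace contains only energy below 1 Hz and the stochastic trace only energy above it, the hybrid reduces to their sum, as expected.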

  4. Caltech/USGS Southern California Seismic Network (SCSN): Infrastructure upgrade to support Earthquake Early Warning (EEW)

    Science.gov (United States)

    Bhadha, R. J.; Hauksson, E.; Boese, M.; Felizardo, C.; Thomas, V. I.; Yu, E.; Given, D. D.; Heaton, T. H.; Hudnut, K. W.

    2013-12-01

    The SCSN is the modern digital ground-motion seismic network in Southern California and performs the following tasks: 1) Operates remote seismic stations and the central data processing systems in Pasadena; 2) Generates and reports real-time products including location, magnitude, ShakeMap, aftershock probabilities, and others; 3) Responds to FEMA, CalOES, media, and public inquiries about earthquakes; 4) Manages the production, archival, and distribution of waveforms, phase picks, and other data at the SCEDC; 5) Contributes to the development and implementation of the demonstration EEW system called CISN ShakeAlert. Initially, the ShakeAlert project was funded through the US Geological Survey (USGS), and in early 2012 the Gordon and Betty Moore Foundation provided three years of new funding for EEW research and development for the US west coast. Recently, we have also received Urban Areas Security Initiative (UASI) funding to enhance EEW capabilities for the local UASI region by making our system faster, more reliable, and more redundant than the existing one. The additional and upgraded stations will be capable of decreasing latency and ensuring data delivery by using more reliable and redundant telemetry pathways. Overall, this will enhance the reliability of earthquake early warnings by providing denser station coverage and more resilient data centers than before. * Seismic datalogger upgrade: replaces existing dataloggers with modern equipment capable of sending one-second uncompressed packets and utilizing redundant Ethernet telemetry. * GPS upgrade: replaces the existing GPS receivers and antennas, especially at "zipper array" sites near the major faults, with receivers that perform on-board precise point positioning to calculate position and velocity in real time and stream continuous data for use in EEW calculations. * New co-located seismic/GPS stations: increases station density and reduces early warning delays that are incurred by travel

  5. Interfacing Network Simulations and Empirical Data

    Science.gov (United States)

    2009-05-01

    appropriate. The quadratic assignment procedure (QAP) (Krackhardt, 1987) could be used to compare the correlation between networks; however, the... Social roles and the evolution of networks in extreme and isolated environments. Mathematical Sociology, 27: 89-121. Krackhardt, D. (1987). QAP

  6. Efficient dam break flood simulation methods for developing a preliminary evacuation plan after the Wenchuan Earthquake

    Directory of Open Access Journals (Sweden)

    Y. Li

    2012-01-01

    Full Text Available The Xiaojiaqiao barrier lake, the second-largest barrier lake formed by the Wenchuan Earthquake, seriously threatened the lives and property of the population downstream. The lake was finally dredged successfully on 7 June 2008. Because of the limited time available to conduct an inundation potential analysis and make an evacuation plan, barrier lake information extraction and real-time dam-break flood simulation had to be carried out quickly, integrating remote sensing and geographic information system (GIS) techniques with hydrologic/hydraulic analysis. In this paper, a technical framework and several key techniques for this real-time preliminary evacuation planning are introduced. An object-oriented method was used to extract hydrological information on the barrier lake from unmanned aerial vehicle (UAV) remote sensing images. The real-time flood routing was calculated using the shallow-water equations, which were solved by means of a finite volume scheme on multiblock structured grids. The results of the hydraulic computations are visualized and analyzed in a 3-D geographic information system for inundation potential analysis, and an emergency response plan is made. The results show that if either a full-break or a half-break situation had occurred at the Chapinghe barrier lake on 19 May 2008, the Xiaoba Town region and the Sangzao Town region would have been affected, but the downstream towns would have been less influenced. Preliminary evacuation plans under different dam-break situations can be effectively made using these methods.
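
The finite-volume update at the heart of such dam-break computations can be sketched in 1-D; the Lax-Friedrichs flux, the fixed boundary cells, and the dam-break initial state in the test below are simplifications of the paper's multiblock structured-grid scheme:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def flux(h, hu):
    """Physical flux of the 1-D shallow-water equations."""
    u = hu / np.where(h > 0, h, 1.0)
    return np.array([hu, hu * u + 0.5 * g * h * h])

def lax_friedrichs_step(h, hu, dx, dt):
    """One explicit finite-volume step with Lax-Friedrichs interface fluxes."""
    U = np.array([h, hu])
    F = flux(h, hu)
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * (dx / dt) * (U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= (dt / dx) * (Fi[:, 1:] - Fi[:, :-1])  # boundary cells held fixed
    return Un[0], Un[1]
```

Starting from 2 m of water upstream and 1 m downstream with zero discharge, repeated calls propagate a bore downstream and a rarefaction upstream while conserving mass away from the boundaries.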

  7. Numerical Simulation of Tire Reinforced Sand behind Retaining Wall Under Earthquake Excitation

    Directory of Open Access Journals (Sweden)

    A. Lazizi

    2014-04-01

    Full Text Available This paper studies numerical simulations of retaining walls supporting tire-reinforced sand subjected to El Centro earthquake excitation, using finite element analysis. Four cases are studied: a cantilever retaining wall supporting sand under static and dynamic excitation, and a cantilever retaining wall supporting waste-tire-reinforced sand under static and dynamic excitation. Analytical external stability analyses of the selected retaining wall show that, for all four cases, the factors of safety for base sliding and overturning are less than the default minimum values. Numerical analyses show that there are no large differences between the wall supporting waste-tire-reinforced sand and the wall supporting sand under static loading. Under seismic excitation, the peak Von Mises stress for the retaining wall supporting waste-tire-reinforced sand is 3.46 times lower than for the retaining wall supporting sand. The variation with depth of the horizontal displacement (U1) and vertical displacement (U2) near the retaining wall is also presented.

  8. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    Full Text Available As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we describe the design of a simplified simulation environment for a quantum key distribution network with multiple links and nodes. In this environment, we analyzed several routing protocols in terms of the number of routing packets sent, goodput, and packet delivery ratio of the data traffic flow, using the NS-3 simulator.
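
The route selection such protocols perform can be sketched with a plain Dijkstra search over per-link costs; the four-node topology and the costs below are invented for illustration, and the actual study evaluates full protocol implementations in NS-3 rather than a standalone search like this:

```python
import heapq

# Least-cost route through a link-weighted network, a stand-in for the routing
# decision a QKD network protocol must make. graph[u] maps neighbor -> cost
# (in a QKD setting the cost might reflect remaining key material on the link).

def best_route(graph, src, dst):
    """Dijkstra search; returns (total_cost, path) or (inf, []) if unreachable."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph[u].items():
            if v not in seen:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return float('inf'), []
```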

  9. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  10. WDM Systems and Networks Modeling, Simulation, Design and Engineering

    CERN Document Server

    Ellinas, Georgios; Roudas, Ioannis

    2012-01-01

    WDM Systems and Networks: Modeling, Simulation, Design and Engineering provides readers with the basic skills, concepts, and design techniques used to begin the design and engineering of optical communication systems and networks at various layers. The latest semi-analytical system simulation techniques are applied to optical WDM systems and networks, and a review of the various current areas of optical communications is presented. Simulation is mixed with experimental verification and engineering to present the industry as well as state-of-the-art research. This contributed volume is divided into three parts, accommodating different readers interested in various types of networks and applications. The first part of the book presents modeling approaches and simulation tools mainly for the physical layer (including transmission effects, devices, subsystems, and systems), whereas the second part features more engineering/design issues for various types of optical systems including ULH, access, and in-building system...

  11. Ground-Motion Simulations of Scenario Earthquakes on the Hayward Fault

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Graves, R; Larsen, S; Ma, S; Rodgers, A; Ponce, D; Schwartz, D; Simpson, R; Graymer, R

    2009-03-09

    We compute ground motions in the San Francisco Bay area for 35 Mw 6.7-7.2 scenario earthquake ruptures involving the Hayward fault. The modeled scenarios vary in rupture length, hypocenter, slip distribution, rupture speed, and rise time. This collaborative effort involves five modeling groups, using different wave propagation codes and domains of various sizes and resolutions, computing long-period (T > 1-2 s) or broadband (T > 0.1 s) synthetic ground motions for overlapping subsets of the suite of scenarios. The simulations incorporate 3-D geologic structure and illustrate the dramatic increase in intensity of shaking for Mw 7.05 ruptures of the entire Hayward fault compared with Mw 6.76 ruptures of the southern two-thirds of the fault. The area subjected to shaking stronger than MMI VII increases from about 10% of the San Francisco Bay urban area in the Mw 6.76 events to more than 40% of the urban area for the Mw 7.05 events. Similarly, combined rupture of the Hayward and Rodgers Creek faults in a Mw 7.2 event extends shaking stronger than MMI VII to nearly 50% of the urban area. For a given rupture length, the synthetic ground motions exhibit the greatest sensitivity to the slip distribution and location inside or near the edge of sedimentary basins. The hypocenter also exerts a strong influence on the amplitude of the shaking due to rupture directivity. The synthetic waveforms exhibit a weaker sensitivity to the rupture speed and are relatively insensitive to the rise time. The ground motions from the simulations are generally consistent with Next Generation Attenuation ground-motion prediction models but contain long-period effects, such as rupture directivity and amplification in shallow sedimentary basins that are not fully captured by the ground-motion prediction models.

  12. Methods, Computational Platform, Verification, and Application of Earthquake-Soil-Structure-Interaction Modeling and Simulation

    Science.gov (United States)

    Tafazzoli, Nima

    Seismic response of soil-structure systems has attracted significant attention for a long time, which is quite understandable given the size and complexity of such systems. Three important aspects of earthquake-soil-structure-interaction (ESSI) modeling are the consistent tracking of input seismic energy and of the energy-dissipation mechanisms within the system, the numerical techniques used to simulate ESSI dynamics, and the influence of uncertainty on ESSI simulations. This dissertation contributes to the development of one such tool, the ESSI Simulator, including an extensive verification and validation suite for it. Verification and validation are important for high-fidelity numerical predictions of the behavior of complex systems. The simulator uses the finite element method as a numerical tool to obtain solutions for a large class of engineering problems such as liquefaction, earthquake-soil-structure interaction, site effects, piles, pile groups, probabilistic plasticity, stochastic elastic-plastic FEM, and detailed large-scale parallel models. The response of full three-dimensional soil-structure-interaction simulations of complex structures is evaluated under 3D wave propagation. The Domain Reduction Method is used to apply the forces in a two-step procedure for dynamic analysis, with the goal of reducing the large computational domain. The damping of waves at the boundary of the finite element models is studied using different damping patterns, applied in the layer of elements outside the Domain Reduction Method zone in order to absorb the residual waves radiating from the boundary due to structural excitation. An extensive parametric study of the dynamic soil-structure interaction of a complex system is carried out, and results for different cases of soil strength and foundation embedment are compared. A set of constitutive models with high computational efficiency is developed and implemented in the ESSI Simulator

  13. Simulated evolution of signal transduction networks.

    Directory of Open Access Journals (Sweden)

    Mohammad Mobashir

    Full Text Available Signal transduction is the process of routing information inside cells when receiving stimuli from their environment that modulate their behavior and function. In such biological processes, the receptors, after receiving the corresponding signals, activate a number of biomolecules which eventually transduce the signal to the nucleus. The main objective of our work is to develop a theoretical approach which will help to better understand the behavior of signal transduction networks due to changes in kinetic parameters and network topology. Using an evolutionary algorithm, we designed a mathematical model which performs basic signaling tasks similar to the signaling process of living cells. We use a simple dynamical model of signaling networks of interacting proteins and their complexes, and we study the evolution of signaling networks described by mass-action kinetics. The fitness of the networks is determined by the number of signals detected out of a series of signals of varying strength. The mutations include changes in the reaction rates and network topology. We found that stronger interactions and the addition of new nodes lead to improved evolved responses. The strength of the signal does not play any role in determining the response type. This model will help in understanding the dynamic behavior of the proteins involved in signaling pathways, as well as the robustness of the output-response kinetics to changes in reaction rates and network topology.
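
The evolutionary loop described above can be sketched with a single mass-action activation step whose "on" rate is mutated, fitness being the number of test signals whose steady response crosses a detection threshold. All rates, signal strengths, and the threshold below are illustrative assumptions, not the paper's model:

```python
import random

def response(k_on, k_off, signal, t_end=50.0, dt=0.01):
    """Euler-integrate the mass-action kinetics dx/dt = k_on*s*(1-x) - k_off*x."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (k_on * signal * (1.0 - x) - k_off * x)
    return x

def fitness(k_on, k_off, signals, threshold=0.5):
    """Number of test signals whose steady response exceeds the threshold."""
    return sum(response(k_on, k_off, s) > threshold for s in signals)

def evolve(generations=30, seed=1):
    """Hill-climb on the on-rate, accepting neutral or better mutants."""
    rng = random.Random(seed)
    k_on, k_off = 0.1, 1.0
    signals = [0.5, 1.0, 2.0, 4.0]
    best = fitness(k_on, k_off, signals)
    for _ in range(generations):
        cand = max(1e-3, k_on * rng.uniform(0.5, 2.0))  # mutate the on-rate
        f = fitness(cand, k_off, signals)
        if f >= best:
            k_on, best = cand, f
    return best
```

The steady state of this one-node model is x = k_on·s/(k_on·s + k_off), so stronger interactions (larger k_on) let weaker signals cross the detection threshold, matching the trend reported in the abstract.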

  14. Earthquake Monitoring with the MyShake Global Smartphone Seismic Network

    Science.gov (United States)

    Inbal, A.; Kong, Q.; Allen, R. M.; Savran, W. H.

    2017-12-01

    Smartphone arrays have the potential to significantly improve seismic monitoring in sparsely instrumented urban areas. This approach benefits from the dense spatial coverage of users, as well as from the communication and computational capabilities built into smartphones, which facilitate the transfer and analysis of big seismic data. Advantages in data acquisition with smartphones trade off against factors such as the low-quality sensors installed in phones, high noise levels, and strong network heterogeneity, all of which limit effective seismic monitoring. Here we utilize network and array-processing schemes to assess event detectability with the MyShake global smartphone network. We examine the benefits of using this network in either triggered or continuous modes of operation. A global database of ground motions measured on stationary phones triggered by M2-6 events is used to establish detection probabilities. We quantify the probability of detecting an M=3 event with a single phone; locations derived from 20 nearby phones closely match the regional catalog locations. We use simulated broadband seismic data to examine how location uncertainties vary with user distribution and noise levels. To this end, we have developed an empirical noise model for the metropolitan Los Angeles (LA) area. We find that densities larger than 100 stationary phones/km2 are required to accurately locate M 2 events in the LA basin. Given the projected MyShake user distribution, that condition may be met within the next few years.
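
A network-level detectability estimate of the kind described above can be sketched with a binomial model: if each stationary phone triggers independently on a given event with some probability, the chance that at least k phones trigger follows directly. The trigger probability and the declaration threshold below are illustrative assumptions, not MyShake's empirical values:

```python
from math import comb

# Probability that at least k_min of n_phones trigger, assuming independent
# per-phone trigger probability p_single (computed as 1 minus the miss cases).

def p_network_detect(n_phones, p_single, k_min=4):
    miss = sum(comb(n_phones, k) * p_single**k * (1.0 - p_single)**(n_phones - k)
               for k in range(k_min))
    return 1.0 - miss
```

With k_min = 1 this reduces to the familiar 1 - (1 - p)^n, which shows why dense phone coverage compensates for unreliable individual sensors.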

  15. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap < 180° and distance to the first station lower than 15 km). Errors in velocity models and the accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic data, quarry blasts, and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors in velocity models and in arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
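
The Equal Differential Time criterion behind NLLoc can be sketched as a grid search: for every station pair, compare observed and predicted arrival-time differences, which cancels the unknown origin time. The homogeneous 6 km/s velocity and the coarse grid below are illustrative simplifications of the probabilistic 3-D search described above:

```python
import itertools
import math

# EDT misfit: sum over station pairs of the mismatch between observed and
# calculated differential times. Origin time cancels in each difference.

def edt_misfit(src, stations, t_obs, v=6.0):
    t_cal = [math.dist(src, s) / v for s in stations]
    return sum(abs((t_obs[i] - t_obs[j]) - (t_cal[i] - t_cal[j]))
               for i, j in itertools.combinations(range(len(stations)), 2))

def locate(stations, t_obs, grid):
    """Return the grid node minimising the EDT misfit."""
    return min(grid, key=lambda src: edt_misfit(src, stations, t_obs))
```

Because only differential times enter the misfit, a constant shift in all observed arrivals (an unknown origin time) leaves the located hypocentre unchanged.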

  16. A new C++ implemented feed forward neural network simulator

    Directory of Open Access Journals (Sweden)

    J. Sütő

    2013-12-01

    Full Text Available This paper presents the implementation of a simulator application for feed-forward neural networks, implemented in the Qt application framework. The paper demonstrates the object-oriented design and the performance of the software. The main topics cover the class organization and test results in which the Matlab neural network toolbox was used as a reference.

  17. A new C++ implemented feed forward neural network simulator

    OpenAIRE

    J. Sütő; S. Oniga

    2013-01-01

    This paper presents the implementation of a simulator application for feed-forward neural networks, implemented in the Qt application framework. The paper demonstrates the object-oriented design and the performance of the software. The main topics cover the class organization and test results in which the Matlab neural network toolbox was used as a reference.

  18. EVALUATING AUSTRALIAN FOOTBALL LEAGUE PLAYER CONTRIBUTIONS USING INTERACTIVE NETWORK SIMULATION

    Directory of Open Access Journals (Sweden)

    Jonathan Sargent

    2013-03-01

    Full Text Available This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution (and, hence, an optimal team) could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".

  19. Modeling aftershock rates using simulations of spontaneous earthquake nucleation on rate and state faults

    Science.gov (United States)

    Kaneko, Y.; Lapusta, N.

    2005-12-01

    Large earthquakes are followed by increased seismic activity, usually referred to as aftershock sequences, that decays to the background rate over time. The decay of aftershocks is well described empirically by Omori's law. Dieterich (1994) proposed that Omori's law could result from perturbing, by static stress steps, a population of nucleation sites governed by laboratory-derived rate and state friction. He used a one-degree-of-freedom spring-slider system to represent elastic interactions and made a simplified assumption about frictional behavior during nucleation. The model was further explored in a number of studies (e.g., Gomberg et al., 2000) and used to interpret observations (e.g., Toda et al., 1998). In this study, we explore the consequences of Dieterich's approach using models of faults embedded in an elastic continuum, where the nucleation process can be more complicated than assumed in Dieterich's model. Our approach differs from previous studies of aftershock rates with rate and state friction in that here, nucleation processes are simulated as part of spontaneously occurring earthquake sequences in continuum fault models. We use two 2D models of a vertical strike-slip fault, the depth-variable model (Rice, 1993; Lapusta et al., 2000) and the crustal-plane model (Myers et al., 1996). We find that nucleation processes in continuum models and the resulting aftershock rates are well described by the model of Dieterich (1994) when Dieterich's assumption that the state variable of the rate and state friction law lags significantly behind its steady-state value holds during the entire nucleation process. By contrast, aftershock rates in models where the state variable assumption is violated for a significant portion of the nucleation process exhibit behavior different from Dieterich's model.
The state variable assumption is significantly violated, and hence the aftershock rates are affected, when stress heterogeneities are present within the nucleation
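The Dieterich (1994) aftershock-rate expression the record builds on can be evaluated directly: after a positive static stress step Δτ, the seismicity rate relaxes from an elevated level back to the background rate r over the aftershock duration t_a = Aσ/τ̇, with an Omori-like 1/t decay in between. The parameter values below are illustrative placeholders, not values from the study:

```python
import numpy as np

# Illustrative parameters (not from the study)
r = 1.0                    # background seismicity rate, events/day
A_sigma = 0.1e6            # A*sigma, Pa
tau_dot = 0.1e6 / 365.0    # background stressing rate, Pa/day
d_tau = 0.5e6              # coseismic static stress step, Pa
t_a = A_sigma / tau_dot    # aftershock duration (here 365 days)

def dieterich_rate(t):
    """Seismicity rate after a positive stress step (Dieterich, 1994):
    R(t) = r / ((exp(-d_tau/A_sigma) - 1) * exp(-t/t_a) + 1)."""
    return r / ((np.exp(-d_tau / A_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

t = np.logspace(-3, 3, 200)   # days after the mainshock
R = dieterich_rate(t)         # decays monotonically toward r
```

Immediately after the step the rate is amplified by roughly exp(Δτ/Aσ); for t much larger than t_a it returns to the background rate.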

  20. Detection of Repeating Earthquakes within the Cascadia Subduction Zone Using 2013-2014 Cascadia Initiative Amphibious Network Data

    Science.gov (United States)

    Kenefic, L.; Morton, E.; Bilek, S.

    2017-12-01

    It is well known that subduction zones create the largest earthquakes in the world, like the magnitude 9.5 Chile earthquake in 1960 or the more recent magnitude 9.1 Japan earthquake in 2011, both of which are among the top five largest earthquakes ever recorded. However, off the coast of the Pacific Northwest region of the U.S., the Cascadia subduction zone (CSZ) remains relatively quiet, and modern seismic instruments have not recorded earthquakes of this size in the CSZ. The last great earthquake, a magnitude 8.7-9.2, occurred in 1700 and is constrained by written reports of the resultant tsunami in Japan and by dating of a drowned forest in the U.S. Previous studies have suggested the margin is most likely segmented along-strike. However, variations in frictional conditions in the CSZ fault zone are not well known. Geodetic modeling indicates that the locked seismogenic zone is likely completely offshore, which may be too far from land seismometers to adequately detect related seismicity. Ocean bottom seismometers, as part of the Cascadia Initiative Amphibious Network, were installed directly above the inferred seismogenic zone, which we use to better detect small interplate seismicity. Using the subspace detection method, this study seeks to find new seismogenic zone earthquakes. Subspace detection uses multiple previously known event templates concurrently to scan through continuous seismic data. Template events that make up the subspace are chosen from events in existing catalogs that likely occurred along the plate interface. Corresponding waveforms are windowed on the nearby Cascadia Initiative ocean bottom seismometers and coastal land seismometers for scanning. Detections flagged by the scan match the template waveforms above a predefined similarity threshold and are then visually examined to determine if an event is present. The presence of repeating event clusters can indicate persistent seismic patches, likely corresponding to
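The subspace detection idea can be sketched with synthetic data: known template waveforms are stacked, a singular value decomposition extracts a low-dimensional basis, and each candidate window is scored by the fraction of its energy captured by that basis. The waveforms, dimensions and threshold logic below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "templates": noisy copies of one repeating-event waveform
n = 200
base = np.sin(2 * np.pi * 5 * np.arange(n) / 100.0) * np.hanning(n)
templates = np.stack([base + 0.1 * rng.standard_normal(n) for _ in range(6)])

# Build the subspace from the first d left singular vectors of the
# template matrix (each template is a column)
U, s, _ = np.linalg.svd(templates.T, full_matrices=False)
d = 2
Ud = U[:, :d]

def detection_statistic(window):
    """Fraction of window energy captured by the template subspace (0..1);
    a detection is declared when this exceeds a predefined threshold."""
    w = window / np.linalg.norm(window)
    return float(np.sum((Ud.T @ w) ** 2))

# A hidden repeat of the event scores high; pure noise scores low
signal = base + 0.2 * rng.standard_normal(n)
noise = rng.standard_normal(n)
```

In practice the statistic is evaluated on sliding windows of continuous data, and windows above the threshold are passed on for visual review, as described in the record.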

  1. The Australian Seismometers in Schools Network: promoting geoscience to high school students through real-time earthquake data recording

    Science.gov (United States)

    Sambridge, Malcolm; Balfour, Natalie; Salmon, Michelle; O'Neill, Craig

    2013-04-01

    The Australian Seismometers in Schools program (AuSIS) has just completed year one of its initial four-year program. The year has been filled with excitement as we completed installing pilot instruments in schools, launched the program nationally and received over 110 "Expressions of Interest" from schools around Australia. The data quality has exceeded expectations, with schools recording local earthquakes down to magnitude 1 as well as large distant earthquakes. Some students participate in the program by looking up earthquake locations on maps and learning about geography, while other more advanced students have been investigating the frequency characteristics and sources of noise at their school. Both students and the schools are particularly proud that their instrument is contributing to the global scientific community and are actively incorporating seismology into the school curriculum. AuSIS is funded by the Education component of the AuScope Australian Geophysical Observing System. By mid-2014 we will have built a network of 40 seismometers in high schools across the nation to provide real-time monitoring of the Australian continent and raise awareness of geoscience through observing our dynamic earth in motion. This program is unique among seismometers-in-schools programs in that it uses professional seismometers to provide research-quality data to the seismological community. The AuSIS project's educational aims are to: • Raise community awareness of earthquakes; • Raise awareness of seismology and geoscience, as a field of study; • Promote science as a possible career; • Provide a tool to teachers to assist in teaching physics and earth science. The data the schools collect are useful to researchers and, given the high quality of the instruments, will complement networks run by government and state agencies; the data will be stored at internationally accessible and supported data management centres, such as IRIS. Data collected during the pilot program have provided clear

  2. Evaluating Australian football league player contributions using interactive network simulation.

    Science.gov (United States)

    Sargent, Jonathan; Bedford, Anthony

    2013-01-01

    This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution (and, hence, an optimal team) could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
    Key points:
    • A simulated interaction matrix for Australian Rules football players is proposed.
    • The simulations were carried out by fitting unique negative binomial distributions to each player pairing in a side.
    • Eigenvector centrality was calculated for each player in a simulated matrix, then for the team.
    • The team centrality measure adequately predicted the team's winning margin.
    • A player's net effect on margin could hence be estimated by replacing him in
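The eigenvector centrality rating used in this record can be computed by power iteration on a symmetric interaction matrix: a player's score is proportional to the scores of the players they interact with, so interactions with prominent teammates count for more. The five-player matrix below is a made-up miniature, not data from the study:

```python
import numpy as np

# Hypothetical symmetric interaction matrix for a five-player mini-team:
# entry (i, j) counts effective interactions between players i and j
A = np.array([
    [0, 4, 2, 1, 0],
    [4, 0, 3, 2, 1],
    [2, 3, 0, 5, 2],
    [1, 2, 5, 0, 3],
    [0, 1, 2, 3, 0],
], dtype=float)

def eigenvector_centrality(A, iters=200):
    """Power iteration for the leading eigenvector of a non-negative
    symmetric matrix; links to well-connected players are rewarded."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

c = eigenvector_centrality(A)
ranking = np.argsort(c)[::-1]   # most central player first
```

Repeating this over many simulated interaction matrices, as the paper does, yields dynamic ratings that can feed the margin predictions.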

  3. Simulating dam-breach flood scenarios of the Tangjiashan landslide dam induced by the Wenchuan Earthquake

    Directory of Open Access Journals (Sweden)

    X. Fan

    2012-10-01

    Full Text Available Floods from failures of landslide dams can pose a hazard to people and property downstream, which has to be rapidly assessed and mitigated in order to reduce the potential risk. The Tangjiashan landslide dam, induced by the Mw 7.9 2008 Wenchuan earthquake, had impounded the largest lake in the earthquake-affected area, with an estimated volume of 3 × 10^8 m^3, and the potential catastrophic dam breach posed a serious threat to more than 2.5 million people in downstream towns and Mianyang city, located 85 km downstream. Chinese authorities had to evacuate parts of the city until the Tangjiashan landslide dam was artificially breached by a spillway and the lake was drained. We propose an integrated approach to simulate the dam-breach floods for a number of possible scenarios, to evaluate the severity of the threat to Mianyang city. Firstly, the physically based BREACH model was applied to predict the flood hydrographs at the dam location, which were calibrated with observational data of the flood resulting from the artificial breaching. The output hydrographs from this model were input into the 1-D/2-D SOBEK hydrodynamic model to simulate the spatial variations in flood parameters. The simulated flood hydrograph, peak discharge and peak arrival time at the downstream towns fit the observations. Thus this approach is capable of providing reliable predictions for decision makers to determine mitigation plans. The sensitivity analysis of the BREACH model input parameters reveals that the average grain size, the unit weight and the porosity of the dam materials are the most sensitive parameters. The variability of the dam material properties causes a large uncertainty in the estimation of the peak flood discharge and peak arrival time, but has little influence on the flood inundation area and flood depth downstream. The effect of cascading breaches of smaller dams downstream of the Tangjiashan dam was

  4. Earthquake-related variation in Schumann Resonance (SR) spectra and Q-bursts as simulated with a global TDTE network

    Science.gov (United States)

    Yu, H.; Williams, E. R.

    2014-12-01

    The monitoring of earthquakes with SR has been reported by Nickolaenko and Hayakawa (2006, 2014) and Hayakawa (2005). Despite the presence of many SR observatories globally, observations of SR anomalies caused by earthquakes are rare, and the physical mechanism for the SR anomaly is not clear; further attention to methods for observing SR anomalies caused by earthquakes is needed. A simulation approach based on Nelson's 2-D Telegraph Equation (TDTE) network (Nelson, MIT doctoral thesis, 1967) is developed. The Earth-ionosphere cavity is discretized into 24 × 24 tesserae. This network approach is more flexible than an analytical model, especially for a model with day-night asymmetry. The relation between the magnitude of the anomaly and the geometrical arrangement of source, receiver and disturbed zone is discussed for the uniform model. The perturbed zone size is computed according to the estimated size of the earthquake preparation zone; for example, the radius of the perturbed zone is about 1000 km when the earthquake magnitude is about Ms = 7.0. The intensity variations for the first four SR modes are compared between perturbed and unperturbed models. In addition, the spectral characteristics at different distances between source and disturbed zone are analysed. Interestingly, the electric field shows a different variation than the magnetic field in response to the localized perturbation. For the uniform model with a single Q-burst source, when the height of the local ionosphere is decreased, the electric field increases by nearly 50% in intensity in the perturbed zone. The magnetic response, in contrast, is far less pronounced, showing almost no variation. For multisource excitation, however, the electric field and magnetic field both show a dramatic response, reaching nearly 100% variation for some special modes, and the large variation is not restricted to the perturbed zone. The variations show complicated
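For orientation, the mode frequencies discussed in this record follow from the classical eigenfrequency formula for an ideal lossless Earth-ionosphere cavity, f_n = (c / 2πa) √(n(n+1)). This textbook relation is background context, not part of the TDTE simulation itself:

```python
import math

C = 2.998e8   # speed of light, m/s
A = 6.371e6   # Earth radius, m

def ideal_sr_frequency(n):
    """Eigenfrequency of mode n for an ideal lossless Earth-ionosphere
    cavity. Observed Schumann resonances sit lower (about 7.8, 14, 20
    and 26 Hz) because the real cavity is lossy."""
    return C / (2.0 * math.pi * A) * math.sqrt(n * (n + 1))

first_four_modes = [ideal_sr_frequency(n) for n in range(1, 5)]
```

The ideal first mode comes out near 10.6 Hz; the gap between this and the observed 7.8 Hz is one reason numerical cavity models such as the TDTE network are needed.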

  5. Developed hydraulic simulation model for water pipeline networks

    Directory of Open Access Journals (Sweden)

    A. Ayad

    2013-03-01

    Full Text Available A numerical method that uses linear graph theory is presented for both steady-state and extended-period simulation in a pipe network, including its hydraulic components (pumps, valves, junctions, etc.). The developed model is based on the Extended Linear Graph Theory (ELGT) technique. This technique is modified to include new network components such as flow control valves and tanks, and is also expanded for extended-period simulation (EPS). A newly modified method for calculating updated flows, improving the convergence rate, is introduced. Both benchmark and actual networks are analyzed to check the reliability of the proposed method. The results demonstrate the good performance of the proposed method.
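The iterative flow balancing that any pipe-network solver performs can be illustrated with the classical Hardy Cross method, a simpler scheme than the linear graph technique of this record, shown here only to make the idea concrete. The two-pipe loop and resistance values are invented:

```python
# Classical Hardy Cross loop correction (not the ELGT method of the
# record above): assumed pipe flows are corrected until the head
# losses h = r * Q**n sum to zero around each loop.

def hardy_cross(resistances, flows, loop_signs, n=2.0, iters=50):
    """Iteratively corrects assumed flows in one loop; loop_signs give
    each pipe's direction when traversing the loop."""
    Q = list(flows)
    for _ in range(iters):
        num = sum(s * r * abs(q) ** (n - 1) * q
                  for r, q, s in zip(resistances, Q, loop_signs))
        den = sum(n * r * abs(q) ** (n - 1)
                  for r, q in zip(resistances, Q))
        dq = -num / den                      # Newton-style loop correction
        Q = [q + s * dq for q, s in zip(Q, loop_signs)]
    return Q

# Hypothetical loop: two parallel pipes sharing a total flow of 1.0
r = [4.0, 1.0]        # resistance coefficients
signs = [1.0, -1.0]   # traversal direction around the loop
Q = hardy_cross(r, flows=[0.5, 0.5], loop_signs=signs)
```

For these values the balanced split is Q = [1/3, 2/3], since 4·(1/3)² equals 1·(2/3)²; faster-converging flow updates, as in the record, refine exactly this kind of iteration.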

  6. Three-dimensional dynamic rupture simulations across interacting faults: The Mw7.0, 2010, Haiti earthquake

    Science.gov (United States)

    Douilly, R.; Aochi, H.; Calais, E.; Freed, A. M.

    2015-02-01

    The mechanisms controlling rupture propagation between fault segments during a large earthquake are key to the hazard posed by fault systems. Rupture initiation on a smaller fault sometimes transfers to a larger fault, resulting in a significant event (e.g., the 2002 M7.9 Denali, USA, and 2010 M7.1 Darfield, New Zealand, earthquakes). In other cases rupture is constrained to the initial fault and does not transfer to nearby faults, resulting in events of more moderate magnitude. This was the case for the 1989 M6.9 Loma Prieta and 2010 M7.0 Haiti earthquakes, which initiated on reverse faults abutting against a major strike-slip plate boundary fault but did not propagate onto it. Here we investigate the rupture dynamics of the Haiti earthquake, seeking to understand why rupture propagated across two segments of the Léogâne fault but did not propagate to the adjacent Enriquillo Plantain Garden Fault, the major 200-km-long plate boundary fault cutting through southern Haiti. We use a finite element model to simulate propagation of rupture on the Léogâne fault, varying friction and background stress to determine the parameter set that best explains the observed earthquake sequence, in particular the ground displacement. The two slip patches inferred from finite fault inversions are explained by the successive rupture of two fault segments oriented favorably with respect to the rupture propagation, while the geometry of the Enriquillo fault did not allow shear stress to reach failure.

  7. Ground motion modeling of Hayward fault scenario earthquakes II:Simulation of long-period and broadband ground motions

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B T; Graves, R W; Rodgers, A; Brocher, T M; Simpson, R W; Dreger, D; Petersson, N A; Larsen, S C; Ma, S; Jachens, R C

    2009-11-04

    We simulate long-period (T > 1.0-2.0 s) and broadband (T > 0.1 s) ground motions for 39 scenario earthquakes (Mw 6.7-7.2) involving the Hayward, Calaveras, and Rodgers Creek faults. For rupture on the Hayward fault we consider the effects of creep on coseismic slip using two different approaches, both of which reduce the ground motions compared with neglecting the influence of creep. Nevertheless, the scenario earthquakes generate strong shaking throughout the San Francisco Bay area, with about 50% of the urban area experiencing MMI VII or greater for the magnitude 7.0 scenario events. Long-period simulations of the 2007 Mw 4.18 Oakland and 2007 Mw 4.5 Alum Rock earthquakes show that the USGS Bay Area Velocity Model version 08.3.0 permits simulation of the amplitude and duration of shaking throughout the San Francisco Bay area, with the greatest accuracy in the Santa Clara Valley (San Jose area). The ground motions exhibit a strong sensitivity to the rupture length (or magnitude), hypocenter (or rupture directivity), and slip distribution. The ground motions display a much weaker sensitivity to the rise time and rupture speed. Peak velocities, peak accelerations, and spectral accelerations from the synthetic broadband ground motions are, on average, slightly higher than the Next Generation Attenuation (NGA) ground-motion prediction equations. We attribute at least some of this difference to the relatively narrow width of the Hayward fault ruptures. The simulations suggest that the Spudich and Chiou (2008) directivity corrections to the NGA relations could be improved by including a dependence on the rupture speed and increasing the areal extent of rupture directivity with period. The simulations also indicate that the NGA relations may under-predict amplification in shallow sedimentary basins.

  8. Simulation studies of a wide area health care network.

    Science.gov (United States)

    McDaniel, J G

    1994-01-01

    There is an increasing number of efforts to install wide area health care networks. Some of these networks are being built to support several applications over a wide user base consisting primarily of medical practices, hospitals, pharmacies, medical laboratories, payors, and suppliers. Although on-line, multi-media telecommunication is desirable for some purposes such as cardiac monitoring, store-and-forward messaging is adequate for many common, high-volume applications. Laboratory test results and payment claims, for example, can be distributed using electronic messaging networks. Several network prototypes have been constructed to determine the technical problems and to assess the effectiveness of electronic messaging in wide area health care networks. Our project, Health Link, developed prototype software that was able to use the public switched telephone network to exchange messages automatically, reliably and securely. The network could be configured to accommodate the many different traffic patterns and cost constraints of its users. Discrete event simulations were performed on several network models. Canonical star and mesh networks, composed of nodes operating at steady state under equal loads, were modeled. Both topologies were found to support the throughput of a generic wide area health care network. The mean message delivery time of the mesh network was found to be less than that of the star network. Further simulations were conducted for a realistic large-scale health care network consisting of 1,553 doctors, 26 hospitals, four medical labs, one provincial lab and one insurer. Two network topologies were investigated: one using predominantly peer-to-peer communication, the other using client-server communication.(ABSTRACT TRUNCATED AT 250 WORDS)
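The star-versus-mesh delay comparison in this record can be reproduced in miniature with a toy discrete-event model: messages arrive as a Poisson stream and pass through one or more store-and-forward relays with FIFO queueing. The arrival rate, service time and hop counts below are invented, not Health Link parameters:

```python
import random

random.seed(7)

def mean_delay(n_hops, n_messages=2000, rate=1.0, service=0.3):
    """Mean end-to-end delay for store-and-forward messages relayed
    through n_hops FIFO nodes (a toy stand-in for the discrete event
    models described above)."""
    t = 0.0
    free_at = [0.0] * n_hops     # time at which each relay next goes idle
    total = 0.0
    for _ in range(n_messages):
        t += random.expovariate(rate)       # Poisson message arrivals
        clock = t
        for h in range(n_hops):
            start = max(clock, free_at[h])  # queue behind earlier traffic
            clock = start + service         # store-and-forward time
            free_at[h] = clock
        total += clock - t
    return total / n_messages

# A star routes every message through the hub (two hops: in, then out);
# a mesh with a direct peer-to-peer link needs only one hop.
star_delay = mean_delay(n_hops=2)
mesh_delay = mean_delay(n_hops=1)
```

Even this crude model reproduces the qualitative finding of the record: the direct mesh path delivers faster than the two-hop star relay.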

  9. Importance of simulation tools for the planning of optical network

    Science.gov (United States)

    Martins, Indayara B.; Martins, Yara; Rudge, Felipe; Moschim, Edson

    2015-10-01

    The main proposal of this work is to show the importance of using simulation tools to design optical networks. The simulation method supports the investigation of several system and network parameters, such as bit error rate and blocking probability, as well as physical layer issues such as attenuation, dispersion, and nonlinearities, as these are all important for evaluating and validating the operability of optical networks. The work was divided into two parts: firstly, physical layer preplanning was proposed for the distribution of amplifiers and for compensating the attenuation and dispersion effects in span transmission; in this part, we also analyzed the quality of the transmitted signal. In the second part, an analysis of the transport layer was completed, proposing wavelength distribution planning according to the total utilization of each link. The main network parameters used to evaluate the transport and physical layer design were delay (latency), blocking probability, and bit error rate (BER). This work was carried out with commercially available simulation tools.
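Two of the quantities this record evaluates, BER and the span power budget, have compact standard expressions that a planning tool computes constantly: BER = ½ erfc(Q/√2) for a binary receiver, and received power = launch power minus attenuation times length plus amplifier gain. The numeric values below are illustrative, not from the paper:

```python
import math

def ber_from_q(q):
    """BER of a binary optical receiver from its Q-factor:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def received_power_dbm(p_launch_dbm, alpha_db_per_km, length_km, gain_db=0.0):
    """Simple span power budget: launch power minus fiber attenuation
    plus any amplifier gain (all values illustrative)."""
    return p_launch_dbm - alpha_db_per_km * length_km + gain_db

ber_at_design_point = ber_from_q(6.0)   # the classic Q = 6 target (~1e-9)
p_rx = received_power_dbm(0.0, 0.2, 80.0, gain_db=16.0)  # 0 dBm over 80 km
```

Here a 16 dB amplifier exactly offsets the 0.2 dB/km × 80 km span loss, the kind of amplifier-placement trade-off the preplanning stage of the record addresses.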

  10. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
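A linear memory model of the kind this record describes can be sketched as a fixed per-core overhead plus per-neuron and per-synapse object costs, with objects distributed across cores. All constants below are invented placeholders, not the paper's fitted values:

```python
def memory_per_core_gb(n_neurons, synapses_per_neuron, n_cores,
                       m_base=0.2, m_neuron=1.5e-6, m_synapse=4.0e-8):
    """Linear memory model: fixed per-core overhead plus locally stored
    neuron and synapse objects when the network is split evenly over
    n_cores. All coefficients are illustrative placeholders (GB)."""
    neurons_local = n_neurons / n_cores
    synapses_local = n_neurons * synapses_per_neuron / n_cores
    return m_base + neurons_local * m_neuron + synapses_local * m_synapse

# A local cortical network (10^5 neurons, 10^4 synapses each) on 1024 cores
local_gb = memory_per_core_gb(1e5, 1e4, 1024)
# A brain-scale model two orders of magnitude larger on the same machine
brain_gb = memory_per_core_gb(1e7, 1e4, 1024)
```

Even with made-up coefficients the model shows the qualitative effect in the record: the synapse term dominates at brain scale and per-core memory grows by more than an order of magnitude, which is exactly the saturation such a model lets one predict before changing the code.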

  11. Products and Services Available from the Southern California Earthquake Data Center (SCEDC) and the Southern California Seismic Network (SCSN)

    Science.gov (United States)

    Yu, E.; Bhaskaran, A.; Chen, S.; Chowdhury, F. R.; Meisenhelter, S.; Hutton, K.; Given, D.; Hauksson, E.; Clayton, R. W.

    2010-12-01

    Currently the SCEDC archives continuous and triggered data from nearly 5000 data channels from 425 SCSN-recorded stations, processing and archiving an average of 12,000 earthquakes each year. The SCEDC provides public access to these earthquake parametric and waveform data through its website www.data.scec.org and through client applications such as STP and DHI. This poster will describe the most significant developments at the SCEDC in the past year. Updated hardware: ● The SCEDC has more than doubled its waveform file storage capacity by migrating to 2 TB disks. New data holdings: ● Waveform data: Beginning Jan 1, 2010 the SCEDC began continuously archiving all high-sample-rate strong-motion channels. All seismic channels recorded by SCSN are now continuously archived and available at SCEDC. ● Portable data from the El Mayor Cucapah 7.2 sequence: Seismic waveforms from portable stations installed by researchers (contributed by Elizabeth Cochran, Jamie Steidl, and Octavio Lazaro-Mancilla) have been added to the archive and are accessible through STP either as continuous data or associated with events in the SCEDC earthquake catalog. This additional data will help SCSN analysts and researchers improve event locations from the sequence. ● Real-time GPS solutions from the El Mayor Cucapah 7.2 event: Three-component 1 Hz seismograms of California Real Time Network (CRTN) GPS stations, from the April 4, 2010, magnitude 7.2 El Mayor-Cucapah earthquake, are available in SAC format at the SCEDC. These time series were created by Brendan Crowell, Yehuda Bock, the project PI, and Mindy Squibb at SOPAC using data from the CRTN. The El Mayor-Cucapah earthquake demonstrated definitively the power of real-time high-rate GPS data: they measure dynamic displacements directly, they do not clip and they are also able to detect the permanent (coseismic) surface deformation. ● Triggered data from the Quake Catcher Network (QCN) and Community Seismic Network (CSN): The SCEDC in

  12. Distributed dynamic simulations of networked control and building performance applications.

    Science.gov (United States)

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum possible energy consumption; this approach is generally referred to as the Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment capable of representing the BACS architecture in simulation by run-time coupling of two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.

  13. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Glaser Johann

    2008-01-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques, and cost-time factors limit test beds to study wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs, such as capturing power consumption at various levels of granularity, support for mobility and environmental dynamics, and the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  14. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques, and cost-time factors limit test beds to study wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs, such as capturing power consumption at various levels of granularity, support for mobility and environmental dynamics, and the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  15. Discrimination Analysis of Earthquakes and Man-Made Events Using ARMA Coefficients Determination by Artificial Neural Networks

    International Nuclear Information System (INIS)

    AllamehZadeh, Mostafa

    2011-01-01

    A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions, with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of the seismic records. Second, we use the QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then fed to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0–6.5) recorded by the Iranian National Seismic Network (INSN). Fully correct (100%) decisions were obtained between site explosions and some non-site events. The above approach to event discrimination is very flexible, as several 3C stations can be combined.

  16. Discrimination Analysis of Earthquakes and Man-Made Events Using ARMA Coefficients Determination by Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    AllamehZadeh, Mostafa, E-mail: dibaparima@yahoo.com [International Institute of Earthquake Engineering and Seismology (Iran, Islamic Republic of)

    2011-12-15

    A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions, with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of the seismic records. Second, we use the QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then fed to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0-6.5) recorded by the Iranian National Seismic Network (INSN). Fully correct (100%) decisions were obtained between site explosions and some non-site events. The above approach to event discrimination is very flexible, as several 3C stations can be combined.
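
    The pipeline in the record above — fit autoregressive coefficients to a windowed signal and use them as classification features — can be illustrated with a toy sketch. Synthetic narrowband signals and a nearest-centroid rule stand in for the paper's QNN; the model order `p=4` and the class frequencies are arbitrary assumptions:

    ```python
    import numpy as np

    def ar_coeffs(x, p=4):
        """Least-squares AR(p) fit: x[t] ≈ sum_k a[k] * x[t-k-1].
        The coefficient vector serves as a spectral feature (an illustrative
        stand-in for the ARMA features in the paper)."""
        X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
        y = x[p:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return a

    rng = np.random.default_rng(0)

    def make_signal(freq, n=512):
        # Noisy sinusoid: a crude surrogate for a band-limited seismic phase.
        t = np.arange(n)
        return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(n)

    # Two synthetic "classes" with different dominant frequencies.
    quakes = [ar_coeffs(make_signal(0.05)) for _ in range(20)]
    blasts = [ar_coeffs(make_signal(0.15)) for _ in range(20)]
    c_q, c_b = np.mean(quakes, axis=0), np.mean(blasts, axis=0)

    def classify(x):
        a = ar_coeffs(x)
        return ("earthquake" if np.linalg.norm(a - c_q) < np.linalg.norm(a - c_b)
                else "explosion")

    print(classify(make_signal(0.05)))
    ```

    The AR coefficients of a sinusoid encode its frequency (a pure tone satisfies x[t] = 2·cos(ω)·x[t-1] − x[t-2]), which is why even this crude feature separates the two classes.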

  17. Adaptive Importance Sampling Simulation of Queueing Networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.; Rubinstein, N.; Rubinstein, Reuven Y.

    2000-01-01

    In this paper, a method is presented for the efficient estimation of rare-event (overflow) probabilities in Jackson queueing networks using importance sampling. The method differs in two ways from methods discussed in most earlier literature: the change of measure is state-dependent, i.e., it is a
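
    The abstract above is truncated, but its core technique — importance sampling with a change of measure for rare overflow probabilities — can be sketched for the simplest case: an M/M/1 queue with the classic state-independent swap of arrival and service rates. The paper's contribution is precisely a state-dependent refinement of this kind of change of measure; everything below is a textbook baseline, not the paper's method:

    ```python
    import random

    def overflow_prob_is(lam, mu, N, runs=20000, seed=1):
        """Estimate P(queue length hits N before emptying | start at 1) for an
        M/M/1 queue by importance sampling.  Change of measure: swap arrival
        and service rates.  Only the embedded jump chain matters here."""
        random.seed(seed)
        p = lam / (lam + mu)        # original up-step probability
        q = 1.0 - p
        pp = mu / (lam + mu)        # tilted up-step probability (rates swapped)
        qq = 1.0 - pp
        total = 0.0
        for _ in range(runs):
            level, lr = 1, 1.0      # lr = likelihood ratio of the sampled path
            while 0 < level < N:
                if random.random() < pp:   # sample under the tilted measure
                    level += 1
                    lr *= p / pp
                else:
                    level -= 1
                    lr *= q / qq
            if level == N:
                total += lr              # weight successful (overflow) paths
        return total / runs

    est = overflow_prob_is(lam=0.3, mu=1.0, N=15)
    r = 1.0 / 0.3                            # q/p = mu/lam
    exact = (1 - r) / (1 - r ** 15)          # gambler's-ruin formula
    print(est, exact)                        # both around 3.3e-08
    ```

    With the rates swapped, every successful path carries the same likelihood ratio, so the estimator has very low variance — the property that makes this the standard benchmark change of measure.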

  18. A fundamental study, using the Monte-Carlo simulation technique, of the effect of uncertainty in soil properties on the earthquake response of subsurface layers

    International Nuclear Information System (INIS)

    Hata, Akihito; Shiba, Yukio

    2009-01-01

    The standard for Probabilistic Safety Assessment of Nuclear Power Plants against earthquakes, published by the Atomic Energy Society of Japan in 2007, states that the effect of uncertainty in soil properties on the earthquake response of subsurface layers should be assessed by conducting Monte-Carlo simulations of equivalent linear analysis. This paper presents a fundamental study of the effect of uncertainty in dynamic soil properties on the earthquake response with the equivalent linear approach. A series of Monte-Carlo simulations of earthquake response analysis of a simple one-dimensional soil layer model has been conducted, where uncertainty in the initial shear modulus G0 and in the strain dependencies G/G0-γ and h-γ is considered. Through the simulations, it is demonstrated that although the average maximum response of the subsurface top layer increases as the input earthquake motion increases, the coefficient of variation does not necessarily increase, and that the G/G0-γ relationship is the most influential factor among the parameters considered. It is also shown that the maximum response of the ground surface, plotted against the peak frequency of the frequency response function calculated with equivalent linear analysis under converged conditions, distributes around the response spectrum curve of the input earthquake motion, so that the maximum response can be roughly estimated from the response spectrum curve. Finally, the applicability of the two-point-estimate technique is examined by comparison with the Monte-Carlo simulation results. (author)
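
    A stripped-down version of that workflow — sample an uncertain soil property, propagate it through a response calculation, summarize the scatter, and compare against a two-point estimate — might look as follows. The lognormal shear-modulus model, the 30 m uniform layer, and the period formula T = 4H/Vs are illustrative stand-ins for the paper's full equivalent-linear analyses:

    ```python
    import math, random

    def site_period(G, rho=1800.0, H=30.0):
        """Fundamental period T = 4H/Vs of a uniform layer, Vs = sqrt(G/rho).
        A simple response quantity standing in for a full site-response run."""
        return 4.0 * H / math.sqrt(G / rho)

    random.seed(42)
    G_mean, cov = 8.0e7, 0.3                   # Pa; assumed lognormal uncertainty
    sigma_ln = math.sqrt(math.log(1 + cov**2))
    mu_ln = math.log(G_mean) - 0.5 * sigma_ln**2

    # Monte-Carlo: propagate the uncertain modulus through the response model.
    samples = [site_period(random.lognormvariate(mu_ln, sigma_ln))
               for _ in range(5000)]
    mean_T = sum(samples) / len(samples)
    cov_T = (sum((t - mean_T) ** 2 for t in samples) / len(samples)) ** 0.5 / mean_T

    # Two-point estimate: evaluate the model at G_mean +/- one standard deviation.
    g_lo, g_hi = G_mean * (1 - cov), G_mean * (1 + cov)
    tpe_mean = 0.5 * (site_period(g_lo) + site_period(g_hi))
    print(round(mean_T, 3), round(cov_T, 3), round(tpe_mean, 3))
    ```

    Comparing `tpe_mean` with `mean_T` mirrors the paper's closing question: how well does a cheap two-point estimate reproduce the Monte-Carlo statistics?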

  19. 3-D dynamic rupture simulations of the 2016 Kumamoto, Japan, earthquake

    Science.gov (United States)

    Urata, Yumi; Yoshida, Keisuke; Fukuyama, Eiichi; Kubo, Hisahiko

    2017-11-01

    Using 3-D dynamic rupture simulations, we investigated the 2016 Mw7.1 Kumamoto, Japan, earthquake to elucidate why and how the rupture of the main shock propagated successfully, assuming a complicated fault geometry estimated on the basis of the distributions of the aftershocks. The Mw7.1 main shock occurred along the Futagawa and Hinagu faults. Within 28 h before the main shock, three M6-class foreshocks occurred. Their hypocenters were located along the Hinagu and Futagawa faults, and their focal mechanisms were similar to that of the main shock. Therefore, an extensive stress shadow should have been generated on the fault plane of the main shock. First, we estimated the geometry of the fault planes of the three foreshocks as well as that of the main shock based on the temporal evolution of the relocated aftershock hypocenters. We then evaluated the static stress changes on the main shock fault plane that were due to the occurrence of the three foreshocks, assuming elliptical cracks with constant stress drops on the estimated fault planes. The obtained static stress change distribution indicated that Coulomb failure stress change (ΔCFS) was positive just below the hypocenter of the main shock, while the ΔCFS in the shallow region above the hypocenter was negative. Therefore, these foreshocks could encourage the initiation of the main shock rupture and could hinder the propagation of the rupture toward the shallow region. Finally, we conducted 3-D dynamic rupture simulations of the main shock using the initial stress distribution, which was the sum of the static stress changes caused by these foreshocks and the regional stress field. Assuming a slip-weakening law with uniform friction parameters, we computed 3-D dynamic rupture by varying the friction parameters and the values of the principal stresses. We obtained feasible parameter ranges that could reproduce the characteristic features of the main shock rupture revealed by seismic waveform analyses. We also
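
    The Coulomb failure stress change (ΔCFS) invoked above is a standard quantity. A hedged one-liner with the common sign convention (normal-stress change positive in extension, i.e. unclamping; the effective friction coefficient of 0.4 is a typical assumption, not a value from the paper):

    ```python
    def coulomb_failure_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
        """dCFS = d_tau + mu_eff * d_sigma_n, where d_tau is the shear-stress
        change resolved in the slip direction and d_sigma_n the normal-stress
        change (positive = unclamping).  Positive dCFS brings the fault closer
        to failure; negative values mark a 'stress shadow'."""
        return d_tau + mu_eff * d_sigma_n

    # A foreshock that loads shear stress and unclamps the fault (values in Pa):
    print(coulomb_failure_stress_change(0.1e6, 0.05e6))
    ```

    In the study's terms, positive ΔCFS below the hypocenter encourages rupture initiation, while negative ΔCFS in the shallow region acts as the stress shadow that could hinder up-dip propagation.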

  20. Simulating activation propagation in social networks using the graph theory

    Directory of Open Access Journals (Sweden)

    František Dařena

    2010-01-01

    Full Text Available Social-network formation and analysis is nowadays a focus of intensive research. The objective of the paper is to suggest representing social networks as graphs, applying graph theory to problems connected with studying network-like structures, and to study the spreading activation algorithm for analyzing these structures. The paper presents the process of modeling multidimensional networks by means of directed graphs with several characteristics. The paper also demonstrates the use of the Spreading Activation algorithm as a method for analyzing multidimensional networks, with a main focus on recommender systems. The experiments showed that the choice of the algorithm's parameters is crucial, that some kind of constraint should be included, and that the algorithm is able to provide a stable environment for simulations with networks.
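
    The spreading activation algorithm described above is short enough to sketch directly. The decay factor, firing threshold, and the toy recommender graph are illustrative choices, not the paper's parameters; the threshold plays the role of the "constraint" the experiments found necessary:

    ```python
    def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_steps=10):
        """Iterative spreading activation over a weighted directed graph.
        graph: {node: [(neighbor, weight), ...]}; seeds: {node: activation}.
        Nodes below `threshold` do not fire, which keeps activation bounded."""
        activation = dict(seeds)
        frontier = dict(seeds)
        for _ in range(max_steps):
            nxt = {}
            for node, a in frontier.items():
                if a < threshold:
                    continue                      # constraint: weak nodes stay silent
                for nb, w in graph.get(node, []):
                    nxt[nb] = nxt.get(nb, 0.0) + a * w * decay
            if not nxt:
                break
            for nb, a in nxt.items():
                activation[nb] = activation.get(nb, 0.0) + a
            frontier = nxt
        return activation

    # Toy recommender graph: a user's liked films activate a shared director.
    g = {"user": [("film_a", 1.0), ("film_b", 0.5)],
         "film_a": [("director", 1.0)],
         "film_b": [("director", 1.0)]}
    act = spread_activation(g, {"user": 1.0})
    print(act)
    ```

    Here activation flows from the seed through both films and accumulates on the shared director node, which is exactly the signal a recommender would rank on.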

  1. SiGNet: A signaling network data simulator to enable signaling network inference.

    Science.gov (United States)

    Coker, Elizabeth A; Mitsopoulos, Costas; Workman, Paul; Al-Lazikani, Bissan

    2017-01-01

    Network models are widely used to describe complex signaling systems. Cellular wiring varies in different cellular contexts and numerous inference techniques have been developed to infer the structure of a network from experimental data of the network's behavior. To objectively identify which inference strategy is best suited to a specific network, a gold standard network and dataset are required. However, suitable datasets for benchmarking are difficult to find. Numerous tools exist that can simulate data for transcriptional networks, but these are of limited use for the study of signaling networks. Here, we describe SiGNet (Signal Generator for Networks): a Cytoscape app that simulates experimental data for a signaling network of known structure. SiGNet has been developed and tested against published experimental data, incorporating information on network architecture, and the directionality and strength of interactions to create biological data in silico. SiGNet is the first tool to simulate biological signaling data, enabling an accurate and systematic assessment of inference strategies. SiGNet can also be used to produce preliminary models of key biological pathways following perturbation.

  2. SiGNet: A signaling network data simulator to enable signaling network inference.

    Directory of Open Access Journals (Sweden)

    Elizabeth A Coker

    Full Text Available Network models are widely used to describe complex signaling systems. Cellular wiring varies in different cellular contexts and numerous inference techniques have been developed to infer the structure of a network from experimental data of the network's behavior. To objectively identify which inference strategy is best suited to a specific network, a gold standard network and dataset are required. However, suitable datasets for benchmarking are difficult to find. Numerous tools exist that can simulate data for transcriptional networks, but these are of limited use for the study of signaling networks. Here, we describe SiGNet (Signal Generator for Networks): a Cytoscape app that simulates experimental data for a signaling network of known structure. SiGNet has been developed and tested against published experimental data, incorporating information on network architecture, and the directionality and strength of interactions to create biological data in silico. SiGNet is the first tool to simulate biological signaling data, enabling an accurate and systematic assessment of inference strategies. SiGNet can also be used to produce preliminary models of key biological pathways following perturbation.
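
    The idea of generating in-silico data from a signaling network of known structure can be sketched with a simple relaxation model. This is not SiGNet's actual algorithm — the update rule, the three-node cascade, and the edge strengths below are all invented for illustration:

    ```python
    def simulate_signal(edges, perturb, steps=50, rate=0.5):
        """Generate node activities for a signaling network of known structure:
        each node relaxes toward the weighted sum of its regulators.
        edges: {target: [(source, strength), ...]}, strength < 0 = inhibition.
        `perturb` clamps chosen nodes (e.g. a drugged kinase) to fixed values."""
        nodes = (set(edges)
                 | {s for regs in edges.values() for s, _ in regs}
                 | set(perturb))
        x = {n: perturb.get(n, 0.0) for n in nodes}
        for _ in range(steps):
            for n in nodes:
                if n in perturb:
                    continue                      # clamped by the perturbation
                target = sum(w * x[s] for s, w in edges.get(n, []))
                x[n] += rate * (target - x[n])
        return x

    # Hypothetical three-node cascade with a weak negative feedback edge.
    net = {"MEK": [("RAF", 1.0)], "ERK": [("MEK", 1.0)], "RAF": [("ERK", -0.2)]}
    data = simulate_signal(net, perturb={"RAF": 1.0})
    print({k: round(v, 3) for k, v in data.items()})
    ```

    Because the structure that generated `data` is known exactly, an inference method can be scored objectively against it — the benchmarking role the abstract describes.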

  3. Large scale earthquake simulator of 3-D (simultaneous X-Y-Z direction)

    International Nuclear Information System (INIS)

    Shiraki, Kazuhiro; Inoue, Masao

    1983-01-01

    Japan is a country where earthquakes are frequent; accordingly, it is necessary to examine thoroughly the earthquake safety of important machinery and equipment in nuclear and thermal power plants and chemical plants. For this purpose, aseismatic safety is evaluated by mounting the actual object or a model on a vibration table and vibrating it at magnitudes several times as large as actual earthquakes. The vibration tables used so far could vibrate only in one direction, or in two directions simultaneously, but now a three-dimensional vibration table has been completed which can vibrate in three directions simultaneously, each with an arbitrary waveform. With the advent of this vibration table, aseismatic tests can be carried out using earthquake waves close to actual ones. This vibration table is expected to play a large role in improving the aseismatic reliability of nuclear power machinery and equipment. When a large test body is vibrated on the table, the center of gravity of the test body and the point of action of the vibrating force differ; therefore, rotational motion around the three axes is added to the motion in the three axial directions, and these motions must be controlled so as to realize three-dimensional earthquake motion. The main particulars and construction of the vibration table, the mechanism of three-direction vibration, the control of the table and the results of testing the table are reported. (Kako, I.)

  4. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster-relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexity. The simulation results show that a group of robots can demonstrate highly collaborative behavior on a complex terrain. This study could provide a software simulation platform for testing the individual and group capabilities of robots before robot design and manufacturing. The results of the project therefore have the potential to reduce the cost and improve the efficiency of robot design and building.
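
    The unsupervised learner named in the record — a Kohonen network (self-organizing map) — can be sketched minimally. This is a generic 1-D SOM over 2-D inputs, not the authors' simulator; the grid size, learning-rate schedule, and synthetic "terrain readings" are all assumptions:

    ```python
    import math, random

    def train_som(data, grid=5, steps=2000, lr0=0.5, radius0=2.0, seed=7):
        """Minimal 1-D Kohonen self-organizing map over 2-D inputs."""
        random.seed(seed)
        w = [[random.random(), random.random()] for _ in range(grid)]
        for t in range(steps):
            x = random.choice(data)
            lr = lr0 * (1.0 - t / steps)              # decaying learning rate
            radius = 1.0 + radius0 * (1.0 - t / steps)  # shrinking neighborhood
            # Best-matching unit: the map node closest to the sample.
            bmu = min(range(grid),
                      key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
            for i in range(grid):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
                w[i][0] += lr * h * (x[0] - w[i][0])
                w[i][1] += lr * h * (x[1] - w[i][1])
        return w

    # Two clusters of synthetic readings; the trained map should cover both.
    random.seed(0)
    pts = ([(random.uniform(0.0, 0.2), random.uniform(0.0, 0.2)) for _ in range(50)]
           + [(random.uniform(0.8, 1.0), random.uniform(0.8, 1.0)) for _ in range(50)])
    weights = train_som(pts)
    print(weights)
    ```

    After training, the end units of the chain settle near the two clusters — the kind of shared, unsupervised summary of the environment that a robot group could build collectively.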

  5. Estimation of peak ground accelerations for Mexican subduction zone earthquakes using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Silvia R; Romo, Miguel P; Mayoral, Juan M [Instituto de Ingenieria, Universidad Nacional Autonoma de Mexico, Mexico D.F. (Mexico)

    2007-01-15

    An extensive analysis of the strong ground motion Mexican database was conducted using Soft Computing (SC) techniques. A Neural Network (NN) is used to estimate both orthogonal components of the horizontal (PGAh) and vertical (PGAv) peak ground accelerations measured at rock sites during Mexican subduction zone earthquakes. The work discusses the development, training, and testing of this neural model. The attenuation phenomenon was characterized in terms of magnitude, epicentral distance and focal depth. Neural approximators were used instead of traditional regression techniques due to their flexibility in dealing with uncertainty and noise. NN predictions follow measured responses closely, exhibiting forecasting capabilities better than those of most established attenuation relations for the Mexican subduction zone. The NN was also assessed against subduction zones in Japan and North America. For the database used in this paper, the residuals of the NN and of the better-fitted regression approach are compared.
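
    The regression-by-network idea above can be sketched with a tiny two-layer net trained by gradient descent on synthetic magnitude/distance/depth data. The attenuation form used to generate the data and all hyperparameters are invented for illustration — the paper's NN, database, and attenuation characterization are far richer:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic "records": magnitude, log epicentral distance, focal depth.
    n = 400
    M = rng.uniform(5.0, 8.0, n)
    lnR = np.log(rng.uniform(20.0, 300.0, n))
    h = rng.uniform(5.0, 60.0, n)
    # Hypothetical attenuation form, used only to manufacture training data:
    y = 0.9 * M - 1.2 * lnR - 0.005 * h + 0.1 * rng.standard_normal(n)  # ~ln PGA

    X = np.column_stack([M, lnR, h])
    X = (X - X.mean(0)) / X.std(0)          # normalize features

    W1 = 0.5 * rng.standard_normal((3, 8)); b1 = np.zeros(8)
    W2 = 0.5 * rng.standard_normal(8); b2 = 0.0

    losses, lr = [], 0.01
    for _ in range(500):
        H = np.tanh(X @ W1 + b1)            # hidden layer
        pred = H @ W2 + b2
        err = pred - y
        losses.append(float(np.mean(err ** 2)))
        # Backpropagation for the two-layer network.
        gW2 = H.T @ err / n; gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H ** 2)
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

    print(round(losses[0], 3), round(losses[-1], 3))  # MSE should drop
    ```

    The flexibility the abstract credits to neural approximators shows up here as the hidden tanh layer, which can bend the attenuation surface in ways a fixed regression form cannot.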

  6. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
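
    The exact-stochastic half of the hybrid strategy above is the classic Gillespie SSA, which fits in a few lines for a birth-death "network". HRSSA's contribution is combining this with a deterministic treatment of fast reactions; only the exact part is shown, and the rate values are arbitrary:

    ```python
    import random

    def ssa(x0, rates, t_end, seed=11):
        """Exact Gillespie SSA for the birth-death system:
        0 -> P at rate k1;  P -> 0 at rate k2 * P."""
        random.seed(seed)
        k1, k2 = rates
        t, x = 0.0, x0
        while True:
            a1, a2 = k1, k2 * x           # reaction propensities
            a0 = a1 + a2
            t += random.expovariate(a0)   # exponential time to next reaction
            if t > t_end:
                return x
            x += 1 if random.random() < a1 / a0 else -1

    # Stationary mean copy number should approach k1/k2 = 20.
    final = [ssa(0, (10.0, 0.5), t_end=20.0, seed=s) for s in range(200)]
    print(sum(final) / len(final))
    ```

    For this system the exact stationary distribution is Poisson with mean k1/k2, which gives a clean correctness check for any SSA implementation.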

  7. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer; in other words, the potential of neural networks in control applications is given higher priority than a detailed ... a Gauss-Newton search direction is applied. 3) Amongst numerous model types often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing an input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System Identification, Prediction, Simulation and Control of a dynamic, non-linear and noisy process. Further, the difficulties of controlling a practical non-linear laboratory process satisfactorily with a traditional controller are overcome by using a trained neural network to perform non-linear System ...

  8. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior, using computer simulations of two different neural network models. While the common approach to activate a passive network is done by introducing

  9. ASSESSING URBAN STREETS NETWORK VULNERABILITY AGAINST EARTHQUAKE USING GIS – CASE STUDY: 6TH ZONE OF TEHRAN

    Directory of Open Access Journals (Sweden)

    A. Rastegar

    2017-09-01

    Full Text Available Great earthquakes cause huge damage to human life. Street-network vulnerability makes rescue operations encounter serious difficulties, especially in the first 72 hours after the incident. Today, the physical expansion and high density of great cities — due to narrow access roads, large distances from medical care centers and location in areas with high seismic risk — will lead to a perilous and unpredictable situation in case of an earthquake. Zone #6 of Tehran, with a population of 229,980 (3.6% of the city population) and an area of 20 km2 (3.2% of the city area), is one of the main municipal zones of Tehran (Iran Center of Statistics, 2006). Major land uses, like ministries, embassies, universities, general hospitals and medical centers, big financial firms and so on, manifest the high importance of this region on the local and national scale. In this paper, by employing indexes such as access to medical centers, street inclusion, building and population density, land use, PGA and building quality, the vulnerability degree of street networks in Zone #6 against earthquakes is calculated by overlaying maps and data in combination with the IHWP method and GIS. This article concludes that buildings alongside streets with high population and building density, low building quality, located far from rescue centers and with a high level of inclusion represent a high rate of vulnerability compared with other buildings. Also, moving from the north to the south of the zone, the vulnerability increases. Likewise, highways and streets with substantial width and low building and population density hold little vulnerability.
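
    The overlay step in the study above reduces to a weighted sum of normalized criterion layers per map cell. A minimal sketch — the criterion names echo the paper, but the weights and cell values are illustrative, not the IHWP weights actually derived:

    ```python
    def vulnerability_index(layers, weights):
        """Weighted overlay of normalized criterion layers (values in [0, 1],
        1 = worst).  Returns a vulnerability score per raster cell."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights sum to one
        cells = next(iter(layers.values()))
        return {cell: sum(weights[k] * layers[k][cell] for k in weights)
                for cell in cells}

    layers = {   # toy two-cell raster; each criterion pre-normalized to [0, 1]
        "access_to_medical": {"cell_a": 0.9, "cell_b": 0.2},
        "building_density":  {"cell_a": 0.8, "cell_b": 0.3},
        "building_quality":  {"cell_a": 0.7, "cell_b": 0.4},
        "pga":               {"cell_a": 0.6, "cell_b": 0.6},
    }
    weights = {"access_to_medical": 0.3, "building_density": 0.25,
               "building_quality": 0.25, "pga": 0.2}
    scores = vulnerability_index(layers, weights)
    print(scores)
    ```

    In a GIS this same computation runs over raster layers; the method (IHWP in the paper) supplies the weights, and the overlay supplies the map of relative vulnerability.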

  10. Co-Seismic Effect of the 2011 Japan Earthquake on the Crustal Movement Observation Network of China

    Directory of Open Access Journals (Sweden)

    Shaomin Yang

    2013-01-01

    Full Text Available Great earthquakes introduce measurable co-seismic displacements over regions hundreds to thousands of kilometers in width, which, if not accounted for, may significantly bias the long-term surface velocity field constrained by GPS observations performed during a period encompassing the event. Here, we first present an estimation of the far-field co-seismic offsets associated with the 2011 Japan Mw 9.0 earthquake, using GPS measurements from the Crustal Movement Observation Network of China (CMONOC) in North China. The uncertainties of the co-seismic offsets, either at cGPS stations or at campaign sites, are better than 5-6 mm on average. We compare three methods to constrain the co-seismic offsets at the campaign sites in northeastern China: (1) interpolating cGPS co-seismic offsets, (2) estimating them from sparsely sampled time series, and (3) predicting them using a well-constrained slip model. We show that the interpolation of cGPS co-seismic offsets onto the campaign sites yields the best co-seismic offset solution for these sites. The source model gives a consistent prediction based on finite dislocation in a layered spherical Earth, which agrees with the best prediction with discrepancies of 2-10 mm for 32 campaign sites. Thus, the co-seismic offset model prediction is still a reasonable choice if a cGPS network with good coverage is not available, as for a very active region like the Tibetan Plateau, in which numerous campaign GPS sites were displaced by recent large earthquakes.
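
    Method (1) above — interpolating continuous-GPS offsets onto campaign sites — can be sketched with inverse-distance weighting. This is one simple choice of interpolator for illustration; the record does not state which scheme the authors used, and the station coordinates and offsets below are invented:

    ```python
    def idw_offset(campaign_xy, cgps):
        """Inverse-distance-weighted (power-2) interpolation of co-seismic
        offsets from continuous GPS stations onto one campaign site.
        cgps: list of ((x_km, y_km), offset_mm)."""
        num = den = 0.0
        for (x, y), off in cgps:
            d2 = (x - campaign_xy[0]) ** 2 + (y - campaign_xy[1]) ** 2
            if d2 == 0.0:
                return off              # campaign site coincides with a station
            w = 1.0 / d2
            num += w * off
            den += w
        return num / den

    # Hypothetical stations with far-field co-seismic offsets (mm):
    stations = [((0.0, 0.0), 30.0), ((100.0, 0.0), 10.0), ((0.0, 100.0), 20.0)]
    print(round(idw_offset((50.0, 50.0), stations), 2))
    ```

    The same correction is then subtracted from the campaign time series so the long-term velocity field is not biased by the earthquake's step.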

  11. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively long turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES to fail to scale (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly-regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.

  12. An Offshore Geophysical Network in the Pacific Northwest for Earthquake and Tsunami Early Warning and Hazard Research

    Science.gov (United States)

    Wilcock, W. S. D.; Schmidt, D. A.; Vidale, J. E.; Harrington, M.; Bodin, P.; Cram, G.; Delaney, J. R.; Gonzalez, F. I.; Kelley, D. S.; LeVeque, R. J.; Manalang, D.; McGuire, C.; Roland, E. C.; Tilley, J.; Vogl, C. J.; Stoermer, M.

    2016-12-01

    The Cascadia subduction zone hosts catastrophic earthquakes every few hundred years. On land, there are extensive geophysical networks available to monitor the subduction zone, but since the locked portion of the plate boundary lies mostly offshore, these networks are ideally complemented by seafloor observations. Such considerations helped motivate the development of scientific cabled observatories that cross the subduction zone at two sites off Vancouver Island and one off central Oregon, but these have a limited spatial footprint along the strike of the subduction zone. The Pacific Northwest Seismic Network is leading a collaborative effort to implement an earthquake early warning system in Washington and Oregon using data streams from land networks as well as the few existing offshore instruments. For subduction zone earthquakes that initiate offshore, this system will provide a warning. However, the availability of real-time offshore instrumentation along the entire subduction zone would improve its reliability and accuracy, add up to 15 s to the warning time, and ensure an early warning for coastal communities near the epicenter. Furthermore, real-time networks of seafloor pressure sensors above the subduction zone would enable monitoring and contribute to accurate predictions of the incoming tsunami. There is also strong scientific motivation for offshore monitoring. We lack a complete knowledge of the plate convergence rate and direction. Measurements of steady deformation and observations of transient processes such as fluid pulsing, microseismic cycles, tremor and slow slip are necessary for assessing the dimensions of the locked zone and its along-strike segmentation. Long-term monitoring will also provide baseline observations that can be used to detect and evaluate changes in the subduction environment. There are significant engineering challenges to be solved to ensure the system is sufficiently reliable and maintainable. It must provide

  13. Inferences on the source mechanisms of the 1930 Irpinia (Southern Italy) earthquake from simulations of the kinematic rupture process

    Directory of Open Access Journals (Sweden)

    A. Gorini

    2004-06-01

    Full Text Available We examine here a number of parameters that define the source of the earthquake that occurred on 23 July 1930 in Southern Italy (in the Irpinia region). Starting from the source models proposed in different studies, we simulated the acceleration field for each hypothesized model and compared it with the macroseismic data. We used the hybrid stochastic-deterministic technique proposed by Zollo et al. (1997) for the simulation of the ground motion associated with the rupture of an extended fault. The accelerations simulated at several sites were associated with intensities using the empirical relationship proposed by Trifunac and Brady (1975), before being compared with the available data from the macroseismic catalogue. A good reproduction of the macroseismic field is provided by a normal fault striking in the Apenninic direction (approximately NW-SE) and dipping 55° toward the SW.

  14. Three-Dimensional Finite Difference Simulation of Ground Motions from the August 24, 2014 South Napa Earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, Arthur J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Univ. of California, Berkeley, CA (United States); Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Pitarka, Arben [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-06-15

    We performed three-dimensional (3D) anelastic ground motion simulations of the South Napa earthquake to investigate the performance of different finite rupture models and the effects of 3D structure on the observed wavefield. We considered rupture models reported by Dreger et al. (2015), Ji et al. (2015), Wei et al. (2015) and Melgar et al. (2015). We used the SW4 anelastic finite difference code developed at Lawrence Livermore National Laboratory (Petersson and Sjogreen, 2013) and distributed by the Computational Infrastructure for Geodynamics. This code can compute the seismic response for fully 3D sub-surface models, including surface topography and linear anelasticity. We used the 3D geologic/seismic model of the San Francisco Bay Area developed by the United States Geological Survey (Aagaard et al., 2008, 2010). Evaluation of earlier versions of this model indicated that the structure can reproduce the main features of observed waveforms from moderate earthquakes (Rodgers et al., 2008; Kim et al., 2010). Simulations were performed for a domain covering local distances (< 25 km) at a resolution providing simulated ground motions valid to 1 Hz.

  15. SELANSI: a toolbox for simulation of stochastic gene regulatory networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2018-03-01

    Gene regulation is inherently stochastic. In many applications concerning Systems and Synthetic Biology, such as the reverse engineering and the de novo design of genetic circuits, stochastic effects (though potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need for tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding Chemical Master Equation with a partial integro-differential equation that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self- and cross-regulation, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment and is available under the GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es. © The Author(s) 2017. Published by Oxford University Press.

  16. Mass-spring model used to simulate the sloshing of fluid in the container under the earthquake

    International Nuclear Information System (INIS)

    Wen Jing; Luan Lin; Gao Xiaoan; Wang Wei; Lu Daogang; Zhang Shuangwang

    2005-01-01

    A lumped-mass spring model is given in ASCE 4-86 to simulate the sloshing of liquid in a container under earthquake loading. In this paper, a new mass-spring model is developed within a 3D finite element model instead of a beam model. The stresses corresponding to the sloshing mass can be given directly, which avoids the construction of a beam model. This paper presents a 3-D mass-spring model for the total overturning moment as well as an example of the model. Moreover, the mass-spring models for the overturning moment to the sides and to the bottom of the container are constructed respectively. (authors)
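    The lumped-mass idea can be sketched as a single convective (sloshing) mass-spring oscillator driven by ground acceleration. The mass, stiffness, excitation and integrator below are hypothetical illustrations, not the ASCE 4-86 parameters:

```python
import math

# One convective sloshing mass on a spring, driven by ground acceleration
# ag(t); relative displacement x obeys  x'' + (k/m) x = -ag(t).
# All numbers are illustrative, not ASCE 4-86 values.
m = 1.0e4                                 # convective mass (kg)
k = 4.0e4                                 # spring constant (N/m), omega = 2 rad/s
dt, steps = 0.005, 4000                   # central-difference integration, 20 s
ag = lambda t: 2.0 * math.sin(3.0 * t)    # hypothetical ground acceleration (m/s^2)

x_prev, x = 0.0, 0.0
x_max = 0.0
for i in range(steps):
    acc = -ag(i * dt) - (k / m) * x       # equation of motion for relative x
    x_prev, x = x, 2.0 * x - x_prev + acc * dt * dt
    x_max = max(x_max, abs(x))

print("peak relative sloshing displacement (m):", x_max)
```

In a real analysis the spring constant and sloshing mass come from the tank geometry and fluid depth, and the peak displacement maps to the overturning moment discussed in the abstract.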

  17. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers and other network equipment, computer emulations (e.g., virtual machines) and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches to provide integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  18. F77NNS - A FORTRAN-77 NEURAL NETWORK SIMULATOR

    Science.gov (United States)

    Mitchell, P. H.

    1994-01-01

    F77NNS (A FORTRAN-77 Neural Network Simulator) simulates the popular back error propagation neural network. F77NNS is an ANSI-77 FORTRAN program designed to take advantage of vectorization when run on machines having this capability, but it will run on any computer with an ANSI-77 FORTRAN compiler. Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to biological nerve cells. Problems which involve pattern matching or system modeling readily fit the class of problems which F77NNS is designed to solve. The program's formulation trains a neural network using Rumelhart's back-propagation algorithm. Typically the nodes of a network are grouped together into clumps called layers. A network will generally have an input layer through which the various environmental stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to features of the problem being solved. Other layers, which form intermediate steps between the input and output layers, are called hidden layers. The back-propagation training algorithm can require massive computational resources to implement a large network such as a network capable of learning text-to-phoneme pronunciation rules as in the famous Sejnowski experiment. The Sejnowski neural network learns to pronounce 1000 common English words. The standard input data define the specific inputs that control the type of run to be made, and input files define the NN in terms of the layers and nodes, as well as the input/output (I/O) pairs. The program has a restart capability so that a neural network can be solved in stages suitable to the user's resources and desires. F77NNS allows the user to customize the patterns of connections between layers of a network. The size of the neural network to be solved is limited only by the amount of random access memory (RAM) available to the
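    The back-propagation procedure that F77NNS implements can be sketched in a few lines, shown here in Python rather than FORTRAN-77, and on the tiny XOR problem rather than a NETtalk-scale task:

```python
import numpy as np

# Back-propagation (Rumelhart's algorithm) on XOR with one hidden layer.
# Architecture and learning rate are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)  # input -> hidden
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)  # hidden -> output

mse0 = float(((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())
lr = 0.5
for _ in range(8000):
    h = sig(X @ W1 + b1)                    # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1.0 - out)   # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

mse = float(((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())
print("mse before / after training:", mse0, mse)
```

The layer/node grouping and the weight updates mirror the structure the abstract describes; F77NNS adds input files, restart capability and vectorized FORTRAN on top of the same core loop.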

  19. Consistent earthquake catalog derived from changing network configurations: Application to the Rawil Depression in the southwestern Helvetic Alps

    Science.gov (United States)

    Lee, Timothy; Diehl, Tobias; Kissling, Edi; Wiemer, Stefan

    2017-04-01

    Earthquake catalogs derived from several decades of observations are often biased by network geometries, location procedures, and data quality changing with time. To study the long-term spatio-temporal behavior of seismogenic fault zones at high resolution, a consistent homogenization and improvement of earthquake catalogs is required. Assuming that data quality and network density generally improve with time, procedures are needed which use the best available data to homogeneously solve the coupled hypocenter - velocity structure problem and which can equally be applied to earlier network configurations in the same region. A common approach to uniformly relocate earthquake catalogs is the calculation of a so-called "minimum 1D" model, which is derived from the simultaneous inversion for hypocenters and 1D velocity structure, including station-specific delay-time corrections. In this work, we will present strategies using the principles of the "minimum 1D" model to consistently relocate hypocenters recorded by the Swiss Seismological Service (SED) in the Swiss Alps over a period of 17 years in a region characterized by significant changes in network configurations. The target region of this study is the Rawil depression, which is located between the Aar and Mont Blanc massifs in southwestern Switzerland. The Rhone-Simplon Fault is located to the south of the Rawil depression and is considered a dextral strike-slip fault representing the dominant tectonic boundary between Helvetic nappes to the north and Penninic nappes to the south. Current strike-slip earthquakes, however, occur predominantly in a narrow, east-west striking cluster located in the Rawil depression north of the Rhone-Simplon Fault. Recent earthquake swarms near Sion and Sierre in 2011 and 2016, on the other hand, indicate seismically active dextral faults close to the Rhone valley. The region north and south of the Rhone-Simplon Fault is one of the most seismically active regions in

  20. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation and then recast as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
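    The paper's idea, learn the vector field and then iterate the learned map in time as a recurrent system, can be sketched with the simplest possible stand-ins: a linear least-squares fit in place of the trained feedforward network, and Euler steps in place of continuous-time integration. The harmonic-oscillator system and all numbers are illustrative:

```python
import numpy as np

# Learn the vector field f(x) of a known system from sampled (state, f(state))
# pairs, then "run" the learned dynamics recurrently. A linear model stands in
# for the feedforward network of the paper.
A_true = np.array([[0.0, 1.0], [-1.0, 0.0]])   # true dynamics: dx/dt = A x
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                  # sampled states
Y = X @ A_true.T                               # vector-field targets f(x)

A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit f(x) ~= x @ A_hat

x = np.array([1.0, 0.0])                       # initial condition
dt = 1e-3
for _ in range(int(round(2 * np.pi / dt))):    # integrate one full period
    x = x + dt * (x @ A_hat)                   # recurrent (Euler) update

print("state after one period:", x)            # returns close to [1, 0]
```

With a nonlinear system the least-squares fit would be replaced by the backpropagation-trained network, which is exactly the substitution the paper formalizes.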

  1. Social Network Mixing Patterns In Mergers & Acquisitions - A Simulation Experiment

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2011-01-01

    Full Text Available In the contemporary world of global business and continuously growing competition, organizations tend to use mergers and acquisitions to enforce their position on the market. The future organization’s design is a critical success factor in such undertakings. The field of social network analysis can enhance our understanding of these processes as it lets us reason about the development of networks, regardless of their origin. The analysis of mixing patterns is particularly useful as it provides an insight into how nodes in a network connect with each other. We hypothesize that organizational networks with compatible mixing patterns will be integrated more successfully. After conducting a simulation experiment, we suggest an integration model based on the analysis of network assortativity. The model can be a guideline for organizational integration, such as occurs in mergers and acquisitions.
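    Degree assortativity, the mixing-pattern statistic the experiment analyzes, is the Pearson correlation of the degrees at the two ends of each edge. A self-contained sketch on a toy star graph (which is perfectly disassortative; the graph is illustrative, not from the paper):

```python
import math

# Degree assortativity: Pearson correlation of end-point degrees over edges,
# with each undirected edge counted in both directions.
def assortativity(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in ys) / n)
    return cov / (sx * sy)

star = [(0, i) for i in range(1, 6)]   # one hub linked to five leaves
print("star graph assortativity:", assortativity(star))
```

A hub-and-leaf (star) network yields -1: high-degree nodes connect only to low-degree nodes, the kind of incompatibility between merging organizational networks the paper's hypothesis concerns.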

  2. Simulated annealing for tensor network states

    International Nuclear Information System (INIS)

    Iblisdir, S

    2014-01-01

    Markov chains for probability distributions related to matrix product states and one-dimensional Hamiltonians are introduced. With appropriate ‘inverse temperature’ schedules, these chains can be combined into a simulated annealing scheme for ground states of such Hamiltonians. Numerical experiments suggest that a linear, i.e., fast, schedule is possible in non-trivial cases. A natural extension of these chains to two-dimensional settings is next presented and tested. The obtained results compare well with Euclidean evolution. The proposed Markov chains are easy to implement and are inherently sign problem free (even for fermionic degrees of freedom). (paper)

  3. Design and investigation of a continuous radon monitoring network for earthquake precursory process in Great Tehran

    International Nuclear Information System (INIS)

    Negarestani, A.; Namvaran, M.; Hashemi, S.M.; Shahpasandzadeh, M.; Fatemi, S.J.; Alavi, S.A.; Mokhtari, M.

    2014-01-01

    Earthquakes usually occur after some preliminary anomalies in the physical and chemical characteristics of the environment and the earth's interior. Models that can explain these anomalies prompt scientists to monitor geophysical and geochemical characteristics in seismic areas for earthquake prediction. A review of the studies done so far indicates that radon gas is more sensitive as a precursor than other geo-gases. Based on previous research, radon is, in terms of time, a short-term precursor of earthquakes. Several empirical equations relate earthquake magnitude to the effective distance over which radon concentration variations are observed. In this work, an algorithm based on the Dobrovolsky equation (D = 10^(0.43M)), defining Expectation and Investigation circles for great Tehran, has been used. Radon concentration was measured with a RAD7 detector in more than 40 springs. The radon concentration of a spring, the spring discharge, the water temperature and the closeness of the spring location to active faults have been considered as the significant factors in selecting the best springs for implementing a continuous radon monitoring site. According to these factors, thirteen springs have been selected, as follows: Bayjan, Mahallat-Hotel, Avaj, Aala, Larijan, Delir, Lavij, Ramsar, Semnan, Lavieh, Legahi, Kooteh-Koomeh and Sarein. (author)
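    The Dobrovolsky relation used to draw the circles is simple to evaluate; D is the radius (in km) of the strain field expected to affect precursors for an earthquake of magnitude M:

```python
# Dobrovolsky strain radius D = 10**(0.43 * M), in km, as used in the paper
# to define the Expectation circle around a monitoring site.
def dobrovolsky_radius_km(magnitude):
    return 10 ** (0.43 * magnitude)

for m in (4.0, 5.0, 6.0, 7.0):
    print(f"M{m}: {dobrovolsky_radius_km(m):.0f} km")
```

For example, an M5 event is expected to influence precursors out to roughly 140 km, which shows why a spring network spanning greater Tehran can plausibly monitor regional seismicity.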

  4. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice in large-scale transmission network analysis, distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in distribution networks, it has become necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations.

  5. Computer simulation of randomly cross-linked polymer networks

    International Nuclear Information System (INIS)

    Williams, Timothy Philip

    2002-01-01

    In this work, Monte Carlo and Stochastic Dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task-parallel implementations of the lattice Monte Carlo Bond Fluctuation model and the Kremer-Grest Stochastic Dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends were qualitatively compared to recently published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneities were observed in the swollen model networks and were analysed by considering constituent substructures of varying size. The network connectivity determined the length scales at which the majority of the substructure unfolding process occurred. Simulated stress-strain curves and diffraction patterns for uniaxially deformed swollen networks were found to be consistent with experimental findings. Analysis of the relaxation dynamics of various network components revealed a dramatic slowdown due to the network connectivity. The cross-link junction spatial fluctuations for networks close to the sol-gel threshold were observed to be at least comparable with the phantom network prediction. The dangling chain ends were found to display the largest characteristic relaxation time. (author)

  6. Tsunami Simulations in the Western Makran Using Hypothetical Heterogeneous Source Models from World's Great Earthquakes

    Science.gov (United States)

    Rashidi, Amin; Shomali, Zaher Hossein; Keshavarz Farajkhah, Nasser

    2018-03-01

    The western segment of the Makran subduction zone is characterized by almost no major seismicity and no large earthquake for several centuries. A possible explanation for this behavior is that this segment is currently locked, accumulating energy to generate possible great future earthquakes. Taking into account this assumption, a hypothetical rupture area is considered in the western Makran to set up different tsunamigenic scenarios. Slip distribution models of four recent tsunamigenic earthquakes, i.e. the 2015 Chile Mw 8.3, 2011 Tohoku-Oki Mw 9.0 (using two different scenarios) and 2006 Kuril Islands Mw 8.3 events, are scaled onto the rupture area in the western Makran zone. Numerical modeling is performed to evaluate near-field and far-field tsunami hazards. Heterogeneity in slip distribution results in higher tsunami amplitudes. However, its effect diminishes from local tsunamis to regional and distant tsunamis. Among all considered scenarios for the western Makran, only a tsunamigenic earthquake similar to the 2011 Tohoku-Oki event can reproduce a significant far-field tsunami and is considered the worst-case scenario. The potential of a tsunamigenic source is dominated by the degree of slip heterogeneity and the location of greatest slip on the rupture area. For scenarios with similar slip patterns, the mean slip controls their relative power. Our conclusions also indicate that along the entire Makran coast, the southeastern coast of Iran is the area most vulnerable to tsunami hazard.

  7. Simulating dam - breach flood scenarios of the Tangjiashan landslide dam induced by the Wenchuan earthquake

    NARCIS (Netherlands)

    Fan, Xuanmei; Tang, C.; van Westen, C.J.; Alkema, D.

    2012-01-01

    Floods from failures of landslide dams can pose a hazard to people and property downstream, which have to be rapidly assessed and mitigated in order to reduce the potential risk. The Tangjiashan landslide dam induced by the Mw=7.9 2008 Wenchuan earthquake had impounded the largest lake in the

  8. Simulation of ecosystem service responses to multiple disturbances from an earthquake and several typhoons

    NARCIS (Netherlands)

    Chiang, L-C; Lin, Y-P; Schmeller, D.S.; Verburg, P.H.; Liu, Y.L.; Ding, T-S.

    2014-01-01

    Ongoing environmental disturbances (e.g., climate variation and anthropogenic activities) alter an ecosystem gradually over time. Sudden large disturbances (e.g., typhoons and earthquakes) can have a significant and immediate impact on landscapes and ecosystem services. This study explored how

  9. Crowdsourced earthquake early warning

    Science.gov (United States)

    Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.

    2015-01-01

    Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.
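    The paper's crowdsourced detection algorithms are its own; as a generic illustration of how shaking onsets are commonly detected on a single accelerometer trace, here is a classic STA/LTA (short-term over long-term average) trigger on synthetic data. The sampling rate, noise level, signal and threshold are all invented for the example:

```python
import numpy as np

# Classic STA/LTA trigger on a synthetic accelerometer trace: background
# noise for 10 s, then a 5 Hz "shaking" signal. Illustrative only, not the
# crowdsourced EEW algorithm of the paper.
rng = np.random.default_rng(2)
fs = 100                                       # sampling rate (samples/s)
n = 20 * fs
trace = 0.01 * rng.standard_normal(n)          # sensor noise
t_onset = 10 * fs                              # shaking starts at t = 10 s
trace[t_onset:] += 0.2 * np.sin(2 * np.pi * 5.0 * np.arange(n - t_onset) / fs)

def sta_lta(x, n_sta, n_lta):
    """Causal ratio of short- to long-term mean squared amplitude."""
    c = np.concatenate(([0.0], np.cumsum(x ** 2)))
    idx = np.arange(n_lta, len(x) + 1)
    sta = (c[idx] - c[idx - n_sta]) / n_sta
    lta = (c[idx] - c[idx - n_lta]) / n_lta
    return np.concatenate((np.zeros(n_lta - 1), sta / lta))

ratio = sta_lta(trace, n_sta=fs // 2, n_lta=5 * fs)
trigger = int(np.argmax(ratio > 4.0))          # first sample above threshold
print("trigger time:", trigger / fs, "s")
```

A single noisy consumer sensor triggers within a fraction of a second of onset here; the EEW problem is then aggregating many such noisy triggers fast enough to warn before strong shaking arrives.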

  10. Numerical simulation for gas-liquid two-phase flow in pipe networks

    International Nuclear Information System (INIS)

    Li Xiaoyan; Kuang Bo; Zhou Guoliang; Xu Jijun

    1998-01-01

    The characteristics of complex pipe networks cannot be represented directly by single-phase flow models; models for the pressure drop and void fraction of gas-liquid two-phase flow are required. Fluid network theory and computer numerical simulation techniques were applied to simulate and compute two-phase flow in pipe networks. The simulation results show that the distribution of flow resistance in a two-phase pipe network is non-linear.
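    The network-balancing idea behind such simulations can be illustrated with the classic single-phase Hardy Cross iteration for one loop; two-phase models replace the simple quadratic resistance law below with void-fraction-dependent correlations. Resistances and flows are illustrative numbers:

```python
# Hardy Cross loop balancing for two parallel pipes between the same nodes.
# Head loss law: dp = r * Q * |Q| (single-phase, illustrative).
r1, r2 = 4.0, 1.0                 # pipe resistances
Q_total = 3.0                     # total flow entering the loop
Q1 = Q_total / 2                  # initial guess for pipe 1; Q2 = Q_total - Q1

for _ in range(50):
    Q2 = Q_total - Q1
    # Net head loss around the loop must vanish: r1*Q1|Q1| - r2*Q2|Q2| = 0
    residual = r1 * Q1 * abs(Q1) - r2 * Q2 * abs(Q2)
    dQ = -residual / (2 * (r1 * abs(Q1) + r2 * abs(Q2)))
    Q1 += dQ                      # loop flow correction

Q2 = Q_total - Q1
print("flow split:", Q1, Q2)      # exact split: r1*Q1**2 == r2*Q2**2
```

With r1 = 4 and r2 = 1 the converged split is Q1 = 1 and Q2 = 2; the non-linear resistance is what forces the iterative solution the abstract alludes to.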

  11. A Neural Network Model for Dynamics Simulation | Bholoa ...

    African Journals Online (AJOL)

    University of Mauritius Research Journal, Vol 15, No 1 (2009).

  12. Fracture Network Modeling and GoldSim Simulation Support

    OpenAIRE

    Sugita, Kenichiro; Dershowitz, W.

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Äspö Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA).

  13. Distributed simulation using a real-time shared memory network

    Science.gov (United States)

    Simon, Donald L.; Mattern, Duane L.; Wong, Edmond; Musgrave, Jeffrey L.

    1993-01-01

    The Advanced Control Technology Branch of the NASA Lewis Research Center performs research in the area of advanced digital controls for aeronautic and space propulsion systems. This work requires the real-time implementation of both control software and complex dynamical models of the propulsion system. We are implementing these systems in a distributed, multi-vendor computer environment. Therefore, a need exists for real-time communication and synchronization between the distributed multi-vendor computers. A shared memory network is a potential solution which offers several advantages over other real-time communication approaches. A candidate shared memory network was tested for basic performance. The shared memory network was then used to implement a distributed simulation of a ramjet engine. The accuracy and execution time of the distributed simulation were measured and compared to the performance of the non-partitioned simulation. The ease of partitioning the simulation, the minimal time required to develop the communication between the processors, and the resulting execution time all indicate that the shared memory network is a real-time communication technique worthy of serious consideration.

  14. The design and implementation of a network simulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2013-11-01

    Full Text Available The platform includes an extensive library of applications that can be simulated, including Facebook and YouTube, which is updated and maintained on a regular basis [17]. A Markov chain approach is used to increase the level of realism in user simulation and dynamic...

  15. FDM simulation of earthquakes off western Kyushu, Japan, using a land-ocean unified 3D structure model

    Science.gov (United States)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Hara, Tatsuhiko

    2017-07-01

    Seismic activity occurred off western Kyushu, Japan, at the northern end of the Okinawa Trough on May 6, 2016 (14:11 JST), 22 days after the onset of the 2016 Kumamoto earthquake sequence. The area is adjacent to the Beppu-Shimabara graben where the 2016 Kumamoto earthquake sequence occurred. In the area off western Kyushu, a M7.1 earthquake also occurred on November 14, 2015 (5:51 JST), and a tsunami with a height of 0.3 m was observed. In order to better understand this seismic activity and these tsunamis, it is necessary to study the sources of, and strong motions due to, earthquakes in the area off western Kyushu. For such studies, validation of synthetic waveforms is important because of the presence of the oceanic water layer and thick sediments in the source area. We show the validation results for synthetic waveforms through nonlinear inversion analyses of small earthquakes (M ~5). We use a land-ocean unified 3D structure model, a 3D HOT finite-difference method ("HOT" stands for Heterogeneity, Ocean layer and Topography) and multi-graphic processing unit (GPU) acceleration to simulate the wave propagation. We estimate the first-motion augmented moment tensor (FAMT) solution based on both the long-period surface waves and short-period body waves. The FAMT solutions systematically shift landward by about 13 km, on average, from the epicenters determined by the Japan Meteorological Agency. The synthetics provide good reproductions of the observed full waveforms with periods of 10 s or longer. On the other hand, for waveforms with shorter periods (down to 4 s), the later surface waves are not reproduced well, while the first parts of the waveforms (comprising P- and S-waves) are reproduced to some extent. These results indicate that the current 3D structure model around Kyushu is effective for generating full waveforms, including surface waves with periods of about 10 s or longer. Based on these findings, we analyze the 2015 M7.1 event using the cross

  16. The effect of spatially heterogeneous damage in simple models of earthquake fault networks

    Science.gov (United States)

    Tiampo, K. F.; Dominguez, R.; Klein, W.; Serino, C.; Kazemian, J.

    2011-12-01

    Natural earthquake fault systems are highly heterogeneous in space; inhomogeneities occur because the earth is made of a variety of materials of different strengths which dissipate stress differently. Because the spatial arrangement of these materials depends on the geologic history, the spatial distribution of these various materials can be quite complex and occur over a variety of length scales. One way that the inhomogeneous nature of fault systems manifests itself is in the spatial patterns which emerge in seismicity (Tiampo et al., 2002, 2007). Despite their inhomogeneous nature, real faults are often modeled as spatially homogeneous systems. One argument for this approach is that earthquake faults experience long-range stress transfer, and if this range is longer than the length scales associated with the inhomogeneities of the system, the dynamics of the system may be unaffected by the inhomogeneities. However, it is not clear that this is always the case. In this work we study the scaling of earthquake models that are variations of Olami-Feder-Christensen (OFC) and Burridge-Knopoff (BK) models, in order to explore the effect of spatial inhomogeneities on earthquake-like systems when interaction ranges are long, but not necessarily longer than the distances associated with the inhomogeneities of the system (Burridge and Knopoff, 1967; Rundle and Jackson, 1977; Olami et al., 1988). For long ranges and without inhomogeneities, such models have been found to produce scaling similar to the GR scaling found in real earthquake systems (Rundle and Klein, 1993). In the earthquake models discussed here, damage is distributed inhomogeneously throughout, and the interaction ranges, while long, are not longer than all of the damage length scales. In addition, we attempt to model the effect of a fixed distribution of asperities, and find that this has an effect on the magnitude-frequency relation, producing larger events at regular intervals. We find that the scaling
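    A minimal homogeneous, nearest-neighbour OFC cellular automaton can be sketched as follows; the paper's variants add long-range interactions and spatially distributed damage on top of this basic drive-and-topple loop. Grid size, dissipation and boundary treatment (periodic, via np.roll) are illustrative choices:

```python
import numpy as np

# Basic OFC-style model: load the lattice until one site reaches threshold,
# then topple; each failing site passes a fraction alpha of its stress to its
# four neighbours (alpha < 0.25 makes the model dissipative).
L, alpha, threshold = 32, 0.2, 1.0
rng = np.random.default_rng(0)
stress = rng.uniform(0.0, threshold, (L, L))
sizes = []                                  # avalanche ("earthquake") sizes

for _ in range(2000):                       # driving steps
    stress += threshold - stress.max()      # uniform drive to next failure
    unstable = stress >= threshold
    size = 0
    while unstable.any():
        size += int(unstable.sum())
        give = np.where(unstable, stress, 0.0)
        stress[unstable] = 0.0              # failing sites drop to zero
        stress += alpha * (np.roll(give, 1, 0) + np.roll(give, -1, 0)
                           + np.roll(give, 1, 1) + np.roll(give, -1, 1))
        unstable = stress >= threshold
    sizes.append(size)

print("largest avalanche:", max(sizes), "of", L * L, "sites")
```

Histogramming `sizes` is how one checks for Gutenberg-Richter-like scaling; the paper's question is how that histogram changes once damage and asperities make the lattice inhomogeneous.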

  17. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

    The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.

  18. Overview of Ground-Motion Issues for Cascadia Megathrust Events: Simulation of Ground-Motions and Earthquake Site Response

    Directory of Open Access Journals (Sweden)

    Hadi Ghofrani

    2017-09-01

    Full Text Available Ground motions for earthquakes of M7.5 to 9.0 on the Cascadia subduction interface are simulated based on a stochastic finite-fault model and used to estimate average response spectra for reference firm soil conditions. The simulations are first validated by modeling the wealth of ground-motion data from the 2011 M9.0 Tohoku earthquake of Japan. Adjustments to the calibrated model are then made to consider average source, attenuation and site parameters for the Cascadia region. This includes an evaluation of the likely variability in stress drop for large interface earthquakes and an assessment of regional attenuation and site effects. We perform best-estimate simulations for a preferred set of input parameters. Typical results suggest mean values of 5%-damped pseudoacceleration in the range from about 100 to 200 cm/s², at frequencies from 1 to 4 Hz, for firm-ground conditions in Vancouver. Uncertainty in the most likely value of the parameter representing stress drop causes variability in simulated response spectra of about ±50%. Uncertainties in the attenuation model produce even larger variability in response spectral amplitudes: a factor of about two at a closest distance to the rupture plane (Rcd) of 100 km, becoming even larger at greater distances. It is thus important to establish the regional attenuation model for ground-motion simulations and to bound the source properties controlling radiation of ground motion. We calculate theoretical one-dimensional spectral amplification estimates for four selected Fraser River Delta sites to show how the presence of softer sediments in the region may alter the predicted ground motions. The amplification functions are largely consistent with observed spectral amplification at Fraser River delta sites, suggesting amplification by factors of 2.5–5 at the peak frequency of the site; we note that deep sites in the delta have a low peak frequency, ∼0.3 Hz. This work will aid in seismic hazard
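    The stochastic method underlying such finite-fault simulations builds on an omega-squared (Brune) source spectrum whose corner frequency is controlled by the stress parameter. A sketch of that ingredient with illustrative values, using the cgs-style conventions of Boore (1983): seismic moment in dyne-cm, stress drop in bars, shear velocity in km/s:

```python
# Corner frequency of the Brune omega-squared source spectrum (Boore, 1983):
#   fc = 4.9e6 * beta * (stress_drop / M0) ** (1/3)
# with beta in km/s, stress_drop in bars, M0 in dyne-cm. Values illustrative.
def corner_frequency_hz(m0_dyne_cm, stress_drop_bars=100.0, beta_km_s=3.7):
    return 4.9e6 * beta_km_s * (stress_drop_bars / m0_dyne_cm) ** (1.0 / 3.0)

def moment_dyne_cm(mw):
    # Hanks & Kanamori (1979) moment-magnitude relation
    return 10 ** (1.5 * mw + 16.05)

for mw in (7.5, 8.0, 9.0):
    fc = corner_frequency_hz(moment_dyne_cm(mw), stress_drop_bars=50.0)
    print(f"Mw {mw}: corner frequency {fc:.4f} Hz")
```

The cube-root dependence on stress drop is why the roughly factor-of-two uncertainty in that parameter translates into the ±50% spread in simulated response spectra noted above.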

  19. Simulation of Attacks for Security in Wireless Sensor Network.

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.

  20. Altered default mode network configuration in posttraumatic stress disorder after earthquake: A resting-state functional magnetic resonance imaging study.

    Science.gov (United States)

    Zhang, Xiao-Dong; Yin, Yan; Hu, Xiao-Lei; Duan, Lian; Qi, Rongfeng; Xu, Qiang; Lu, Guang-Ming; Li, Ling-Jiang

    2017-09-01

    The neural substrates of posttraumatic stress disorder (PTSD) are still not fully elucidated. Hence, this study explores topological alterations of the default mode network (DMN) in victims with PTSD after a magnitude-8.0 earthquake using resting-state functional magnetic resonance imaging (rs-fMRI). This study was approved by the local ethical review board, and all participants signed written informed consent. Sixty-two PTSD victims from the 2008 Sichuan earthquake and 62 matched exposed controls underwent rs-fMRI. PTSD was diagnosed with the Clinician-Administered PTSD Scale, and symptoms were scored with the PTSD Checklist-Civilian Version. The DMN was analyzed using graph theoretical approaches. Further, Pearson correlation analysis was performed to correlate neuroimaging metrics to neuropsychological scores in victims with PTSD. Victims with PTSD showed decreased DMN functional connectivity strength between the right superior frontal gyrus and left inferior parietal lobule (IPL), and showed increased functional connectivity between the right IPL and precuneus or left posterior cingulate cortex. It was also found that victims with PTSD exhibited decreased nodal efficiency in the right superior frontal gyrus and precuneus, and increased nodal efficiency in the right hippocampus/parahippocampus. Apart from that, victims with PTSD showed higher nodal degree in bilateral hippocampus/parahippocampus. In addition, the functional connectivity strength between the right IPL and precuneus correlated negatively with the avoidance scores (r = -0.26, P = .04). This study implicates alteration of topological features of the DMN in PTSD victims after a major earthquake, and provides new insights into DMN malfunction in PTSD based on graph theory.

  1. ESIM_DSN Web-Enabled Distributed Simulation Network

    Science.gov (United States)

    Bedrossian, Nazareth; Novotny, John

    2002-01-01

    In this paper, the eSim(sup DSN) approach to achieving distributed simulation capability over the Internet is presented. With this approach, a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke network topology. It provides the ability to dynamically link simulation subsystem models to different computers, as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim(sup DSN) demonstrator can be accessed at http://www.jsc.draper.com/esim, which hosts various examples of Web-enabled simulations.

  2. Conservative parallel simulation of priority class queueing networks

    Science.gov (United States)

    Nicol, David

    1992-01-01

    A conservative synchronization protocol is described for the parallel simulation of queueing networks having C job priority classes, where a job's class is fixed. This problem has long vexed designers of conservative synchronization protocols because of its seemingly poor ability to compute lookahead: the time of the next departure. A job in service having low priority can be preempted at any time by an arrival having higher priority and an arbitrarily small service time. The solution is to skew the event generation activity so that events for higher-priority jobs are generated farther ahead in simulated time than those for lower-priority jobs. Thus, when a lower-priority job enters service for the first time, all the higher-priority jobs that may preempt it are already known, and the job's departure time can be exactly predicted. Finally, the protocol was analyzed and it was demonstrated that good performance can be expected on the simulation of large queueing networks.
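    The lookahead argument can be made concrete: once all higher-priority arrivals are pre-generated, a low-priority job's departure under preemptive-resume scheduling is a deterministic function of known events. A sketch of that computation (the function name and event representation are illustrative, not taken from the paper):

    ```python
    def predict_departure(start, service, hp_arrivals):
        """Exact departure time of a low-priority job entering service at
        `start` with `service` time units of work, under preemptive-resume
        priority scheduling. `hp_arrivals` is the time-sorted list of
        (arrival_time, service_time) for all higher-priority jobs that have
        already been generated (the protocol's skewed lookahead)."""
        t, remaining = start, service
        for arrival, hp_service in hp_arrivals:
            if arrival >= t + remaining:
                break                           # low-priority job finishes first
            remaining -= max(0.0, arrival - t)  # work done before the preemption
            t = max(t, arrival) + hp_service    # resume once the HP job departs
        return t + remaining
    ```

    Because the loop consults only pre-generated events, the skewed event generation turns a poor-lookahead model into one whose departures are exactly computable.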

  3. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
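    As an illustration of the kind of model Brian targets, here is a leaky integrate-and-fire neuron integrated with forward Euler in plain Python. This is a generic sketch of the underlying mathematics, not Brian's actual API, and the parameter values are arbitrary:

    ```python
    def simulate_lif(I, dt=1e-4, T=0.1, tau=0.01, v_rest=0.0, v_thresh=1.0, R=1.0):
        """Leaky integrate-and-fire: tau * dv/dt = (v_rest - v) + R*I.
        Returns the spike times produced over a run of duration T seconds."""
        v, spikes = v_rest, []
        for k in range(int(T / dt)):
            v += dt * ((v_rest - v) + R * I) / tau  # forward-Euler step
            if v >= v_thresh:
                spikes.append(k * dt)
                v = v_rest                           # reset after a spike
        return spikes
    ```

    In Brian the same model would be written once as a differential-equation string and vectorised over the whole population.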

  4. Development of a heterogeneous microwave network fade simulation tool applicable to networks that span Europe

    Science.gov (United States)

    Paulson, Kevin S.; Basarudin, Hafiz

    2011-08-01

    Several research groups in Europe are developing joint channel simulators for arbitrarily complex networks of terrestrial and slant-path microwave telecommunications links. Currently, the Hull Rain Fade Network Simulator (HRFNS) developed at the University of Hull can simulate rain fade on arbitrary terrestrial networks in the southern United Kingdom, producing joint rain-fade time series with a 10 s integration time. This paper reports on work to broaden the function of the existing HRFNS to include slant paths such as Earth-space links and communications to high-altitude platforms and unmanned airborne systems. The area of application of the new simulation tool is being extended to the whole of Europe, and other fade mechanisms are being included. Nimrod/OPERA has been chosen as the input meteorological data set for the new system's rain-fade simulation. Zero-degree isotherm heights taken from NCEP/NCAR Reanalysis Data are used in conjunction with the Eden-Bacon sleet (wet snow) model to introduce melting-layer effects. Other fading mechanisms, including cloud fade, scintillation and absorption losses by atmospheric gases, can be added to the simulator. The simulator is tested against ITU-R models for the rain-fade distribution experienced by terrestrial and Earth-space links in the southern United Kingdom. Statistics of fade dynamics, i.e., fade slope and fade duration, for a simulated Earth-space link are compared to International Telecommunication Union models.
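    Rain fade on a single link is commonly parameterized by the ITU-R P.838 power law, where specific attenuation γ = k·Rᵅ (dB/km) for rain rate R (mm/h). A minimal sketch; the coefficient values below are illustrative placeholders, not values taken from the Recommendation:

    ```python
    def rain_specific_attenuation(rain_rate_mm_h, k, alpha):
        """ITU-R P.838-style power law: specific attenuation in dB/km."""
        return k * rain_rate_mm_h ** alpha

    # Assumed (illustrative) coefficients for a ~20 GHz terrestrial link.
    k, alpha = 0.075, 1.10
    path_length_km = 10.0
    # Total path fade at a 25 mm/h rain rate, assuming uniform rain.
    fade_dB = rain_specific_attenuation(25.0, k, alpha) * path_length_km
    ```

    A network simulator evaluates this per link against spatially correlated rain fields, rather than assuming uniform rain along the path.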

  5. Molecular Simulations of Actomyosin Network Self-Assembly and Remodeling

    Science.gov (United States)

    Komianos, James; Popov, Konstantin; Papoian, Garegin; Papoian Lab Team

    Actomyosin networks are an integral part of the cytoskeleton of eukaryotic cells and play an essential role in determining cellular shape and movement. Actomyosin network growth and remodeling in vivo is based on a large number of chemical and mechanical processes, which are mutually coupled and spatially and temporally resolved. To investigate the fundamental principles behind the self-organization of these networks, we have developed a detailed mechanochemical, stochastic model of actin filament growth dynamics, at a single-molecule resolution, where the nonlinear mechanical rigidity of filaments and their corresponding deformations under internally and externally generated forces are taken into account. Our work sheds light on the interplay between the chemical and mechanical processes governing the cytoskeletal dynamics, and also highlights the importance of diffusional and active transport phenomena. Our simulations reveal how different actomyosin micro-architectures emerge in response to varying the network composition. Support from NSF Grant CHE-1363081.

  6. Analyses on various parameters for the simulation of three-dimensional earthquake ground motions

    International Nuclear Information System (INIS)

    Watabe, M.; Tohdo, M.

    1979-01-01

    In these analyses, stochastic tools are extensively applied to hundreds of strong-motion accelerograms obtained in both Japan and the United States. Stochastic correlations between the maxima of earthquake ground motions, such as maximum acceleration, velocity, displacement and spectral intensity, are introduced in the first part, together with equations correlating such maxima to focal distance and earthquake magnitude. The meaning of effective peak acceleration for practical engineering purposes is then discussed. A new concept of a deterministic intensity function, derived from mathematical models verified by run and chi-square tests, is introduced in the middle part. With this concept, deterministic intensity functions for the horizontal as well as the vertical component are obtained and shown. The relation between duration time and magnitude is also introduced. (orig.)

  7. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    Science.gov (United States)

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanisms of chemical and biological processes in solution or in enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations required. Semiempirical QM/MM simulations have much higher efficiency, and their accuracy can be improved with a correction to reach the ab initio QM/MM level; the computational cost of the ab initio calculations for the correction then determines the overall efficiency. In this paper we developed a neural network method for QM/MM calculations as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted by the constructed neural network. The results are in excellent accordance with the reference data obtained from ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of one to two orders of magnitude, demonstrating that the neural network method combined with semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
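    The Δ-learning idea described above, predicting the ab-initio-minus-semiempirical energy difference with a neural network, can be sketched on a toy 1-D system. The energy functions, network size and training schedule are invented purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D "configurations": a cheap surrogate energy and a hypothetical
    # ab initio reference that the correction network must bridge.
    x = np.linspace(-1.0, 1.0, 200)[:, None]
    e_semi = x[:, 0] ** 2
    e_ref = e_semi + 0.5 + 0.3 * np.sin(3.0 * x[:, 0])
    delta = e_ref - e_semi                       # target correction

    # One-hidden-layer network trained by full-batch gradient descent.
    W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
    lr = 0.05
    for _ in range(3000):
        h = np.tanh(x @ W1 + b1)                 # hidden activations
        pred = (h @ W2 + b2)[:, 0]               # predicted correction
        err = pred - delta
        gW2 = h.T @ err[:, None] / len(x); gb2 = np.array([err.mean()])
        gh = err[:, None] @ W2.T * (1.0 - h ** 2)
        gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

    e_corrected = e_semi + pred                  # semiempirical + NN correction
    ```

    In the actual method the network input would be symmetry-adapted atomic descriptors, as in the Behler-Parrinello representation, rather than a raw coordinate.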

  8. 3D Dynamic Rupture Simulation Across a Complex Fault System: the Mw7.0, 2010, Haiti Earthquake

    Science.gov (United States)

    Douilly, R.; Aochi, H.; Calais, E.; Freed, A. M.

    2013-12-01

    Earthquake ruptures sometimes take place on a secondary fault and, surprisingly, do not activate an adjacent major one. The 1989 Loma Prieta earthquake is a classic case, where rupture occurred on a blind thrust while the adjacent San Andreas Fault was not triggered during the process. Similarly, the Mw7.0, January 12, 2010, Haiti earthquake ruptured a secondary blind thrust, the Léogâne fault, adjacent to the main plate boundary, the Enriquillo Plantain Garden Fault, which did not rupture during this event. Aftershock relocations delineate the Léogâne rupture as two north-dipping segments with slightly different dips, where the easternmost segment had mostly dip-slip motion and the westernmost one mostly strike-slip motion. In addition, an offshore south-dipping structure inferred from the aftershocks to the west of the rupture zone coincides with the offshore Trois Baies reverse fault, a region of Coulomb stress increase. In this study, we investigate the rupture dynamics of the Haiti earthquake in a complex fault system of multiple segments identified by the aftershock relocations. We assume a background stress regime that is consistent with the type of motion on each fault and with the regional tectonic regime. We initiate nucleation on the east segment of the Léogâne fault by defining a circular region with a 2 km radius where shear stress is slightly greater than the yield stress. By varying friction on the faults and the background stress, we find a range of plausible scenarios. In the absence of near-field seismic records of the event, we score the different models against the static deformation field derived from GPS and InSAR at the surface. All the plausible simulations show that the rupture propagates from the eastern to the western segment along the Léogâne fault, but neither onto the Enriquillo fault nor the Trois Baies fault. The best-fit simulation shows a significant increase of shear stresses on the Trois Baies
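    The Coulomb stress criterion invoked for the Trois Baies fault reduces to a one-line calculation. A sketch, where the sign conventions and the effective friction value are assumptions for illustration:

    ```python
    def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
        """Change in Coulomb failure stress (same units as the inputs).
        d_shear  : shear-stress change resolved onto the receiver fault's
                   slip direction (positive toward failure)
        d_normal : normal-stress change (unclamping positive)
        mu_eff   : effective friction coefficient (pore pressure absorbed)
        """
        return d_shear + mu_eff * d_normal

    # A fault receiving +0.5 MPa shear and +0.2 MPa unclamping is pushed
    # toward failure:
    dcfs = coulomb_stress_change(0.5, 0.2)
    ```

    A positive change brings the receiver fault closer to failure, which is why the stress increase on the Trois Baies fault is noteworthy even though it did not rupture.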

  9. Simulation of surface displacement and strain field of the 2011 Japan Mw9.0 earthquake

    Directory of Open Access Journals (Sweden)

    Chen Shujun

    2011-11-01

    Based on the dislocation theory of Okada, we adopted a finite-fault model inverted by Gavin Hayes from seismic data for the 2011 Japan Mw9.0 earthquake and computed the corresponding surface displacement and strain fields. The calculated displacement field is consistent in its trend with the observed GPS results, and both the surface displacement and strain fields show large spatial variations.
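    Once gridded surface displacements are available, whether from an Okada-type dislocation model or otherwise, the strain field follows from spatial gradients. A sketch with a smooth synthetic displacement field standing in for the model output:

    ```python
    import numpy as np

    # Synthetic horizontal surface displacements ux(x, y), uy(x, y) on a grid,
    # standing in for the output of an Okada-type dislocation model.
    x = np.linspace(-50e3, 50e3, 101)            # metres
    y = np.linspace(-50e3, 50e3, 101)
    X, Y = np.meshgrid(x, y, indexing="ij")
    sigma = 20e3
    ux = 1e-3 * (X / sigma) * np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    uy = np.zeros_like(ux)

    # Surface strain components from displacement gradients.
    dux_dx, dux_dy = np.gradient(ux, x, y)
    duy_dx, duy_dy = np.gradient(uy, x, y)
    exx = dux_dx
    eyy = duy_dy
    exy = 0.5 * (dux_dy + duy_dx)
    dilatation = exx + eyy
    ```

    The same finite-difference step applied to Okada-model displacements yields the strain field the abstract refers to.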

  10. Reactor experiments, workshops, and human resource development education simulating the Great East Japan Earthquake

    International Nuclear Information System (INIS)

    Horiguchi, Tetsuo; Yamamoto, Tomosada

    2012-01-01

    Kinki University Atomic Energy Research Institute has been implementing a social education program, including reactor experiments and training sessions for junior and senior high school teachers, since 1987, and in recent years an education program for ordinary citizens. However, the Great East Japan Earthquake has made it necessary to consider not only the dissemination of accurate knowledge, but also responses to public anxiety about nuclear power. This paper describes the social contribution activities and workshops conducted at Kinki University Atomic Energy Research Institute after the Great East Japan Earthquake and the Fukushima Daiichi Nuclear Power Station accident. In addition to training sessions, it introduces the implementation of telephone consultations about nuclear power and of earthquake reconstruction assistance advisory work at Kawamata Town, Date-gun, Fukushima Prefecture. As workshop support, it reports on human resource development education in the nuclear field at the university, activities at workshops for junior/senior high school teachers and the general public, and questionnaire surveys conducted at the workshops. (A.O.)

  11. Pre-seismic gravity anomalies before Linkou Ms6.4 earthquake by continuous gravity observation of Crustal Movement Observation Network of China

    Directory of Open Access Journals (Sweden)

    Xinsheng Wang

    2017-03-01

    A Ms6.4 earthquake occurred at Linkou County, Heilongjiang Province (44.8°N, 129.9°E) on January 2, 2016 at a depth of 580 km. Pre-seismic gravity anomalies obtained at a 1 Hz sampling rate from the Crustal Movement Observation Network of China (CMONOC) are analyzed after the earthquake. The results show that: (1) in contrast to previous studies, neither the pre-seismic nor the co-seismic amplitude perturbation is strictly inversely proportional to epicentral distance; (2) unlike for shallow earthquakes, the pre-seismic and co-seismic amplitude perturbations of gravity exhibit synchronous spatial variation with decreasing epicentral distance for the Linkou earthquake, possibly because the Linkou earthquake is a deep event that occurred in the Pacific Plate subduction zone; (3) compared with basement and semi-basement sites, a cave provides a better observation environment for a gPhone gravimeter to detect pre-seismic gravity anomalies.

  12. A precise hypocenter determination method using network correlation coefficients and its application to deep low-frequency earthquakes

    Science.gov (United States)

    Ohta, K.; Ide, S.

    2008-08-01

    Knowledge of the precise locations of deep low-frequency earthquakes (LFEs) along subduction zones is essential to constrain the spatial extent of various slow earthquakes and the underlying physical processes. We have developed a hypocenter determination method that utilizes the cross-correlation coefficient summed over many stations, denoted the network correlation coefficient (NCC). The method consists of two parts: (1) an estimation of relative hypocenter locations for every pair of events by a grid search, and (2) a linear least-squares inversion for self-consistent relative hypocenter locations about the initial centroid. We have applied this method to ten LFEs in the Tokai region, Japan. Statistically significant values of NCC indicate the relative locations for many pairs, which in turn determine the self-consistent locations. While the catalog depths are widely distributed, the relocated hypocenters fall within a 2-km depth range, which implies that LFEs in the Tokai region occur on the plate interface, similar to LFEs in western Shikoku.
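    The first stage of the method, finding the relative time shift that maximizes the correlation summed over stations, can be sketched as follows (waveform handling is simplified; real data would be bandpass-filtered and windowed):

    ```python
    import numpy as np

    def ncc(records_a, records_b, max_lag):
        """Network correlation coefficient: for each trial lag, sum the
        normalized cross-correlation between two events' waveforms over all
        stations; return (best_lag, best_ncc)."""
        best = (0, -np.inf)
        for lag in range(-max_lag, max_lag + 1):
            total = 0.0
            for a, b in zip(records_a, records_b):
                if lag >= 0:
                    x, y = a[lag:], b[:len(b) - lag]
                else:
                    x, y = a[:len(a) + lag], b[-lag:]
                x = x - x.mean()
                y = y - y.mean()
                denom = np.sqrt((x * x).sum() * (y * y).sum())
                if denom > 0:
                    total += (x * y).sum() / denom
            if total > best[1]:
                best = (lag, total)
        return best
    ```

    Summing over the network is what makes weak but coherent LFE signals stand out against station-level noise.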

  13. A three dimensional tsunami propagation simulation of the 2011 off Tohoku earthquake using an unstructured finite element model

    Science.gov (United States)

    Oishi, Y.; Piggott, M. D.; Maeda, T.; Nelson, R. B.; Gorman, G. J.; Kramer, S. C.; Collins, G. S.; Tsushima, H.; Furumura, T.

    2011-12-01

    Most present tsunami simulations are based on the two-dimensional linear or nonlinear long-wave equations, which assume the wavelength of the disturbance is large in comparison to the ocean depth. On the other hand, a few recent studies have tried to solve the three-dimensional (3D) Navier-Stokes equations, employing high-performance computers and finite difference methods with orthogonal meshes to achieve more accurate tsunami generation and propagation (e.g., Saito and Furumura, 2009). The goal of our current research is to develop highly accurate tsunami simulation models which will make full use of current and future high-performance supercomputers and appropriate numerical methods, for the purpose of contributing to tsunami disaster mitigation. In the present study, we adopt high-resolution unstructured meshes and solve the 3D Navier-Stokes equations using finite element methods. We employ the Fluidity-ICOM (http://amcg.ese.ic.ac.uk/fluidity) simulation code, a multi-purpose finite element / control volume based CFD and ocean dynamics code optimized to run in parallel on supercomputers. The complete fluid dynamical equations in three dimensions will lead to more accurate results than the conventional two-dimensional equations. The unstructured meshes make it possible to accurately and efficiently represent complex coastlines and bathymetry due to their flexibility and their multi-scale resolution capabilities; this is the key advantage of the present model over existing 3D models based upon the finite difference method with uniform orthogonal meshes. Using our tsunami simulation model together with high-performance computers, we expect an accurate representation of tsunami generation due to seafloor deformation caused by earthquakes, propagation of the tsunami in the deep ocean with accurate tsunami dispersion properties, and amplification of the tsunami onshore due to the irregular shape of coastlines. To validate
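    The cost of the long-wave approximation that the 3D approach avoids can be quantified with linear (Airy) dispersion. A sketch comparing phase speeds; the depth and wavelengths are illustrative:

    ```python
    import math

    def phase_speed_airy(k, h, g=9.81):
        """Linear-wave phase speed c = sqrt(g * tanh(k*h) / k)."""
        return math.sqrt(g * math.tanh(k * h) / k)

    def phase_speed_longwave(h, g=9.81):
        """Shallow-water (long-wave) limit c = sqrt(g*h)."""
        return math.sqrt(g * h)

    h = 4000.0                        # ocean depth, m
    k_long = 2 * math.pi / 100e3      # 100 km wavelength: long-wave OK
    k_short = 2 * math.pi / 10e3      # 10 km wavelength: dispersion matters
    ```

    For the 100 km wave the long-wave speed is within a few percent of the dispersive value; for the 10 km wave it overestimates the speed substantially, which is one motivation for dispersive or fully 3D models.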

  14. Hybrid neural network bushing model for vehicle dynamics simulation

    International Nuclear Information System (INIS)

    Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk

    2008-01-01

    Although the linear model has been widely used for the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing with respect to amplitude and frequency. An artificial neural network model has been suggested to capture the hysteretic responses of bushings; this model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model represents the linear stiffness and damping effects, and the artificial neural network takes the hysteretic responses into account. A rubber test was performed to capture bushing characteristics, with sine excitations of different frequencies and amplitudes applied. Random test results were used to update the weighting factors of the neural network model. The proposed model is shown to be more robust than a simple neural network model under step excitation input. A full-car simulation was carried out to verify the proposed bushing models, and showed that the hybrid model results are almost identical to those of the linear model under several maneuvers.

  15. SIMULATION OF WIRELESS SENSOR NETWORK WITH HYBRID TOPOLOGY

    Directory of Open Access Journals (Sweden)

    J. Jaslin Deva Gifty

    2016-03-01

    The low-rate Wireless Personal Area Network (WPAN) defined by the IEEE 802.15.4 standard has been developed to support low data rates and low-power applications. A Zigbee Wireless Sensor Network (WSN) operates at the network and application layers on top of IEEE 802.15.4. A Zigbee network can be configured in a star, tree or mesh topology, and performance varies from topology to topology: parameters such as network lifetime, energy consumption, throughput, delay in data delivery and sensor-field coverage area all depend on the network topology. In this paper, hybrid topologies built from two possible combinations, star-tree and star-mesh, are simulated to verify communication reliability; this approach combines the benefits of both network models. Jitter, delay and throughput are measured for these scenarios. Further, the impact of MAC parameters such as beacon order (BO) and superframe order (SO) on power consumption and channel utilization has been analysed for star, tree and mesh topologies in beacon-disabled and beacon-enabled modes under varying CBR traffic loads.

  16. Image Recognition Techniques for Earthquake Early Warning

    Science.gov (United States)

    Boese, M.; Heaton, T. H.; Hauksson, E.

    2011-12-01

    When monitoring on a PC a map of seismic stations whose colors scale with the ground-motion amplitudes transmitted in real time from a dense seismic network, an experienced person will fairly easily recognize when and where an earthquake occurs. Using the maximum amplitudes at stations at close epicentral distances, he or she might even be able to roughly estimate the size of the event. From the number and distribution of stations turning 'red', the person might also be able to recognize the rupturing fault in a large earthquake (M>>7.0) and to estimate the rupture dimensions while the rupture is still developing. Following this concept, we are adopting techniques for automatic image recognition to provide earthquake early warning. We rapidly correlate a set of templates with real-time ground-motion observations in a seismic network. If a 'suspicious' pattern of ground-motion amplitudes is detected, the algorithm starts estimating the location of the earthquake and its magnitude. For large earthquakes the algorithm estimates finite source dimensions and the direction of rupture propagation. These predictions are continuously updated using the current 'image' of ground-motion observations. A priori information, such as the orientation of major faults, helps enhance estimates in less dense networks. The approach is demonstrated for multiple simulated and real events in California.
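    The template-matching step can be sketched as a normalized 2-D cross-correlation over a gridded map of station amplitudes. This is a brute-force toy version; production code would use precomputed templates and FFT-based correlation:

    ```python
    import numpy as np

    def match_template(amp_map, template):
        """Slide `template` over the gridded station-amplitude map and
        return the position and score of the highest normalized correlation."""
        H, W = amp_map.shape
        h, w = template.shape
        t = template - template.mean()
        best, best_pos = -np.inf, (0, 0)
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                patch = amp_map[i:i + h, j:j + w]
                p = patch - patch.mean()
                denom = np.sqrt((p * p).sum() * (t * t).sum())
                score = (p * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (i, j)
        return best_pos, best
    ```

    A high score at some grid position plays the role of the "suspicious pattern" that triggers location and magnitude estimation.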

  17. Modeling and simulation of the USAVRE network and radiology operations

    Science.gov (United States)

    Martinez, Ralph; Bradford, Daniel Q.; Hatch, Jay; Sochan, John; Chimiak, William J.

    1998-07-01

    The U.S. Army Medical Command, led by Brooke Army Medical Center, has embarked on a visionary project. The U.S. Army Virtual Radiology Environment (USAVRE) is a CONUS-based network that connects all of the Army's major medical centers and Regional Medical Commands (RMCs). The purpose of the USAVRE is to improve the quality, access, and cost of radiology services in the Army via state-of-the-art medical imaging, computer, and networking technologies. The USAVRE contains multimedia viewing workstations and database archive systems based on a distributed computing environment using Common Object Request Broker Architecture (CORBA) middleware protocols. The underlying telecommunications network is an ATM-based backbone that connects the RMC regional networks and PACS networks at medical centers and RMC clinics. This project is a collaborative effort between Army, university, and industry centers with expertise in teleradiology and Global PACS applications. This paper describes a model and simulation of the USAVRE for performance evaluation purposes. As a first step, we present the results of a Technology Assessment and Requirements Analysis (TARA): an analysis of the workload in Army radiology departments, their equipment and their staffing. Using the TARA data and other workload information, we have developed a very detailed analysis of the workload and workflow patterns of our Medical Treatment Facilities, which forms the foundation for our modeling and simulation strategies for the VRE network. The workload analysis is performed for each radiology modality at an RMC site. The workload consists of the number of examinations per modality, the type, number, and size of images per exam, and the frequency of store-and-forward cases, second readings, and interactive consultation cases. These parameters are translated into the model described below. The model for the USAVRE is hierarchical in nature

  18. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichirou; Dershowitz, W.

    2005-01-01

    During Heisei-16, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the Mizunami Underground Research Laboratory (MIU), participation in Task 6 of the AEspoe Task Force on Modeling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU support during H-16 involved updating the H-15 FracMan discrete fracture network (DFN) models for the MIU shaft region and developing improved simulation procedures. Updates to the conceptual model included incorporation of 'Step2' (2004) versions of the deterministic structures, and revision of background fractures to be consistent with conductive structure data from the DH-2 borehole. Golder developed improved simulation procedures for these models through the use of hybrid discrete fracture network (DFN), equivalent porous medium (EPM), and nested DFN/EPM approaches. For each of these models, procedures were documented for the entire modeling process, including model implementation, MMP simulation, and shaft grouting simulation. Golder supported JNC participation in Tasks 6AB, 6D and 6E of the AEspoe Task Force on Modeling of Groundwater Flow and Transport during H-16. For Task 6AB, Golder developed a new technique to evaluate the role of grout in performance-assessment time-scale transport. For Task 6D, Golder submitted a report of H-15 simulations to SKB. For Task 6E, Golder carried out safety-assessment time-scale simulations at the block scale, using the Laplace Transform Galerkin method. During H-16, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of the use of site characterization data in safety assessment. This approach will aid in understanding how site characterization can progressively reduce site characterization uncertainty. (author)

  19. Nucleation and arrest of slow slip earthquakes: mechanisms and nonlinear simulations using realistic fault geometries and heterogeneous medium properties

    Science.gov (United States)

    Alves da Silva Junior, J.; Frank, W.; Campillo, M.; Juanes, R.

    2017-12-01

    Current models for slow slip earthquakes (SSEs) assume a simplified fault embedded in a homogeneous half-space. In these models SSE events nucleate at the transition from velocity strengthening (VS) to velocity weakening (VW) down-dip from the trench and propagate towards the base of the seismogenic zone, where high effective normal stress is assumed to arrest slip. Here, we investigate SSE nucleation and arrest using quasi-static finite element simulations, with rate-and-state friction, on a domain with heterogeneous properties and realistic fault geometry. We use the fault geometry of the Guerrero Gap in the Cocos subduction zone, where SSE events occur every 4 years, as a proxy for a subduction zone. Our model is calibrated using surface displacements from GPS observations. We apply boundary conditions according to the plate convergence rate and impose a depth-dependent pore pressure on the fault. Our simulations indicate that the fault geometry and the elastic properties of the medium play a key role in the arrest of SSE events at the base of the seismogenic zone: arrest occurs due to aseismic deformation of the domain that results in areas of elevated effective stress. SSE nucleation occurs at the transition from VS to VW and propagates as a crack-like expansion with increasing nucleation length prior to dynamic instability. Our simulations encompassing multiple seismic cycles indicate SSE recurrence times between 1 and 10 years and, importantly, a systematic increase of rupture area prior to dynamic instability, followed by a hiatus in SSE occurrence. We hypothesize that these SSE characteristics, if confirmed by GPS observations in different subduction zones, can add to the understanding of the nucleation of large earthquakes in the seismogenic zone.
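    The VS/VW distinction that controls nucleation and arrest is set by the sign of a − b in the rate-and-state framework. A one-function sketch of steady-state friction; the parameter values are illustrative:

    ```python
    import math

    def steady_state_friction(v, mu0=0.6, a=0.010, b=0.014, v0=1e-6):
        """Steady-state rate-and-state friction:
        mu_ss = mu0 + (a - b) * ln(v / v0).
        a - b < 0 -> velocity weakening (can host instability);
        a - b > 0 -> velocity strengthening (stable creep)."""
        return mu0 + (a - b) * math.log(v / v0)
    ```

    With the default (a − b) < 0 the friction drops as slip speeds up, which is the ingredient that lets slip accelerate toward instability in the VW region.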

  20. The Italian Project S2 - Task 4:Near-fault earthquake ground motion simulation in the Sulmona alluvial basin

    Science.gov (United States)

    Stupazzini, M.; Smerzini, C.; Cauzzi, C.; Faccioli, E.; Galadini, F.; Gori, S.

    2009-04-01

    Recently the Italian Department of Civil Protection (DPC), in cooperation with the Istituto Nazionale di Geofisica e Vulcanologia (INGV), has promoted the 'S2' research project (http://nuovoprogettoesse2.stru.polimi.it/) aimed at the design, testing and application of an open-source code for seismic hazard assessment (SHA). The tool envisaged will likely differ in several important respects from an existing international initiative (OpenSHA, Field et al., 2003). In particular, while "the OpenSHA collaboration model envisions scientists developing their own attenuation relationships and earthquake rupture forecasts, which they will deploy and maintain in their own systems", the main purpose of the S2 project is to provide a flexible computational tool for SHA, primarily suited to the needs of DPC, which are not necessarily scientific needs. Within S2, a crucial issue is to make alternative approaches available to quantify the ground motion, with emphasis on the near-field region. The SHA architecture envisaged will allow for the use of ground-motion descriptions other than those yielded by empirical attenuation equations, for instance user-generated motions provided by deterministic source and wave propagation simulations. In this contribution, after a brief presentation of Project S2, we illustrate some preliminary 3D scenario simulations performed in the alluvial basin of Sulmona (Central Italy), as an example of the type of descriptions that can be handled in the future SHA architecture. In detail, we selected some seismogenic sources (from the DISS database), believed to be responsible for a number of destructive historical earthquakes, and derived from them a family of simplified geometrical and mechanical source models spanning a reasonable range of parameters, so that the extent of the main uncertainties can be covered. Then, purely deterministic (for frequencies

  1. Pre-seismic gravity anomalies before Linkou Ms6.4 earthquake by continuous gravity observation of Crustal Movement Observation Network of China

    OpenAIRE

    Wang, Xinsheng; Li, Honglei; Han, Yufei

    2017-01-01

    A Ms6.4 earthquake occurred at Linkou County, Heilongjiang Province (44.8°N, 129.9°E) on January 2, 2016 at a depth of 580 km. Pre-seismic gravity anomalies obtained at a 1 Hz sampling rate from the Crustal Movement Observation Network of China (CMONOC) are analyzed after the earthquake. The results show that: (1) in contrast to previous studies, neither the pre-seismic nor the co-seismic amplitude perturbation is strictly inversely proportional to epicentral distance; (2) unlik...

  2. Simulating market dynamics: interactions between consumer psychology and social networks.

    Science.gov (United States)

    Janssen, Marco A; Jager, Wander

    2003-01-01

    Markets can show different types of dynamics, from quiet markets dominated by one or a few products, to markets with continual penetration of new and reintroduced products. In a previous article we explored the dynamics of markets from a psychological perspective using a multi-agent simulation model. The main results indicated that the behavioral rules dominating the artificial consumer's decision making determine the resulting market dynamics, such as fashions, lock-in, and unstable renewal. Results also show the importance of psychological variables like social networks, preferences, and the need for identity to explain the dynamics of markets. In this article we extend this work in two directions. First, we focus on a more systematic investigation of the effects of different network structures. The previous article was based on Watts and Strogatz's approach, which describes the small-world and clustering characteristics of networks. More recent research has demonstrated that many large networks display a scale-free power-law distribution for node connectivity. In terms of market dynamics this may imply that a small proportion of consumers have an exceptional influence on the consumptive behavior of others (hubs, or early adopters). We show that market dynamics is a self-organized property depending on the interaction between the agents' decision-making process (heuristics), the product characteristics (degree of satisfaction per unit of consumption, visibility), and the structure of interactions between agents (size of network and hubs in a social network).
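
    The scale-free hub effect described above can be sketched with a toy preferential-attachment growth model (an illustration of the Barabási-Albert mechanism, not the authors' market simulator; node and edge counts here are arbitrary):

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a scale-free graph: each new node links to m existing
    nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # start from a small clique of m+1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # listing every edge endpoint makes degree-proportional sampling easy
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = preferential_attachment(500)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

max_deg = max(degree.values())
avg_deg = sum(degree.values()) / len(degree)
```

    With 500 nodes the degree distribution is strongly skewed: a few hub nodes collect many times the average degree, mimicking the exceptionally influential consumers (hubs, early adopters) discussed above.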

  3. Design and Simulation Analysis for Integrated Vehicle Chassis-Network Control System Based on CAN Network

    Directory of Open Access Journals (Sweden)

    Wei Yu

    2016-01-01

    Because the subsystems used in vehicle chassis control perform different functions, hierarchical control strategies give rise to many kinds of network topology. Following the hierarchical control principle, this research puts forward an integrated chassis control strategy based on a supervision mechanism. The purpose is to examine how the integrated control architecture affects system control performance once a CAN network intervenes. Based on the principles of hierarchical control and fuzzy control, a fuzzy controller is designed to monitor and coordinate the ESP, AFS, and ARS. The IVC system is constructed with an upper supervisory controller and three subcontrol systems on the Simulink platform. The network topology structure of the IVC is proposed, and the IVC communication matrix based on CAN network communication is designed. With the common sensors and the subcontrollers as independent CAN network nodes, the effects of network-induced delay and packet-loss rate on system control performance are studied by simulation. The results show that the simulation method can be used for designing the communication network of the vehicle.

  4. Static and dynamic simulation of hydraulic networks with the DSNP simulation language

    International Nuclear Information System (INIS)

    Saphier, D.

    1978-01-01

    A special-purpose language, the Dynamic Simulator for Nuclear Power plants (DSNP), was developed. This higher-level language also includes elements for general-purpose dynamic simulations. A description of DSNP is presented, and the statements used in simulating hydraulic components are described in detail. The basic equations and correlations used in DSNP modules representing the various hydraulic elements are also presented. Two examples of the simulation of hydraulic networks are given using a subset of the DSNP language. The first example, a simple hydraulic loop, demonstrates the simulation method, while the second, a more complicated double hydraulic loop, demonstrates DSNP's flexibility in developing and changing complex simulations. 7 refs
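
    The steady state of a simple hydraulic loop like the first example can be sketched as the flow at which the pump head balances the loop's friction loss. The sketch below is a generic textbook balance with made-up pump- and loss-curve coefficients, not DSNP code:

```python
def pump_head(q, h0=50.0, c=0.08):
    """Illustrative quadratic pump curve: head falls off with flow."""
    return h0 - c * q * q

def loop_loss(q, k=0.12):
    """Friction loss around the loop, roughly proportional to q**2."""
    return k * q * q

def steady_flow(lo=0.0, hi=100.0):
    """Bisect for the operating point where pump head equals loop loss."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pump_head(mid) - loop_loss(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_op = steady_flow()  # analytically q_op**2 = h0 / (c + k) = 250 here
```

    A dynamic simulator such as DSNP integrates the transient toward this operating point; the bisection above only finds the static solution of the same balance.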

  5. Three-Dimensional Time Domain Simulation of Tsunami-Generated Electromagnetic Fields: Application to the 2011 Tohoku Earthquake Tsunami

    Science.gov (United States)

    Minami, Takuto; Toh, Hiroaki; Ichihara, Hiroshi; Kawashima, Issei

    2017-12-01

    We present a new finite element simulation approach in time domain for electromagnetic (EM) fields associated with motional induction by tsunamis. Our simulation method allows us to conduct three-dimensional simulation with realistic smooth bathymetry and to readily obtain broad structures of tsunami-generated EM fields and their time evolution, benefitting from time domain implementation with efficient unstructured mesh. Highly resolved mesh near observation sites enables us to compare simulation results with observed data and to investigate tsunami properties in terms of EM variations. Furthermore, it makes source separations available for EM data during tsunami events. We applied our simulation approach to the 2011 Tohoku tsunami event with seawater velocity from linear-long and linear-Boussinesq approximations. We revealed that inclusion of dispersion effect is necessary to explain magnetic variations at a northwest Pacific seafloor site, 1,500 km away from the epicenter, while linear-long approximation is enough at a seafloor site 200 km east-northeast of the epicenter. Our simulations provided, for the first time, comprehensive views of spatiotemporal structures of tsunami-generated EM fields for the 2011 Tohoku tsunami, including large-scale electric current circuits in the ocean. Finally, subtraction of the simulated magnetic fields from the observed data revealed symmetric magnetic variations on the western and eastern sides of the epicenter for 30 min since the earthquake origin time. These imply a pair of southward and northward electric currents in the ionosphere that exist on the western and eastern sides of the source region, respectively, which was likely to be caused by tsunami-generated atmospheric acoustic/gravity waves reaching the ionosphere.

  6. Image reconstruction using Monte Carlo simulation and artificial neural networks

    International Nuclear Information System (INIS)

    Emert, F.; Missimner, J.; Blass, W.; Rodriguez, A.

    1997-01-01

    PET data sets are subject to two types of distortions during acquisition: the imperfect response of the scanner and attenuation and scattering in the active distribution. In addition, the reconstruction of voxel images from the line projections composing a data set can introduce artifacts. Monte Carlo simulation provides a means for modeling the distortions and artificial neural networks a method for correcting for them as well as minimizing artifacts. (author) figs., tab., refs

  7. Simulating Geomagnetically Induced Currents in the Irish Power Network

    Science.gov (United States)

    Jones, A. G.; Blake, S. P.; Gallagher, P.; McCauley, J.; Hogg, C.; Beggan, C.; Thomson, A. W. P.; Kelly, G.; Walsh, S.

    2014-12-01

    Geomagnetic storms are known to cause geomagnetically induced currents (GICs) which can damage or destroy transformers on power grids. Previous studies have examined the vulnerability of power networks in countries such as the UK, New Zealand, Canada and South Africa. Here we describe the application of a British Geological Survey (BGS) thin-sheet conductivity model to compute the geo-electric field from the variation of the magnetic field, in order to better quantify the risk of space weather to Ireland's power network. This was achieved using DIAS magnetotelluric data from across Ireland. As part of a near-real-time warning package for Eirgrid (who oversee Ireland's transmission network), severe storm events such as the Halloween 2003 storm and the corresponding GIC flows at transformers are simulated.
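
    In such studies the GIC at a transformer is often approximated as a linear response to the horizontal geoelectric field, gic = a*Ex + b*Ey, with coefficients fixed by the network topology and resistances (Lehtinen-Pirjola style). A minimal sketch with hypothetical coefficients and field values, not Eirgrid's actual numbers:

```python
def gic(a, b, ex, ey):
    """Linear-response estimate: GIC (A) = a*Ex + b*Ey, with a, b in
    A per (V/km) and Ex, Ey the northward/eastward geoelectric field."""
    return a * ex + b * ey

# hypothetical per-transformer network coefficients (A per V/km)
coeffs = {"substation_A": (60.0, -25.0), "substation_B": (-12.0, 40.0)}

# storm-time field: 1.0 V/km northward (Ex), 0.2 V/km eastward (Ey)
gic_amps = {site: gic(a, b, 1.0, 0.2) for site, (a, b) in coeffs.items()}
```

    The hard part, which the BGS thin-sheet model addresses, is obtaining Ex and Ey from the measured magnetic-field variation and the ground conductivity; once the field is known, the network response reduces to linear algebra of this kind.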

  8. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, owing to increasing competition, mounting cost pressure, and the advantages of economies of scale. Consequently, changes in the economic situation after a certain period require an adjustment of the network model in order to obtain the optimal cost under current economic conditions. This paper aims to develop a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A Simulated Annealing algorithm is proposed to tackle the model. The numerical solution showed that the proposed model and solution method are practical.
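
    A generic simulated-annealing sketch for a toy warehouse-selection instance follows. The costs are random synthetic numbers and the move is a single open/close flip; this illustrates the metaheuristic only, not the paper's MILP model or data:

```python
import math
import random

# Toy instance (hypothetical data): fixed opening costs and
# per-customer service costs for 5 candidate warehouses, 12 customers.
rng = random.Random(0)
n_wh, n_cust = 5, 12
fixed = [rng.uniform(50, 100) for _ in range(n_wh)]
serve = [[rng.uniform(1, 20) for _ in range(n_wh)] for _ in range(n_cust)]

def cost(open_set):
    """Total cost: fixed costs of open warehouses plus each customer
    served from its cheapest open warehouse."""
    if not open_set:
        return float("inf")
    total = sum(fixed[w] for w in open_set)
    for c in range(n_cust):
        total += min(serve[c][w] for w in open_set)
    return total

def anneal(t0=50.0, cooling=0.995, steps=5000):
    state = {0}  # start with a single warehouse open
    best, best_cost = set(state), cost(state)
    t = t0
    for _ in range(steps):
        w = rng.randrange(n_wh)
        cand = state ^ {w}  # flip one open/close decision
        d = cost(cand) - cost(state)
        # accept improvements always, worsenings with Boltzmann probability
        if d < 0 or rng.random() < math.exp(-d / t):
            state = cand
        c = cost(state)
        if c < best_cost:
            best, best_cost = set(state), c
        t *= cooling
    return best, best_cost

best_config, best_cost = anneal()
```

    The geometric cooling schedule and single-flip neighborhood are the simplest common choices; a real redesign study would tune both and encode the capacity constraints in the cost function.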

  9. NCC Simulation Model: Simulating the operations of the network control center, phase 2

    Science.gov (United States)

    Benjamin, Norman M.; Paul, Arthur S.; Gill, Tepper L.

    1992-01-01

    The simulation of the network control center (NCC) is in the second phase of development. This phase seeks to further develop the work performed in phase one, which concentrated on the computer systems and interconnecting network. The focus of phase two is the implementation of the network message dialogues and the resources controlled by the NCC. These resources are requested, initiated, monitored and analyzed via network messages. In the NCC, network messages take the form of packets that are routed across the network. These packets are generated, encoded, decoded and processed by the network host processors that generate and service the message traffic on the network connecting these hosts. As a result, the message traffic is used to characterize the work done by the NCC and the connected network. Phase one of the model development represented the NCC as a network of bi-directional single-server queues and message-generating sources. The generators represented the external segment processors; the server-based queues represented the host processors. The NCC model consists of the internal and external processors which generate message traffic on the network that links these hosts. To fully realize the objective of phase two it is necessary to identify and model the processes in each internal processor. These processes live in the operating system of the internal host computers and handle tasks such as high-speed message exchanging, ISN and NFE interfacing, event monitoring, network monitoring, and message logging. Interprocess communication is achieved through operating system facilities. The overall performance of a host is determined by its ability to service messages generated by both internal and external processors.
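
    A single-server queue of the kind used in the phase-one model can be sketched with Lindley's recursion for successive waiting times, W(k+1) = max(0, W(k) + S(k) - A(k+1)); for Poisson arrivals and exponential service (M/M/1) the long-run mean wait should approach rho/(mu - lam). This is a generic queueing illustration, not the NCC model itself:

```python
import random

def mm1_mean_wait(lam=0.8, mu=1.0, n_jobs=200000, seed=2):
    """Estimate the mean waiting time in queue for M/M/1 via
    Lindley's recursion over successive jobs."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_jobs):
        s = rng.expovariate(mu)   # service time of current job
        a = rng.expovariate(lam)  # inter-arrival time to next job
        w = max(0.0, w + s - a)   # wait of the next arriving job
        total += w
    return total / n_jobs

mean_wait = mm1_mean_wait()
# theory: E[W] = rho / (mu - lam) = 0.8 / 0.2 = 4 time units
```

    Chaining several such queues, one per host processor, with routing between them is the discrete-event analogue of the phase-one queueing network described above.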

  10. Analyzing, Modeling, and Simulation for Human Dynamics in Social Network

    Directory of Open Access Journals (Sweden)

    Yunpeng Xiao

    2012-01-01

    This paper studies human behavior in the top social network system in China (the Sina Microblog system). By analyzing real-life data at a large scale, we find that the message releasing interval (intermessage time) obeys a power-law distribution both at the individual level and at the group level. Statistical analysis also reveals that human behavior in social networks is mainly driven by four basic elements: social pressure, social identity, social participation, and social relations between individuals. Empirical results present the four elements' impact on human behavior and the relations between these elements. To further understand the mechanism of such dynamic phenomena, a hybrid human dynamic model which combines the “interest” of individuals and the “interaction” among people is introduced, incorporating the four elements simultaneously. To provide a solid evaluation, we simulate both two-agent and multiagent interactions with real-life social network topology. We achieve consistent results between the empirical studies and the simulations. The model can provide a good understanding of human dynamics in social networks.
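
    Power-law claims of this kind are usually backed by an exponent estimate. A standard continuous maximum-likelihood (Hill-type) estimator is alpha_hat = 1 + n / sum(ln(x_i / x_min)); the sketch below recovers a known exponent from synthetic Pareto "inter-message times" (illustrative only, not the Sina Microblog data):

```python
import math
import random

rng = random.Random(1)
x_min, alpha_true = 1.0, 2.5

# draw power-law (Pareto) samples by inverse-transform sampling:
# x = x_min * (1 - u) ** (-1 / (alpha - 1)) for u uniform in [0, 1)
samples = [x_min * (1 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(20000)]

# continuous maximum-likelihood estimator of the power-law exponent
n = len(samples)
alpha_hat = 1.0 + n / sum(math.log(x / x_min) for x in samples)
```

    With 20,000 samples the estimate lands close to the true exponent; on real data the choice of x_min (where the power-law tail starts) is the delicate step.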

  11. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Generally, geostatistical simulation methods are used to generate several realizations of physical properties in the subsurface; these methods are based on variogram analysis and are limited to measuring the correlation between variables at two locations only. In this paper, we propose a simulation of properties based on a supervised neural network trained on an existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can be combined easily. To do this, we used a neural network with a feed-forward multi-layer perceptron architecture, and the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of physical properties on the other, by changing each time the random values of the input neurons, which were kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).

  12. Source characteristics of moderate-to-strong earthquakes in the Nantou area, Taiwan: insight from strong ground motion simulations

    Science.gov (United States)

    Wen, Yi-Ying; Chao, Shen-Yu; Yen, Yin-Tung; Wen, Strong

    2017-09-01

    In Taiwan, the Nantou area is a seismically active region where several moderate events have occurred, causing disasters during the past century. Here, we applied strong ground motion simulation with the empirical Green's function method to investigate the source characteristics of the eight moderate blind-fault events that struck the Nantou area in 1999 and 2013. The results show that for these Nantou events, a high stress drop and a dependence on focal depth were noted, which might be related to the immature buried fault in this area. From the viewpoint of seismic hazard prevention and preparation, future earthquake scenarios that include a high stress drop should be applied in further analyses, especially for the moderate-to-large events originating from immature blind faulting.
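
    Stress drops like those discussed here are conventionally estimated from seismic moment and rupture size. A sketch using the Hanks-Kanamori moment-magnitude relation and the Eshelby circular-crack formula, with a hypothetical rupture radius (not values from this study):

```python
def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks-Kanamori):
    M0 = 10 ** (1.5 * Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def stress_drop_circular(m0, radius_m):
    """Eshelby circular-crack estimate in Pa:
    delta_sigma = 7 * M0 / (16 * r**3)."""
    return 7.0 * m0 / (16.0 * radius_m ** 3)

# hypothetical moderate event: Mw 6.0 on a 3 km radius rupture patch
m0 = moment_from_mw(6.0)
dsigma_mpa = stress_drop_circular(m0, 3000.0) / 1e6  # about 20 MPa
```

    The cubic dependence on radius is why compact ruptures on immature faults can yield unusually high stress drops for their magnitude.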

  13. Extension of heuristics for simulating population overflow in Jackson tandem queuing networks to non-Markovian tandem queuing networks

    NARCIS (Netherlands)

    Zaburnenko, T.S.; de Boer, Pieter-Tjerk; Haverkort, Boudewijn R.H.M.

    In this paper we extend previously proposed state-dependent importance sampling heuristics for simulation of population overflow in Markovian tandem queuing networks to non-Markovian tandem networks, and experimentally demonstrate the asymptotic efficiency of the resulting heuristics.
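
    The idea behind such importance-sampling heuristics can be sketched on a simpler rare event: the probability that a sum of i.i.d. Exp(1) variables exceeds a large threshold. Sampling from an exponentially tilted distribution makes the event common, and the likelihood ratio corrects the estimate. This is a generic illustration, not the paper's state-dependent tandem-queue heuristic:

```python
import math
import random

def tail_prob_is(n=10, b=30.0, lam_g=1.0 / 3.0, n_runs=20000, seed=11):
    """Importance-sampling estimate of P(X1 + ... + Xn > b) for i.i.d.
    Exp(1) variables: sample from Exp(lam_g) with lam_g < 1 (so overflow
    paths are no longer rare) and reweight by the likelihood ratio
    prod_i f(x_i)/g(x_i) = lam_g**(-n) * exp(-(1 - lam_g) * s)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        s = sum(rng.expovariate(lam_g) for _ in range(n))
        if s > b:
            total += math.exp(-(1.0 - lam_g) * s) / (lam_g ** n)
    return total / n_runs

p_hat = tail_prob_is()  # true value is about 7e-6; naive MC would need
                        # millions of runs to see even a handful of hits
```

    The tilt lam_g = n/b matches the twisted mean to the threshold, the analogue of the state-dependent change of measure used for queueing networks.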

  14. Transforming network simulation data to semantic data for network attack planning

    CSIR Research Space (South Africa)

    Chan, Ke Fai Peter

    2017-03-01

    Full Text Available This research paper investigates a technique to transform network simulation data into linked data through the use of ontology models. By transforming the data, it allows one to use semantic reasoners to infer and reason additional insight. A case...

  15. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  16. An artificial neural network for detection of simulated dental caries

    Energy Technology Data Exchange (ETDEWEB)

    Kositbowornchai, S. [Khon Kaen Univ. (Thailand). Dept. of Oral Diagnosis; Siriteptawee, S.; Plermkamon, S.; Bureerat, S. [Khon Kaen Univ. (Thailand). Dept. of Mechanical Engineering; Chetchotsak, D. [Khon Kaen Univ. (Thailand). Dept. of Industrial Engineering

    2006-08-15

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was the Learning Vector Quantization (LVQ), used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' the artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic tests. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58 and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make correct interpretations of dental caries. (orig.)
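
    The LVQ1 rule named above is simple enough to sketch: the nearest prototype is pulled toward a training sample of its own class and pushed away from samples of other classes. The toy two-dimensional clusters below stand in for image features; this is a generic LVQ1 illustration, not the authors' network:

```python
import random

def lvq1_train(data, labels, n_classes, lr=0.1, epochs=30, seed=7):
    """Minimal LVQ1 with one prototype per class."""
    rng = random.Random(seed)
    # initialise each prototype at a random sample of its class
    protos = []
    for c in range(n_classes):
        members = [x for x, y in zip(data, labels) if y == c]
        protos.append(list(rng.choice(members)))
    for _ in range(epochs):
        for x, y in zip(data, labels):
            dists = [(sum((a - b) ** 2 for a, b in zip(x, p)), c)
                     for c, p in enumerate(protos)]
            _, c_near = min(dists)
            # attract if the nearest prototype's class matches, else repel
            sign = lr if c_near == y else -lr
            protos[c_near] = [p + sign * (a - p)
                              for a, p in zip(x, protos[c_near])]
    return protos

def lvq_predict(protos, x):
    dists = [(sum((a - b) ** 2 for a, b in zip(x, p)), c)
             for c, p in enumerate(protos)]
    return min(dists)[1]

# toy "sound" vs "caries" feature clusters (hypothetical 2-D features)
rng = random.Random(0)
data = ([(rng.gauss(0.0, 0.5), rng.gauss(0.0, 0.5)) for _ in range(50)] +
        [(rng.gauss(3.0, 0.5), rng.gauss(3.0, 0.5)) for _ in range(50)])
labels = [0] * 50 + [1] * 50
protos = lvq1_train(data, labels, 2)
accuracy = sum(lvq_predict(protos, x) == y
               for x, y in zip(data, labels)) / len(data)
```

    On well-separated clusters like these the classifier is essentially perfect; real image features overlap, which is where the sensitivity/specificity trade-offs reported above come from.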


  18. Real-Time-Simulation of IEEE-5-Bus Network on OPAL-RT-OP4510 Simulator

    Science.gov (United States)

    Atul Bhandakkar, Anjali; Mathew, Lini, Dr.

    2018-03-01

    Real-Time Simulator tools offer high computing performance and are widely used for the design and improvement of electrical systems. With the advancement of software tools such as MATLAB/SIMULINK, with its Real-Time Workshop (RTW) and Real-Time Windows Target (RTWT), real-time simulators are used extensively in many engineering fields, including industry, education, and research institutions. OPAL-RT-OP4510 is a Real-Time Simulator used in both industry and academia. In this paper, real-time simulation of the IEEE-5-Bus network is carried out by means of the OPAL-RT-OP4510 with a CRO and other hardware. The performance of the network is observed with the introduction of faults at various locations. The waveforms of voltage, current, and active and reactive power are observed in the MATLAB simulation environment and on the CRO. Also, a Load Flow Analysis (LFA) of the IEEE-5-Bus network is computed using the MATLAB/Simulink powergui load-flow tool.

  19. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Background: Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results: We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions: We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
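
    The discrete part of such simulators rests on Gillespie's direct method. A minimal sketch for a birth-death process (mRNA produced at rate k, degraded at rate gamma*n), whose steady-state copy number is Poisson with mean k/gamma; this is a generic illustration, not BioNetS code:

```python
import math
import random

def gillespie_birth_death(k=10.0, gamma=1.0, t_end=500.0, seed=3):
    """Exact stochastic simulation of mRNA birth (propensity k) and
    degradation (propensity gamma * n) by Gillespie's direct method.
    Returns the time-averaged copy number."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    total_time, weighted_n = 0.0, 0.0
    while t < t_end:
        a_birth, a_death = k, gamma * n
        a0 = a_birth + a_death
        # exponential waiting time to the next reaction
        dt = -math.log(1.0 - rng.random()) / a0
        weighted_n += n * dt  # accumulate time-weighted copy number
        total_time += dt
        t += dt
        # choose which reaction fires, proportional to its propensity
        if rng.random() * a0 < a_birth:
            n += 1
        else:
            n -= 1
    return weighted_n / total_time

mean_n = gillespie_birth_death()  # should be near k / gamma = 10
```

    For species with large copy numbers this event-by-event method becomes expensive, which is exactly why BioNetS switches those species to chemical Langevin equations.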

  20. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  1. Networks in disasters: Multidisciplinary communication and coordination in response and recovery to the 2010 Haiti Earthquake (Invited)

    Science.gov (United States)

    McAdoo, B. G.; Augenstein, J.; Comfort, L.; Huggins, L.; Krenitsky, N.; Scheinert, S.; Serrant, T.; Siciliano, M.; Stebbins, S.; Sweeney, P.; University Of Pittsburgh Haiti Reconnaissance Team

    2010-12-01

    The 12 January 2010 earthquake in Haiti demonstrates the necessity of understanding information communication between disciplines during disasters. Armed with data from a variety of sources, from geophysics to construction, water and sanitation to education, decision makers can initiate well-informed policies to reduce the risk from future hazards. At the core of this disaster was a natural hazard that occurred in an environmentally compromised country. The earthquake itself was not solely responsible for the magnitude of the disaster: poor construction practices precipitated by extreme poverty, two centuries of post-colonial environmental degradation, and a history of dysfunctional government shoulder much of the responsibility. Future policies must take into account the geophysical reality that future hazards are inevitable and may occur within the very near future, and how various institutions will respond to the stressors. As the global community comes together in reconstruction efforts, it is necessary for the various actors to take into account the vulnerabilities exposed by the earthquake, most vividly seen during the initial response to the disaster. Responders are forced to prioritize resources designated for building collapse and infrastructure damage, delivery of critical services such as emergency medical care, and delivery of food and water to those in need. Past disasters have shown that communication lapses between the response and recovery phases result in many of the exposed vulnerabilities not being adequately addressed, and the recovery hence fails to bolster compromised systems. The response reflects the basic characteristics of a Complex Adaptive System, where new agents emerge and priorities within existing organizations shift to deal with new information.
To better understand how information is shared between actors during this critical transition, we are documenting how information is communicated between critical sectors during the

  2. Assessment of earthquake-triggered landslide susceptibility in El Salvador based on an Artificial Neural Network model

    Directory of Open Access Journals (Sweden)

    M. J. García-Rodríguez

    2010-06-01

    This paper presents an approach for assessing earthquake-triggered landslide susceptibility using artificial neural networks (ANNs). The computational method used for the training process is a back-propagation learning algorithm. It is applied to El Salvador, one of the most seismically active regions in Central America, where the last severe destructive earthquakes occurred on 13 January 2001 (Mw 7.7) and 13 February 2001 (Mw 6.6). The first one triggered more than 600 landslides (including the most tragic, the Las Colinas landslide) and killed at least 844 people.

    The ANN is designed and programmed to develop landslide susceptibility analysis techniques at a regional scale. This approach uses an inventory of landslides and different parameters of slope instability: slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness. The information obtained from the ANN is then used by a Geographic Information System (GIS) to map the landslide susceptibility. In a previous work, a Logistic Regression (LR) was analysed with the same parameters considered in the ANN as independent variables and the occurrence or non-occurrence of landslides as the dependent variable. As a result, the logistic approach determined the importance of terrain roughness and soil type as key factors within the model. The results of the landslide susceptibility analysis with the ANN are checked using landslide location data. These results show a high concordance between the landslide inventory and the estimated high-susceptibility zones. Finally, a comparative analysis of the ANN and LR models is made. The advantages and disadvantages of both approaches are discussed using Receiver Operating Characteristic (ROC) curves.
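
    The ROC comparison used here reduces, for the area under the curve, to the Mann-Whitney statistic: the probability that a randomly chosen positive cell outranks a randomly chosen negative one. A small self-contained sketch with made-up susceptibility scores (not the study's data):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical susceptibility scores: 1 = landslide cell, 0 = stable cell
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
auc = roc_auc(scores, labels)  # 0.75 for this toy ranking
```

    Computing AUC this way for both the ANN and the LR scores over the same validation cells gives the single-number comparison that the ROC curves summarize graphically.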

  3. Simulation of co-seismic gravity change and deformation of Wenchuan Ms8. 0 earthquake

    Directory of Open Access Journals (Sweden)

    Chongyang Shen

    2010-01-01

    Surface co-seismic gravity changes and displacements caused by the Wenchuan Ms8.0 earthquake are calculated on the basis of half-space dislocation theory and two fault models inverted, respectively, by the Institute of Geophysics, CEA, and the USGS. The results show that: (1) the dislocation consists of dip slip and right-lateral strike slip; (2) the co-seismic gravity change shows a four-quadrant pattern, which is largely controlled by the distribution of the vertical displacements, especially in the near field; (3) the gravity change is generally less than 10 × 10⁻⁸ m s⁻² in the far field, but as high as several hundred × 10⁻⁸ m s⁻² in the near field. These results basically agree with observational results.

  4. Analogue modelling of the rupture process of vulnerable stalagmites in an earthquake simulator

    Science.gov (United States)

    Gribovszki, Katalin; Bokelmann, Götz; Kovács, Károly; Hegymegi, Erika; Esterhazy, Sofi; Mónus, Péter

    2017-04-01

    Earthquakes hit urban centers in Europe infrequently, but occasionally with disastrous effects. Obtaining an unbiased view of seismic hazard is therefore very important. In principle, the best way to test Probabilistic Seismic Hazard Assessments (PSHA) is to compare them with observations that are entirely independent of the procedure used to produce the PSHA models. Arguably, the most valuable information in this context is information on long-term hazard, namely maximum intensities (or magnitudes) occurring over time intervals that are at least as long as a seismic cycle. Long-term information can in principle be gained from intact but vulnerable stalagmites in natural caves. These formations have survived all earthquakes that occurred over thousands of years, depending on the age of the stalagmite. Their "survival" requires that the horizontal ground acceleration never exceeded a certain critical value within that time period. To determine this critical value of horizontal ground acceleration more precisely, we need to understand the failure process of these intact, vulnerable stalagmites: more detailed information on their rupture is required, and we must know how strongly it depends on the shape and material of the stalagmite investigated. Predicting stalagmite failure limits with numerical modelling involves a number of approximations, e.g. in generating a manageable digital model, so it seemed reasonable to investigate the problem by analogue modelling as well. One advantage of analogue modelling is that nearly realistic circumstances can be produced by simple and quick laboratory methods. The model sample bodies were made from different types of concrete or were cut out from real broken stalagmites originating from the investigated caves. These bodies were reduced-scale replicas with shapes similar to the original stalagmites investigated. During the measurements we could change both the shape and

  5. SIMULATION STUDY OF BLACKHOLE ATTACK IN THE MOBILE AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    SHEENU SHARMA

    2009-06-01

    Full Text Available A wireless ad hoc network is a temporary network set up by wireless nodes that usually move randomly and communicate without a network infrastructure. Due to security vulnerabilities of the routing protocols, wireless ad hoc networks may be unprotected against attacks by malicious nodes. In this study we investigated the effects of Blackhole attacks on the network performance. We simulated Blackhole attacks in the Qualnet Simulator and measured the packet loss in the network with and without a blackhole. The simulation was done on AODV (Ad hoc On-Demand Distance Vector) routing protocol. The network performance in the presence of a blackhole is reduced by up to 26%.

  6. How Much Can the Total Aleatory Variability of Empirical Ground Motion Prediction Equations Be Reduced Using Physics-Based Earthquake Simulations?

    Science.gov (United States)

    Jordan, T. H.; Wang, F.; Graves, R. W.; Callaghan, S.; Olsen, K. B.; Cui, Y.; Milner, K. R.; Juve, G.; Vahi, K.; Yu, J.; Deelman, E.; Gill, D.; Maechling, P. J.

    2015-12-01

    Ground motion prediction equations (GMPEs) in common use predict the logarithmic intensity of ground shaking, lnY, as a deterministic value, lnYpred(x), conditioned on a set of explanatory variables x, plus a normally distributed random variable with a standard deviation σT. The latter accounts for the unexplained variability in the ground motion data used to calibrate the GMPE and is typically 0.5-0.7 in natural log units. Reducing this residual or "aleatory" variability is a high priority for seismic hazard analysis, because the probabilities of exceedance at high Y values go up rapidly with σT, adding costs to the seismic design of critical facilities to account for the prediction uncertainty. However, attempts to decrease σT by incorporating more explanatory variables into the GMPEs have been largely unsuccessful (e.g., Strasser et al., SRL, 2009). An alternative is to employ physics-based earthquake simulations that properly account for source directivity, basin effects, directivity-basin coupling, and other 3D complexities. We have explored the theoretical limits of this approach through an analysis of large (>10^8) ensembles of 3D synthetic seismograms generated for the Los Angeles region by SCEC's CyberShake project using the new tool of averaging-based factorization (ABF, Wang & Jordan, BSSA, 2014). The residual variance obtained by applying GMPEs to the CyberShake dataset matches the frequency-dependence of σT obtained for the GMPE calibration dataset. The ABF analysis allows us to partition this variance into uncorrelated components representing source, path, and site effects. We show that simulations can potentially reduce σT by about one-third, which could lower the exceedance probabilities for high hazard levels at fixed x by orders of magnitude. Realizing this gain in forecasting probability would have a broad impact on risk-reduction strategies, especially for critical facilities such as large dams, nuclear power plants, and energy transportation
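
    The leverage that σT has on exceedance probabilities can be checked in a few lines (hypothetical median and threshold values, assuming only the lognormal GMPE form described above):

```python
import math

def exceedance_prob(ln_y_pred, sigma_t, y):
    """P(Y > y) when lnY ~ N(lnYpred, sigma_T^2), i.e. the GMPE form above."""
    z = (math.log(y) - ln_y_pred) / sigma_t
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical numbers: median PGA 0.1 g, design threshold 1.0 g
p_full = exceedance_prob(math.log(0.1), 0.6, 1.0)  # sigma_T = 0.6
p_low = exceedance_prob(math.log(0.1), 0.4, 1.0)   # sigma_T reduced by one-third
```

    With these illustrative values the reduced-σT exceedance probability is smaller by several orders of magnitude, which is the effect the abstract refers to.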

  7. Adaptive hybrid simulations for multiscale stochastic reaction networks

    International Nuclear Information System (INIS)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-01

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods includes hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of an SRN evolve, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest
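
    For context, the exact SSA that such hybrid methods aim to accelerate can be sketched in a few lines (a toy birth-death network, not the authors' implementation):

```python
import random

def ssa_birth_death(k_birth, k_death, x0, t_end, rng):
    """Exact Gillespie SSA for the two-reaction network
    0 -> X (propensity k_birth) and X -> 0 (propensity k_death * X)."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        if a0 == 0.0:
            return x
        t += rng.expovariate(a0)       # exponential waiting time to next event
        if t >= t_end:
            return x
        if rng.random() * a0 < a1:
            x += 1                      # birth reaction fires
        else:
            x -= 1                      # death reaction fires

rng = random.Random(42)
# Stationary distribution is Poisson with mean k_birth / k_death = 10
samples = [ssa_birth_death(10.0, 1.0, 0, 20.0, rng) for _ in range(2000)]
mean_x = sum(samples) / len(samples)
```

    Every reaction event is simulated individually, which is why stiff networks with fast reactions become impractically slow and motivate the hybrid partitioning described above.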

  8. COEL: A Cloud-based Reaction Network Simulator

    Directory of Open Access Journals (Sweden)

    Peter eBanda

    2016-04-01

    Full Text Available Chemical Reaction Networks (CRNs) are a formalism to describe the macroscopic behavior of chemical systems. We introduce COEL, a web- and cloud-based CRN simulation framework that does not require a local installation, runs simulations on a large computational grid, provides reliable database storage, and offers a visually pleasing and intuitive user interface. We present an overview of the underlying software, the technologies, and the main architectural approaches employed. Some of COEL's key features include ODE-based simulations of CRNs and multicompartment reaction networks with rich interaction options, a built-in plotting engine, automatic DNA-strand displacement transformation and visualization, SBML/Octave/Matlab export, and a built-in genetic-algorithm-based optimization toolbox for rate constants. COEL is an open-source project hosted on GitHub (http://dx.doi.org/10.5281/zenodo.46544), which allows interested research groups to deploy it on their own server. Regular users can simply use the web instance at no cost at http://coel-sim.org. The framework is ideally suited for collaborative use in both research and education.

  9. The MeSO-net (Metropolitan Seismic Observation network) confronts the Pacific Coast of Tohoku Earthquake, Japan (Mw 9.0)

    Science.gov (United States)

    Kasahara, K.; Nakagawa, S.; Sakai, S.; Nanjo, K.; Panayotopoulos, Y.; Morita, Y.; Tsuruoka, H.; Kurashimo, E.; Obara, K.; Hirata, N.; Aketagawa, T.; Kimura, H.

    2011-12-01

    In April 2007, we launched the special project for earthquake disaster mitigation in the Tokyo Metropolitan area (Fiscal 2007-2011). As a part of this project, construction of the MeSO-net (Metropolitan Seismic Observation network) has been completed, with about 300 stations deployed mainly at elementary and junior-high schools at a spacing of about 5 km. This results in a highly dense network that covers the metropolitan area. To achieve stable seismic observation with lower noise than surface measurements, the sensors of all stations were installed in boreholes at a depth of about 20 m. The sensors have a wide dynamic range (135 dB) and a wide frequency band (DC to 80 Hz). Data are digitized at 200 Hz sampling and telemetered to the Earthquake Research Institute, University of Tokyo. The MeSO-net, which can detect and locate most earthquakes with magnitudes above 2.5, provides a unique baseline for scientific and engineering research on the Tokyo metropolitan area, as follows. One of the main contributions is to greatly improve the image of the Philippine Sea plate (PSP) (Nakagawa et al., 2010) and to provide an accurate estimation of the plate boundaries between the PSP and the Pacific plate, allowing a clearer understanding of the relation between PSP deformation and M7+ intra-slab earthquake generation. The latest version of the plate model in the metropolitan area, proposed by our project, has also attracted various researchers, who compare it with highly accurate fault-mechanism solutions, repeating earthquakes, etc. Moreover, long-period ground motions generated by the 2011 off the Pacific coast of Tohoku earthquake (Mw 9.0) were observed by the MeSO-net and analyzed to obtain the Array Back-Projection Imaging of this event (Honda et al., 2011). As a result, the overall pattern of the imaged asperities coincides well with the slip distribution determined based on other waveform inversion

  10. A Network Scheduling Model for Distributed Control Simulation

    Science.gov (United States)

    Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot

    2016-01-01

    Distributed engine control is a hardware technology that radically alters the architecture of aircraft engine control systems. It does not, of itself, change the function of control; rather, it addresses the implementation issues of weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures by virtue of analog signaling, moving forward it will be transmitted digitally, in serial fashion, over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller; therefore it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact for an aero turbine engine system simulation.
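
    The scheduling effect described here can be illustrated with a toy model (hypothetical arrival probability, not the paper's methodology): a control loop that must reuse its last received value whenever no fresh sample arrives over the network within a loop interval.

```python
import random

def simulate_scheduled_loop(n_ticks, arrival_prob, rng):
    """Toy model of asynchronous network data in a periodic control loop:
    a sensor sample arrives in any given loop interval only with probability
    arrival_prob, otherwise the controller reuses the stale value.
    Returns the age (in ticks) of the data used at each tick."""
    staleness = []
    age = 0
    for _ in range(n_ticks):
        if rng.random() < arrival_prob:
            age = 0          # fresh sample arrived this interval
        else:
            age += 1         # no arrival: reuse last value, one tick staler
        staleness.append(age)
    return staleness

rng = random.Random(1)
ages = simulate_scheduled_loop(10000, 0.8, rng)
mean_age = sum(ages) / len(ages)   # expected around (1 - p) / p = 0.25 ticks
```

    Even this crude sketch shows why data staleness, not just raw bandwidth, becomes a design parameter once control information is serialized over a network.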

  11. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA). MIU Underground Rock Laboratory support during H-14 involved discrete fracture network (DFN) modelling in support of the Multiple Modelling Project (MMP) and the Long Term Pumping Test (LPT). Golder developed updated DFN models for the MIU site, reflecting updated analyses of fracture data. Golder also developed scripts to support JNC simulations of flow and transport pathways within the MMP. Golder supported JNC participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-14. Tasks 6A and 6B compared safety assessment (PA) and experimental time scale simulations along a pipe transport pathway. Task 6B2 extended Task 6B simulations from 1-D to 2-D. For Task 6B2, Golder carried out single fracture transport simulations on a wide variety of generic heterogeneous 2D fractures using both experimental and safety assessment boundary conditions. The heterogeneous 2D fractures were implemented according to a variety of in-plane heterogeneity patterns. Multiple immobile zones were considered, including stagnant zones, infillings, altered wall rock, and intact rock. During H-14, JNC carried out extensive studies of the disturbed rock zone (DRZ) surrounding repository tunnels and drifts. Golder supported this activity by evaluating the calculation time necessary for simulating a reference heterogeneous DRZ cell network for a range of computational strategies. To support the development of JNC's total system performance assessment (TSPA) strategy, Golder carried out a review of the US DOE Yucca Mountain Project TSPA. 
This

  12. Crowd-Sourced Global Earthquake Early Warning

    Science.gov (United States)

    Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.

    2014-12-01

    Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
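
    One way to combine raw GNSS positions with accelerometer data, as described above, is a standard linear Kalman filter. The sketch below is a generic 1-D illustration with made-up noise levels and a hand-rolled 2x2 covariance update, not the authors' processing chain:

```python
def fuse_gnss_accel(gps_pos, accel, dt, sigma_gps, sigma_acc):
    """1-D Kalman filter sketch: state (position p, velocity v); the accelerometer
    drives the prediction step, noisy GNSS positions provide the correction."""
    p, v = 0.0, 0.0                      # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    q = sigma_acc ** 2                   # accelerometer noise variance
    r = sigma_gps ** 2                   # GNSS measurement noise variance
    estimates = []
    for z, a in zip(gps_pos, accel):
        # Predict: integrate the accelerometer reading
        p, v = p + v * dt + 0.5 * a * dt * dt, v + a * dt
        b0, b1 = 0.5 * dt * dt, dt      # how accel noise enters pos/vel
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q * b0 * b0
        p01 = P[0][1] + dt * P[1][1] + q * b0 * b1
        p10 = P[1][0] + dt * P[1][1] + q * b0 * b1
        p11 = P[1][1] + q * b1 * b1
        # Correct with the GNSS position measurement z
        s = p00 + r                      # innovation variance
        k0, k1 = p00 / s, p10 / s        # Kalman gain
        innov = z - p
        p, v = p + k0 * innov, v + k1 * innov
        P = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
        estimates.append(p)
    return estimates

# Illustrative static case: noiseless 0.3 m offset, zero acceleration
est = fuse_gnss_accel([0.3] * 100, [0.0] * 100,
                      dt=1.0, sigma_gps=0.1, sigma_acc=0.01)
```

    The point of the fusion is exactly what the abstract describes: the accelerometer constrains short-term motion between (and despite) noisy GNSS fixes, lowering the effective displacement detection threshold.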

  13. Coarse-grained simulation of a real-time process control network under peak load

    International Nuclear Information System (INIS)

    George, A.D.; Clapp, N.E. Jr.

    1992-01-01

    This paper presents a simulation study on the real-time process control network proposed for the new ANS reactor system at ORNL. A background discussion is provided on networks, modeling, and simulation, followed by an overview of the ANS process control network, its three peak-load models, and the results of a series of coarse-grained simulation studies carried out on these models using implementations of 802.3, 802.4, and 802.5 standard local area networks

  14. Fracture network modeling and GoldSim simulation support

    International Nuclear Information System (INIS)

    Sugita, Kenichiro; Dershowitz, William

    2004-01-01

    During Heisei-15, Golder Associates provided support for JNC Tokai through discrete fracture network data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aespoe Task Force on Modelling of Groundwater Flow and Transport, and development of methodologies for analysis of repository site characterization strategies and safety assessment. MIU Underground Rock Laboratory support during H-15 involved development of new discrete fracture network (DFN) models for the MIU Shoba-sama Site, in the region of shaft development. Golder developed three DFN models for the site using discrete fracture network, equivalent porous medium (EPM), and nested DFN/EPM approaches. Each of these models was compared based upon criteria established for the multiple modeling project (MMP). Golder supported JNC participation in Tasks 6AB, 6D and 6E of the Aespoe Task Force on Modelling of Groundwater Flow and Transport during H-15. For Task 6AB, Golder implemented an updated microstructural model in GoldSim, and used this updated model to simulate the propagation of uncertainty from experimental to safety assessment time scales, for 5 m scale transport path lengths. Tasks 6D and 6E compared safety assessment (PA) and experimental time scale simulations in a 200 m scale discrete fracture network. For Task 6D, Golder implemented a DFN model using FracMan/PA Works, and determined the sensitivity of solute transport to a range of material property and geometric assumptions. For Task 6E, Golder carried out demonstration FracMan/PA Works transport calculations at a 1 million year time scale, to ensure that task specifications are realistic. The majority of work for Task 6E will be carried out during H-16. During H-15, Golder supported JNC's Total System Performance Assessment (TSPA) strategy by developing technologies for the analysis of precipitant concentration. These approaches were based on the GoldSim precipitant data management features, and were

  15. Simulation of heart rate variability model in a network

    Science.gov (United States)

    Cascaval, Radu C.; D'Apice, Ciro; D'Arienzo, Maria Pia

    2017-07-01

    We consider a 1-D model for the simulation of the blood flow in the cardiovascular system. As inflow condition we consider a model for the aortic valve. The opening and closing of the valve is dynamically determined by the pressure difference between the left ventricular and aortic pressures. At the outflow we impose a peripheral resistance model. To approximate the solution we use a numerical scheme based on the discontinuous Galerkin method. We also consider a variation in heart rate and terminal reflection coefficient due to monitoring of the pressure in the network.

  16. Simulation of strong ground motion parameters of the 1 June 2013 Gulf of Suez earthquake, Egypt

    Directory of Open Access Journals (Sweden)

    Mostafa Toni

    2017-06-01

    The results reveal that the highest values of PGA, PGV, and PGD are observed at Ras Gharib city (epicentral distance ∼ 11 km) as 67 cm/s2, 2.53 cm/s, and 0.45 cm respectively for Zone A, and as 26.5 cm/s2, 1.0 cm/s, and 0.2 cm respectively for Zone B, while the lowest values of PGA, PGV, and PGD are observed at Suez city (epicentral distance ∼ 190 km) as 3.0 cm/s2, 0.2 cm/s, and 0.05 cm respectively for Zone A, and as 1.3 cm/s2, 0.1 cm/s, and 0.024 cm respectively for Zone B. Also the highest PSA values are observed in Ras Gharib city, as 200 cm/s2 and 78 cm/s2 for Zone A and Zone B respectively, while the lowest PSA values are observed in Suez city, as 7 cm/s2 and 3 cm/s2 for Zone A and Zone B respectively. These results show good agreement with the earthquake magnitude, epicentral distances, and site characterizations.

  17. Network Flow Simulation of Fluid Transients in Rocket Propulsion Systems

    Science.gov (United States)

    Bandyopadhyay, Alak; Hamill, Brian; Ramachandran, Narayanan; Majumdar, Alok

    2011-01-01

    Fluid transients, also known as water hammer, can have a significant impact on the design and operation of both spacecraft and launch vehicle propulsion systems. These transients often occur at system activation and shutdown. The pressure rise due to sudden opening and closing of valves of propulsion feed lines can cause serious damage during activation and shutdown of propulsion systems. During activation (valve opening) and shutdown (valve closing), pressure surges must be predicted accurately to ensure structural integrity of the propulsion system fluid network. In the current work, a network flow simulation software (Generalized Fluid System Simulation Program) based on the Finite Volume Method has been used to predict the pressure surges in the feed line due to both valve closing and valve opening, using two separate geometrical configurations. The valve opening pressure surge results are compared with experimental data available in the literature, and the numerical results agree within reasonable accuracy (< 5%) for a wide range of inlet-to-initial pressure ratios. A Fast Fourier Transform is performed on the pressure oscillations to predict the various modal frequencies of the pressure wave. For the shutdown problem, i.e. the valve closing problem, the simulation results are compared with the results of the Method of Characteristics. Most rocket engines experience a longitudinal acceleration, known as "pogo", during the later stage of engine burn. In the shutdown example problem, an accumulator has been used in the feed system to demonstrate the "pogo" mitigation effects in the propellant feed system. The simulation results using GFSSP compared very well with the results of the Method of Characteristics.
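
    The order of magnitude of such valve-closure surges can be hand-checked with the classical Joukowsky relation Δp = ρ·c·Δv for an instantaneous closure (illustrative numbers, not a GFSSP calculation):

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """Joukowsky estimate of the water-hammer pressure rise (Pa) for an
    instantaneous valve closure: delta_p = rho * c * delta_v."""
    return rho * wave_speed * delta_v

# Water (1000 kg/m^3), acoustic speed 1200 m/s in the line, flow stopped from 3 m/s
delta_p = joukowsky_surge(1000.0, 1200.0, 3.0)   # 3.6e6 Pa, i.e. 3.6 MPa
```

    Finite closure times and line friction reduce this upper bound, which is what the finite-volume and Method of Characteristics simulations resolve in detail.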

  18. Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory

    2008-01-01

    We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions, and show that the coarse-grained and the microscopic simulations are in agreement, but the coarse-grained simulations are three orders of magnitude faster.

  19. Simulating Real-Time Aspects of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Liang Yao

    2010-01-01

    Full Text Available Wireless Sensor Networks (WSNs) technology has mainly been used in applications with low-frequency sampling and little computational complexity. Recently, new classes of WSN-based applications with different characteristics are being considered, including process control, industrial automation and visual surveillance. Such new applications usually involve relatively heavy computations and also present real-time requirements such as bounded end-to-end delay and guaranteed Quality of Service. It then becomes necessary to employ proper resource management policies, not only for communication resources but also jointly for computing resources, in the design and development of such WSN-based applications. In this context, simulation can play a critical role, together with analytical models, for validating a system design against the demanded Quality of Service parameters. In this paper, we present RTNS, a publicly available free simulation tool which includes Operating System aspects in wireless distributed applications. RTNS extends the well-known NS-2 simulator with models of the CPU, the Real-Time Operating System and the application tasks, to take into account delays due to computation in addition to communication. We demonstrate the benefits of RTNS by presenting our simulation study for a complex WSN-based multi-view vision system for real-time event detection.

  20. Determination of Design Basis Earthquake ground motion

    International Nuclear Information System (INIS)

    Kato, Muneaki

    1997-01-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, the earthquake response spectrum, and simulated seismic waves. Furthermore, in the appendix of this paper, the seismic safety review for NPPs designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)

  1. Determination of Design Basis Earthquake ground motion

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Muneaki [Japan Atomic Power Co., Tokyo (Japan)

    1997-03-01

    This paper describes the principles of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, the earthquake response spectrum, and simulated seismic waves. Furthermore, in the appendix of this paper, the seismic safety review for NPPs designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)

  2. Coseismic displacement estimate of the 2013 Ms 7.0 Lushan, China earthquake based on the simulation of near-fault displacement field

    Directory of Open Access Journals (Sweden)

    Hong Zhou

    2016-11-01

    Full Text Available Abstract Usually, GPS observation provides direct evidence to estimate coseismic displacement. However, GPS stations are scattered, sparse and cannot provide a detailed distribution of coseismic displacement. Strong ground motion records share the same disadvantages as GPS in estimating coseismic displacement. Estimations from InSAR data can provide displacement distributions; however, the resolution of such methods is limited by the analysis techniques. The paper focuses on estimating the coseismic displacement of the Ms 7.0 Lushan earthquake on April 20, 2013 using a simulation of the wave field based on the elastic wave equation instead of a quasi-static equation. First, the media and source models were constructed by comparing the simulated velocity and the record velocity of the ground motion. Then simulated static displacements were compared with GPS records. Their agreement validates our results. Careful analysis of the distribution of simulated coseismic displacements near the fault reveals more details of the ground motion. For example, an uplift appears on the hanging wall of the fault, rotation is associated with the horizontal displacement, the fault strike and earthquake epicenter provide the main control on motion near the faults, and the motion on the hanging wall is stronger than that on the footwall. These results reveal additional characteristics of the ground motion of the Lushan earthquake.

  3. Wireless Power Transfer Protocols in Sensor Networks: Experiments and Simulations

    Directory of Open Access Journals (Sweden)

    Sotiris Nikoletseas

    2017-04-01

    Full Text Available Rapid technological advances in the domain of Wireless Power Transfer pave the way for novel methods for power management in systems of wireless devices, and recent research works have already started considering algorithmic solutions for tackling emerging problems. In this paper, we investigate the problem of efficient and balanced Wireless Power Transfer in Wireless Sensor Networks. We employ wireless chargers that replenish the energy of network nodes. We propose two protocols that configure the activity of the chargers. One protocol performs wireless charging focused on the charging efficiency, while the other aims at proper balance of the chargers’ residual energy. We conduct detailed experiments using real devices and we validate the experimental results via larger scale simulations. We observe that, in both the experimental evaluation and the evaluation through detailed simulations, both protocols achieve their main goals. The Charging Oriented protocol achieves good charging efficiency throughout the experiment, while the Energy Balancing protocol achieves a uniform distribution of energy within the chargers.

  4. Simulating Quantitative Cellular Responses Using Asynchronous Threshold Boolean Network Ensembles

    Directory of Open Access Journals (Sweden)

    Shah Imran

    2011-07-01

    Full Text Available Abstract Background With increasing knowledge about the potential mechanisms underlying cellular functions, it is becoming feasible to predict the response of biological systems to genetic and environmental perturbations. Due to the lack of homogeneity in living tissues it is difficult to estimate the physiological effect of chemicals, including potential toxicity. Here we investigate a biologically motivated model for estimating tissue level responses by aggregating the behavior of a cell population. We assume that the molecular state of individual cells is independently governed by discrete non-deterministic signaling mechanisms. This results in noisy but highly reproducible aggregate level responses that are consistent with experimental data. Results We developed an asynchronous threshold Boolean network simulation algorithm to model signal transduction in a single cell, and then used an ensemble of these models to estimate the aggregate response across a cell population. Using published data, we derived a putative crosstalk network involving growth factors and cytokines - i.e., Epidermal Growth Factor, Insulin, Insulin-like Growth Factor Type 1, and Tumor Necrosis Factor α - to describe early signaling events in cell proliferation signal transduction. Reproducibility of the modeling technique across ensembles of Boolean networks representing cell populations is investigated. Furthermore, we compare our simulation results to experimental observations of hepatocytes reported in the literature. Conclusion A systematic analysis of the results following differential stimulation of this model by growth factors and cytokines suggests that: (a) using Boolean network ensembles with asynchronous updating provides biologically plausible noisy individual cellular responses with reproducible mean behavior for large cell populations, and (b) with sufficient data our model can estimate the response to different concentrations of extracellular ligands. Our
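
    The ensemble-aggregation idea can be sketched generically as follows (a toy two-node threshold network with made-up weights, not the authors' crosstalk model):

```python
import random

def async_threshold_run(weights, thresholds, init, n_steps, rng):
    """Asynchronous threshold Boolean update: at each step a random node is
    chosen and set to 1 iff its weighted input exceeds its threshold."""
    state = list(init)
    for _ in range(n_steps):
        i = rng.randrange(len(state))
        total = sum(w * state[j] for j, w in weights[i])
        state[i] = 1 if total > thresholds[i] else 0
    return state

def ensemble_fraction_on(weights, thresholds, init, node, n_cells, n_steps, seed=0):
    """Aggregate response: fraction of an ensemble of 'cells' with `node` active."""
    rng = random.Random(seed)
    runs = (async_threshold_run(weights, thresholds, init, n_steps, rng)
            for _ in range(n_cells))
    return sum(s[node] for s in runs) / n_cells

# Toy network: node 1 sustains itself; node 0 is switched on by node 1.
weights = [[(1, 1.0)], [(1, 1.0)]]   # node 0 <- node 1, node 1 <- node 1
thresholds = [0.5, 0.5]
frac = ensemble_fraction_on(weights, thresholds, init=[0, 1], node=0,
                            n_cells=500, n_steps=20)
```

    Each run is individually noisy because of the random update order, but the ensemble fraction is highly reproducible, which is the aggregate-level behavior the abstract describes.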

  5. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    Science.gov (United States)

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5  Hz).

  6. A Simulation of Cooperation and Competition in Insurgent Networks

    Science.gov (United States)

    Gabbay, Michael

    2014-03-01

    Insurgencies are often characterized by multiple groups who share a common foe in the national government but have independent organizations which may differ with respect to social identities, ideologies, strategies, and their use of violence. These groups may cooperate in various ways such as conducting joint attacks, pooling resources, and establishing formal alliances or mergers. However, they may also compete with each other over popular support, recruitment of fighters, funding, allies, and ultimately military dominance. A network coevolution model of insurgent factional dynamics is presented which accounts for factors driving cooperation and competition. The model is formulated as a system of coupled ODEs which evolves network ties between insurgent groups along with group policies concerning the targets of violence. Simulation results are presented showing sharp transitions in network structure as model parameters are varied. Connections are drawn between the model results and empirical data from the Iraqi insurgency. This work was supported by the Office of Naval Research under grant N00014-13-1-0381.
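
A coupled-ODE coevolution of network ties and group policies can be sketched schematically; the right-hand sides below (tie strength relaxing toward a function of policy similarity, policies pulled toward those of allies) are illustrative stand-ins, not the equations of the paper:

```python
import math

def simulate(policies, coupling=2.0, dt=0.01, steps=2000):
    """Euler integration of a schematic tie/policy coevolution:
    dA_ij/dt = -A_ij + tanh(coupling * (1 - |p_i - p_j|))
    dp_i/dt  = sum_j A_ij * (p_j - p_i)
    Returns final policies and the tie-strength matrix (values in (-1, 1))."""
    n = len(policies)
    p = list(policies)
    a = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        new_a = [[a[i][j] + dt * (-a[i][j]
                                  + math.tanh(coupling * (1 - abs(p[i] - p[j]))))
                  for j in range(n)] for i in range(n)]
        new_p = [p[i] + dt * sum(a[i][j] * (p[j] - p[i]) for j in range(n))
                 for i in range(n)]
        a, p = new_a, new_p
    return p, a
```

Sweeping `coupling` (or making the similarity function repulsive for distant policies) is the kind of parameter variation under which such models exhibit sharp transitions in network structure.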

  7. Climate and change: simulating flooding impacts on urban transport network

    Science.gov (United States)

    Pregnolato, Maria; Ford, Alistair; Dawson, Richard

    2015-04-01

    National-scale climate projections indicate that in the future there will be hotter and drier summers, warmer and wetter winters, together with rising sea levels. The frequency of extreme weather events is expected to increase, causing severe damage to the built environment and disruption of infrastructures (Dawson, 2007), whilst population growth and changed demographics are placing new demands on urban infrastructure. It is therefore essential to ensure infrastructure networks are robust to these changes. This research addresses these challenges by focussing on the development of probabilistic tools for managing risk by modelling urban transport networks within the context of extreme weather events. This paper presents a methodology to investigate the impacts of extreme weather events on the urban environment, in particular infrastructure networks, through a combination of climate simulations and spatial representations. By overlaying spatial data on hazard thresholds from a flood model and a flood safety function, mitigated by potential adaptation strategies, different levels of disruption to commuting journeys on road networks are evaluated. The method follows the Catastrophe Modelling approach and consists of a spatial model combining deterministic loss models and probabilistic risk assessment techniques. It can be applied to present conditions as well as to uncertain future scenarios, allowing the examination of the impacts alongside socio-economic and climate changes. The hazard is determined by simulating free surface water flooding with the software CityCAT (Glenis et al., 2013). The outputs are overlaid on the spatial locations of a simple network model in GIS, which uses journey-to-work (JTW) observations, supplemented with speed and capacity information. 
To calculate the disruptive effect of flooding on transport networks, a function relating water depth to safe driving speed has been developed by combining data from experimental reports (Morris et
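
A depth-to-safe-speed function and the resulting link delay can be sketched as follows; the quadratic curve and the 300 mm impassability cutoff are invented coefficients for illustration, not the fitted function from the experimental reports:

```python
def safe_speed_kmh(depth_mm, v_max=80.0):
    """Illustrative depth-disruption curve: safe driving speed falls with
    standing-water depth and reaches zero at an assumed 300 mm cutoff."""
    if depth_mm <= 0:
        return v_max
    if depth_mm >= 300:
        return 0.0
    return v_max * (1 - depth_mm / 300.0) ** 2

def extra_travel_time(length_km, depth_mm, v_max=80.0):
    """Delay on one flooded link relative to free flow, in hours
    (infinite if the link is impassable)."""
    v = safe_speed_kmh(depth_mm, v_max)
    if v == 0.0:
        return float("inf")
    return length_km / v - length_km / v_max
```

Summing such per-link delays over the journey-to-work flows crossing flooded cells is the basic mechanism by which flood depth maps translate into network-level disruption in this kind of analysis.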

  8. Earthquake early warning performance tests for Istanbul

    Science.gov (United States)

    Köhler, N.; Wenzel, F.; Erdik, M.; Alcik, H.; Mert, A.

    2009-04-01

    The Marmara Region is the most densely populated region in Turkey. The greater area of the mega-city Istanbul is home to about 14 million people. The city is located in the direct vicinity of the Main Marmara Fault, a dextral strike-slip fault system crossing the Sea of Marmara that forms the western continuation of the North Anatolian Fault [Le Pichon et al., 2001]. Its closest distance to the city of Istanbul ranges between 15 and 20 km. Recent estimates by Parsons [2004] give a probability of more than 40% that an M ≥ 7 earthquake will affect Istanbul within the next 30 years. Due to this high seismic risk, earthquake early warning is an important task in disaster management and seismic risk reduction, increasing the safety of the millions of people living in and around Istanbul and reducing economic losses. The Istanbul Earthquake Rapid Response and Early Warning System (IERREWS) includes a set of 10 strong-motion sensors used for early warning, installed between Istanbul and the Main Marmara Fault. The system works on the exceedance of amplitude thresholds, with three alarm levels defined at three different thresholds [Erdik et al., 2003]. In the context of the research project EDIM (Earthquake Disaster Information System for the Marmara Region, Turkey), the early warning network is planned to be extended by an additional set of 10 strong-motion sensors installed around the Sea of Marmara to include the greater Marmara Region in the early warning process. We present performance tests of both the existing and the planned extended early warning network using ground motion simulations for 280 synthetic earthquakes along the Main Marmara Fault with moment magnitudes between 4.5 and 7.5. We apply the amplitude thresholds of IERREWS, as well as, for comparison, an early warning algorithm based on artificial neural networks which estimates the hypocentral location and magnitude of the occurring earthquake. The estimates are updated continuously with
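
Threshold-based alerting of this kind can be sketched in a few lines; the three threshold values and the two-station voting rule below are illustrative assumptions, not the operational IERREWS settings:

```python
# Three alarm levels triggered when peak ground acceleration exceeds preset
# thresholds at enough stations. Thresholds (fractions of g) and the
# min_stations voting rule are made-up example values.
THRESHOLDS_G = [0.02, 0.05, 0.1]   # alarm levels 1..3

def alarm_level(station_peaks_g, min_stations=2):
    """Highest alarm level whose threshold is exceeded at >= min_stations
    stations; 0 means no alarm."""
    level = 0
    for i, thr in enumerate(THRESHOLDS_G, start=1):
        if sum(1 for p in station_peaks_g if p >= thr) >= min_stations:
            level = i
    return level
```

Requiring exceedance at more than one station is a common way to suppress false alarms from local noise at a single sensor, at the cost of a slightly later alert.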

  9. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2017-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits to design arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  11. Slip distributions in a suite of dynamic simulations of mega-thrust earthquakes and their comparison with observations

    Science.gov (United States)

    Scala, A.; Murphy, S.; Lorito, S.; Festa, G.; Trasatti, E.; Romano, F.; Piatanesi, A.; Di Toro, G.; Spagnuolo, E.

    2016-12-01

    Stochastic source models offer a fast, effective way of producing heterogeneous slip distributions in hazard assessment procedures. Such models are based on amplitude spectra that represent slip across a wide range of faulting types and tectonic environments. In hazard assessment, large synthetic data sets of slip distributions are generated in this way with the assumption that, on average, the probability of slip is uniform across the fault surface. However, for individual faults, spatial complexities may generate preferential slip distributions and nucleation positions. To incorporate such complexities in the hazard assessment procedure we numerically model rupture propagation along a Tohoku-like subduction interface. In a pilot study involving 500 simulations, in which a simplified homogeneous medium, a slip-weakening friction law and stochastic initial stress distributions were applied, we found that the resulting slip distributions cluster based on whether the rupture reached the surface or not and on the magnitude. We will investigate this finding further by testing the sensitivity of the slip distribution to the variability of each parameter (e.g., frictional parameters, elastic media, variations in fault geometry, etc.) used as input to the numerical models. Additionally, we will use a heterogeneous elastic medium parameterized using the seismic profiles available for the Tohoku region. A bi-material slip-weakening friction law is employed, including a non-instantaneous coupling between the frictional strength and the dynamic variation of the normal traction on the fault. The resulting slip distributions will be analysed in terms of size, nucleation location and whether the simulated rupture reaches the surface. Furthermore, we will compare our results with slip distributions from inversions of large subduction-zone earthquakes available in the USGS Finite Fault catalogue and other catalogues.
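
The spectral construction behind such stochastic slip models can be sketched in one dimension: impose a power-law amplitude spectrum (a k^-2 fall-off is a common choice) with uniform random phases and sum the modes. The spectrum shape and grid here are illustrative only, not the method of the abstract:

```python
import math
import random

def stochastic_slip_1d(n=256, decay=2.0, seed=0):
    """1-D random slip profile with amplitude spectrum |A_k| ~ k**-decay
    and uniform random phases, shifted so the minimum slip is zero."""
    rng = random.Random(seed)
    slip = [0.0] * n
    for k in range(1, n // 2):
        amp = k ** (-decay)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        for i in range(n):
            slip[i] += amp * math.cos(2.0 * math.pi * k * i / n + phase)
    m = min(slip)
    return [s - m for s in slip]
```

Re-drawing the phases (different seeds) yields an ensemble of heterogeneous slip maps with the same average spectral properties, which is exactly what hazard studies sample from; 2-D implementations normally use an FFT instead of this explicit mode sum.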

  12. Imaging Simulations for the Korean VLBI Network (KVN)

    Directory of Open Access Journals (Sweden)

    Tae-Hyun Jung

    2005-03-01

    The Korean VLBI Network (KVN) will open a new field of research in astronomy, geodesy and earth science using three new 21 m radio telescopes, expanding our ability to observe the Universe in the millimeter regime. The imaging capability of radio interferometry depends strongly on the antenna configuration, source size, declination and the shape of the target. In this paper, imaging simulations are carried out with the KVN system configuration. Five test images were used: a point source, multi-point sources, uniform spheres of two different sizes relative to the synthesized beam of the KVN, and a Very Large Array (VLA) image of Cygnus A. The declination for the full-time simulation was set to +60 degrees and the observation time range was -6 to +6 hours around transit. Simulations were done at 22 GHz, one of the KVN observation frequencies. All simulations and data reductions were run with the Astronomical Image Processing System (AIPS) software package. As the KVN array has a resolution of about 6 mas (milliarcseconds) at 22 GHz, when the model source is approximately the beam size or smaller, the ratio of peak intensity over RMS is about 10000:1 and 5000:1. When the model source is larger than the beam size, this ratio is much lower, about 115:1 and 34:1, due to the lack of short baselines and the small number of antennas. We compare the coordinates of the model images with those of the cleaned images. The result shows nearly perfect correspondence except in the case of the 12 mas uniform sphere. Therefore, the main astronomical targets for the KVN will be compact sources, for which the KVN will have excellent astrometric performance.

  13. Physics-Based Simulations of Natural Hazards

    Science.gov (United States)

    Schultz, Kasey William

    Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear, multi-scale nature of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales: from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to the number of small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs, allowing us to measure otherwise unobservable statistics. In this dissertation I will discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. 
This section then focuses on a resulting collaboration using Virtual Quake for a detailed
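
The frequency-magnitude relation mentioned in this record is the Gutenberg-Richter law, log10 N = a - b*M; a sketch of the expected counts and the standard Aki maximum-likelihood b-value estimate, with toy a and b values:

```python
import math

def gutenberg_richter_counts(a=5.0, b=1.0, mags=(4.0, 5.0, 6.0, 7.0)):
    """Expected number of events with magnitude >= M from log10 N = a - b*M.
    The a and b values here are illustrative, not fitted to any catalog."""
    return {m: 10 ** (a - b * m) for m in mags}

def b_value_aki(magnitudes, m_min):
    """Aki (1965) maximum-likelihood b-value from a catalog complete
    above m_min: b = log10(e) / (mean(M) - m_min)."""
    ms = [m for m in magnitudes if m >= m_min]
    mean_m = sum(ms) / len(ms)
    return math.log10(math.e) / (mean_m - m_min)
```

With b near 1, each unit increase in magnitude reduces the expected count tenfold, which is why a synthetic catalog spanning tens of thousands of simulated years constrains large-event rates far better than a few hundred years of observations.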

  14. The Red Atrapa Sismos (Quake Catcher Network in Mexico): assessing performance during large and damaging earthquakes.

    Science.gov (United States)

    Dominguez, Luis A.; Yildirim, Battalgazi; Husker, Allen L.; Cochran, Elizabeth S.; Christensen, Carl; Cruz-Atienza, Victor M.

    2015-01-01

    The Quake‐Catcher Network (QCN) is an expanding seismic array made possible by thousands of participants who volunteered time and resources from their computers to record seismic data using low‐cost accelerometers (http://qcn.stanford.edu/; last accessed December 2014). Sensors based on Micro‐Electromechanical Systems (MEMS) technology have rapidly improved over the last few years due to the demand of the private sector (e.g., automobiles, cell phones, and laptops). For strong‐motion applications, low‐cost MEMS accelerometers have promising features due to an increasing resolution and near‐linear phase and amplitude response (Cochran, Lawrence, Christensen, and Jakka, 2009; Clayton et al., 2011; Evans et al., 2014).

  15. Computer simulation of the Blumlein pulse forming network

    International Nuclear Information System (INIS)

    Edwards, C.B.

    1981-03-01

    A computer simulation of the Blumlein pulse-forming network is described. The model is able to treat the case of time varying loads, non-zero conductor resistance, and switch closure effects as exhibited by real systems employing non-ohmic loads such as field-emission vacuum diodes in which the impedance is strongly time and voltage dependent. The application of the code to various experimental arrangements is discussed, with particular reference to the prediction of the behaviour of the output circuit of 'ELF', the electron beam generator in operation at the Rutherford Laboratory. The output from the code is compared directly with experimentally obtained voltage waveforms applied to the 'ELF' diode. (author)
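
The lumped-circuit time stepping behind such a simulation can be sketched for a generic LC-ladder pulse-forming network discharging into a resistive load. This is a schematic illustration only, not a model of the Blumlein circuit or of the 'ELF' output stage, and all component values are arbitrary:

```python
def simulate_pfn(n=10, L=1e-6, C=1e-9, R=10.0, v0=1e3, dt=1e-10, steps=20000):
    """Explicit time stepping of an n-section LC ladder (series L, shunt C)
    charged to v0 and discharging into a resistive load R. Returns the load
    voltage R*i_load at each step."""
    vc = [v0] * n        # capacitor voltages, all initially charged
    il = [0.0] * n       # inductor currents; il[n-1] flows into the load
    out = []
    for _ in range(steps):
        for i in range(n):
            # L di/dt = (voltage behind) - (voltage ahead); the last
            # inductor sees the load voltage R*il[n-1].
            v_next = vc[i + 1] if i + 1 < n else R * il[n - 1]
            il[i] += dt * (vc[i] - v_next) / L
        for i in range(n):
            # C dv/dt = current in - current out
            i_in = il[i - 1] if i > 0 else 0.0
            vc[i] += dt * (i_in - il[i]) / C
        out.append(R * il[n - 1])
    return out
```

A mismatched load (here R below the ladder's characteristic impedance sqrt(L/C)) produces the staircase of reflections that codes like the one described can reproduce; time-varying load models such as a field-emission diode would replace the constant R with a function of voltage and time.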

  16. Modeling and simulation of networked automation and control systems in Modelica; Modellierung und Simulation vernetzter Automatisierungs- und Regelungssysteme in Modelica

    Energy Technology Data Exchange (ETDEWEB)

    Frey, Georg; Liu, Liu [Universitaet des Saarlandes, Saarbruecken (Germany). Lehrstuhl fuer Automatisierungstechnik

    2009-07-01

    The use of network technologies in automation systems is increasing. The analysis of the resulting systems by simulation requires libraries of models that describe the temporal behavior of automation components and communication networks. In this paper, such a library is presented. It was developed using the modeling language Modelica. The resulting models can be simulated, for example, in the tool Dymola. The application of the presented models in open-loop response time analysis as well as in closed-loop analysis of networked control systems is illustrated by examples. Additionally, an approach to reduce the computational cost in the resulting hybrid simulation is presented. (orig.)

  17. Model and simulation of Krause model in dynamic open network

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

    Modeling opinion evolution is an effective way to reveal how group consensus forms. This study is based on the modeling paradigm of the Hegselmann-Krause (HK) model and analyzes the evolution of multi-agent opinions in dynamic open networks with member mobility. The simulation results show that when the number of agents is constant, the interval over which initial opinions are distributed affects the number of final opinions: the wider the distribution of opinions, the more opinion clusters eventually form. The trust threshold has a decisive effect on the number of clusters, with a negative correlation between the trust threshold and the number of opinion clusters. The higher the connectivity of the initial group, the more easily subjective opinions converge rapidly during the evolution. A more open network is more conducive to unity of opinion; adding or removing agents does not affect the consistency of the group outcome, but it is not conducive to stability.
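
The bounded-confidence dynamics underlying the HK model have a standard form: each agent repeatedly averages the opinions within its trust threshold. A minimal sketch (synchronous updating, scalar opinions, and no member mobility, so the open-network extension of the paper is not modeled here):

```python
def hk_step(opinions, eps):
    """One synchronous HK update: each agent moves to the mean of all
    opinions within its confidence bound eps (including its own)."""
    new = []
    for x in opinions:
        neigh = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(neigh) / len(neigh))
    return new

def hk_run(opinions, eps, steps=100):
    """Iterate until the profile stops changing (or steps runs out)."""
    for _ in range(steps):
        nxt = hk_step(opinions, eps)
        if max(abs(a - b) for a, b in zip(nxt, opinions)) < 1e-9:
            break
        opinions = nxt
    return opinions

def cluster_count(opinions, tol=1e-3):
    """Number of distinct opinion clusters in a converged profile."""
    clusters, last = 0, None
    for x in sorted(opinions):
        if last is None or x - last > tol:
            clusters += 1
        last = x
    return clusters
```

Running this with a large trust threshold yields a single consensus cluster, while a small threshold freezes the initial spread into many clusters, reproducing the negative correlation between trust threshold and cluster count that the abstract reports.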

  18. Numerical tsunami simulations in the western Pacific Ocean and East China Sea from hypothetical M 9 earthquakes along the Nankai trough

    Science.gov (United States)

    Harada, Tomoya; Satake, Kenji; Furumura, Takashi

    2017-04-01

    We carried out tsunami numerical simulations in the western Pacific Ocean and East China Sea in order to examine the behavior of massive tsunamis outside Japan from the hypothetical M 9 tsunami source models along the Nankai Trough proposed by the Cabinet Office of the Japanese government (2012). The distributions of MTHs (maximum tsunami heights for 24 h after the earthquakes) on the east coast of China, the east coast of the Philippine Islands, and the north coast of New Guinea Island show peaks of approximately 1.0-1.7 m, 4.0-7.0 m, and 4.0-5.0 m, respectively. They are significantly higher than those from the 1707 Ho'ei earthquake (M 8.7), the largest earthquake along the Nankai trough in recent Japanese history. Moreover, the MTH distributions vary with the location of the huge slip(s) in the tsunami source models although the three coasts are far from the Nankai trough. Huge slip(s) in the Nankai segment mainly contributes to the MTHs, while huge slip(s) or splay faulting in the Tokai segment hardly affects the MTHs. The tsunami source models were developed in response to the unexpected occurrence of the 2011 Tohoku earthquake, with 11 models along the Nankai trough; the simulated MTHs along the Pacific coasts of western Japan from these models exceed 10 m, with a maximum height of 34.4 m. Tsunami propagation was computed by the finite-difference method of the non-linear long-wave equations with the Coriolis force and bottom friction (Satake, 1995) in the area of 115-155° E and 8° S-40° N. Because the water depth of the East China Sea is shallower than 200 m, the tsunami propagation is likely to be affected by ocean bottom friction. The 30 arc-second gridded bathymetry data provided by the General Bathymetric Chart of the Oceans (GEBCO-2014) are used. To capture the long propagation of the tsunamis we simulated 24 hours after the earthquakes. This study was supported by the "New disaster mitigation research project on Mega thrust earthquakes around Nankai
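
The finite-difference long-wave propagation step used in such simulations can be sketched in one dimension. The sketch below solves the linear long-wave equations on a staggered grid with reflective ends; Coriolis force, bottom friction, the non-linear terms, and real bathymetry are all omitted, and the grid, depth, and initial sea-surface bump are toy values:

```python
import math

def simulate_tsunami(n=200, dx=1000.0, depth=200.0, g=9.81, t_end=60.0):
    """Linear long-wave solver: d(eta)/dt = -depth * du/dx,
    du/dt = -g * d(eta)/dx, with velocity u staggered between eta points."""
    c = math.sqrt(g * depth)           # long-wave speed sqrt(g*h)
    dt = 0.5 * dx / c                  # CFL-stable time step
    eta = [math.exp(-((i - n // 2) * dx / 5e3) ** 2) for i in range(n)]
    u = [0.0] * (n + 1)                # u[0] and u[n] stay 0 (closed ends)
    t = 0.0
    while t < t_end:
        for i in range(1, n):          # momentum equation
            u[i] -= dt * g * (eta[i] - eta[i - 1]) / dx
        for i in range(n):             # continuity equation
            eta[i] -= dt * depth * (u[i + 1] - u[i]) / dx
        t += dt
    return eta
```

The sqrt(g*h) wave speed is also why the shallow (< 200 m) East China Sea matters in the abstract: slower propagation there means longer bottom-friction exposure, which the full non-linear model accounts for.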

  19. Dynamic rupture simulations of the 2016 Mw7.8 Kaikōura earthquake: a cascading multi-fault event

    Science.gov (United States)

    Ulrich, T.; Gabriel, A. A.; Ampuero, J. P.; Xu, W.; Feng, G.

    2017-12-01

    The Mw 7.8 Kaikōura earthquake struck the northern part of New Zealand's South Island roughly one year ago. It ruptured multiple segments of the contractional North Canterbury fault zone and of the Marlborough fault system. Field observations combined with satellite data suggest a rupture path involving partly unmapped faults separated by stepover distances larger than 5 km, the maximum distance usually considered by the latest seismic hazard assessment methods. This might imply distant rupture transfer mechanisms generally not considered in seismic hazard assessment. We present high-resolution 3D dynamic rupture simulations of the Kaikōura earthquake under physically self-consistent initial stress and strength conditions. Our simulations are based on recent finite-fault slip inversions that constrain the fault system geometry and final slip distribution from remote sensing, surface rupture and geodetic data (Xu et al., 2017). We assume a uniform background stress field, without lateral fault stress or strength heterogeneity. We use the open-source software SeisSol (www.seissol.org), which is based on an arbitrary high-order accurate DERivative Discontinuous Galerkin method (ADER-DG). Our method can account for complex fault geometries, high-resolution topography and bathymetry, 3D subsurface structure, off-fault plasticity and modern friction laws. It enables the simulation of seismic wave propagation with high-order accuracy in space and time in complex media. We show that a cascading rupture driven by dynamic triggering can break all fault segments that were involved in this earthquake without mechanically requiring an underlying thrust fault. Our preferred fault geometry connects most fault segments and does not feature stepovers larger than 2 km. The best scenario matches the main macroscopic characteristics of the earthquake, including its apparently slow rupture propagation caused by zigzag cascading, the moment magnitude and the overall inferred slip
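
Slip-weakening friction, one of the friction laws usable in such dynamic rupture codes, has a standard linear form: fault strength drops from a static to a dynamic level over a critical slip distance. A sketch with illustrative parameter values (in Pa and meters, not calibrated to the Kaikōura simulations):

```python
def slip_weakening_strength(slip, tau_s=10e6, tau_d=4e6, d_c=0.5):
    """Linear slip-weakening law: strength falls from tau_s to tau_d
    over the critical slip distance d_c, then stays at tau_d."""
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / d_c
```

The stress drop (tau_s - tau_d) and d_c together set the fracture energy, which controls whether a dynamic rupture can jump a stepover, the key question in this study.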

  20. Predictable earthquakes?

    Science.gov (United States)

    Martini, D.

    2002-12-01

    Summary: A worldwide network of observatories has been continuously monitoring secular changes in the Earth's physical processes, such as the geomagnetic field and the Earth's rotation. The database collected by these observatories gives us a chance to study the temporal behaviour of the Earth's magnetic field and to understand the features of these and related phenomena. The long-term magnetic field data show a close qualitative relation both to the secular change of climate and to the variation in the sunspot cycle. On the other hand, the fluctuations in the Earth's rotation also correlate well with the sunspot and climatic phenomena. This is an important fact because the decade fluctuation in the Earth's rotation depends on those streams in the outer core which produce the long-term variation in the Earth's magnetic field. This suggests that it may not be unrealistic to think of a rather strong interaction between the internal and external magnetic fields of the Earth, and of the mechanical implications of this interaction. An external cause of both the solar and the terrestrial physical processes mentioned is one possible theory that can accommodate and explain these observed facts. The calculated Earth's orbit perpendicular to the ecliptic plane (the so-called Z-direction), and more precisely the 1st time derivative of this orbital motion (Z-acceleration), is directly related to the gravitational perturbations of the (primarily giant) planets. This time series therefore gives us a chance to investigate the dynamical effects of the giant planets on the Earth. We ended up with quite accurate data sets for the time series of the Earth's rotation (we used the so-called dT time series, which measures the cumulative discrepancy of the Earth's rotation in time; the length of day [l.o.d.], which is the 1st time derivative of dT; and the 1st time derivative of l.o.d., which is related to the rotational

  1. Impacts of Social Network on Therapeutic Community Participation: A Follow-up Survey of Data Gathered after Ya'an Earthquake.

    Science.gov (United States)

    Li, Zhichao; Chen, Yao; Suo, Liming

    2015-01-01

    In recent years, natural disasters and the accompanying health risks have become more frequent, and rehabilitation work has become an important part of government performance. On one hand, social networks play an important role in participants' therapeutic community participation and physical and mental recovery. On the other hand, therapeutic communities with widespread participation can also contribute to community recovery after disaster. This paper describes a field study in an earthquake-stricken area of Ya'an. A set of 3-stage follow-up data was obtained concerning the villagers' participation in therapeutic communities, social network status, demographic background, and other factors. The hierarchical linear model (HLM) method was used to investigate the determinants of social network on therapeutic community participation. First, social networks have significant impacts on the annual changes in therapeutic community participation. Second, there were obvious differences in education between groups mobilized by self-organizations and by local government; however, both exerted their mobilization force through acquaintance networks. Third, villagers' local cadre networks negatively influenced the activities of self-organized therapeutic communities, while positively influencing government-organized therapeutic activities. This paper suggests that relevant government departments need to focus more on the reconstruction and cultivation of villagers' social networks and social capital in the process of post-disaster recovery. These findings contribute to better understandings of how social networks influence therapeutic community participation, and of what role local government can play in post-disaster recovery and public health improvement after natural disasters.

  2. Using elements of game engine architecture to simulate sensor networks for eldercare.

    Science.gov (United States)

    Godsey, Chad; Skubic, Marjorie

    2009-01-01

    When dealing with a real-time sensor network, building test data with a known ground truth is a tedious and cumbersome task. In order to quickly build test data for such a network, a simulation solution is a viable option. Simulation environments have a close relationship with computer game environments, and therefore there is much to be learned from game engine design. In this paper, we present our vision for a simulated in-home sensor network and describe ongoing work on using elements of game engines for building the simulator. Validation results are included to show agreement of the motion sensor simulation with the physical environment.

  3. Simulation technologies in networking and communications selecting the best tool for the test

    CERN Document Server

    Pathan, Al-Sakib Khan; Khan, Shafiullah

    2014-01-01

    Simulation is a widely used mechanism for validating the theoretical models of networking and communication systems. Although the claims made based on simulations are considered to be reliable, how reliable they really are is best determined with real-world implementation trials. Simulation Technologies in Networking and Communications: Selecting the Best Tool for the Test addresses the spectrum of issues regarding the different mechanisms related to simulation technologies in networking and communications fields. Focusing on the practice of simulation testing instead of the theory, it presents

  4. The Virtual Brain: a simulator of primate brain network dynamics.

    Science.gov (United States)

    Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2013-01-01

    We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.

  5. Waveform through the subducted plate under the Tokyo region in Japan observed by an ultra-dense seismic network (MeSO-net) and seismic activity around the mega-thrust earthquake area

    Science.gov (United States)

    Sakai, S.; Kasahara, K.; Nanjo, K.; Nakagawa, S.; Tsuruoka, H.; Morita, Y.; Kato, A.; Iidaka, T.; Hirata, N.; Tanada, T.; Obara, K.; Sekine, S.; Kurashimo, E.

    2009-12-01

    In central Japan, the Philippine Sea plate (PSP) subducts beneath the Tokyo metropolitan area, the Kanto region, where it causes mega-thrust earthquakes, such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9), which had 105,000 fatalities. An M7 or greater earthquake in this region at present has high potential to produce devastating loss of life and property with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates the next great earthquake will cause 11,000 fatalities and 112 trillion yen (1 trillion US$) economic loss. This great earthquake is estimated by the Earthquake Research Committee of Japan to occur with a probability of 70% within 30 years. We started the Special Project for Earthquake Disaster Mitigation in Tokyo Metropolitan area (2007-2012). Under this project, construction of the Metropolitan Seismic Observation network (MeSO-net), which will consist of about 400 observation sites, was started [Kasahara et al., 2008; Nakagawa et al., 2008]; we now have 178 observation sites. Waveform correlation between stations is high because stations are deployed at about 2 km intervals, so later phases are easily identified even though artificial noise is very large. We also discuss the relation between deformation of the PSP and intra-plate M7+ earthquakes: the PSP is subducting beneath the Honshu arc and also colliding with the Pacific plate. The subduction and collision both contribute to the active seismicity in the Kanto region. We present a high-resolution tomographic image showing a low-velocity zone which suggests a possible internal failure of the plate, a source region of M7+ intra-plate earthquakes. Our study will contribute a new assessment of the seismic hazard in the metropolitan area of Japan. Acknowledgement: This study was supported by the Earthquake Research Institute cooperative research program.

  6. Impact of the 2011 Tohoku-oki earthquake on the Tokyo metropolitan area observed by the Metropolitan Seismic Observation network (MeSO-net)

    Science.gov (United States)

    Hirata, N.; Hayashi, H.; Nakagawa, S.; Sakai, S.; Honda, R.; Kasahara, K.; Obara, K.; Aketagawa, T.; Kimura, H.; Sato, H.; Okaya, D. A.

    2011-12-01

    The March 11, 2011 Tohoku-oki earthquake had a great impact on the Tokyo metropolitan area, in both its seismological aspects and its seismic risk management, even though Tokyo is located 340 km from the epicenter. The event generated very strong ground motion in the metropolitan area and resulted in severe liquefaction in many places in the Kanto district. National and local governments have started to discuss countermeasures for possible seismic risks in the area, taking into account what they learned from the Tohoku-oki event, which was much larger than any previously experienced in Japan. Risk mitigation strategy for the next great earthquake caused by the Philippine Sea plate (PSP) subducting beneath the Tokyo metropolitan area is of major concern, because that plate caused past mega-thrust earthquakes such as the 1703 Genroku earthquake (M8.0) and the 1923 Kanto earthquake (M7.9). An M7 or greater (M7+) earthquake in this area at present has high potential to produce devastating loss of life and property, with even greater global economic repercussions. The Central Disaster Management Council of Japan estimates that an M7+ earthquake would cause 11,000 fatalities and 112 trillion yen (about 1 trillion US$) of economic loss. In order to mitigate disaster for greater Tokyo, the Special Project for Earthquake Disaster Mitigation in the Tokyo Metropolitan Area was launched in collaboration with scientists, engineers, and social scientists at institutions nationwide. We will discuss the main results obtained in the respective fields, which have been integrated to improve the information available for seismic risk mitigation strategy assessment in the Tokyo metropolitan area; the project has been much improved since the Tohoku event. In order to image the seismic structure beneath the metropolitan Tokyo area we have developed the Metropolitan Seismic Observation network (MeSO-net; Hirata et al., 2009) and have installed 296 seismic stations every few km (Kasahara et al., 2011). We conducted seismic

  7. The continuous automatic monitoring network installed in Tuscany (Italy) since late 2002, to study earthquake precursory phenomena

    Science.gov (United States)

    Pierotti, Lisa; Cioni, Roberto

    2010-05-01

    Since late 2002, a continuous automatic monitoring network (CAMN) has been designed, built, and installed in Tuscany (Italy) in order to investigate and define the geochemical response of the aquifers to local seismic activity. The purpose of the investigation was to identify possible earthquake precursors. The CAMN consists of two groups of five measurement stations each. The first group was installed in the Serchio and Magra grabens (Garfagnana and Lunigiana valleys, northern Tuscany), and the second in the area of Mt. Amiata (southern Tuscany), an extinct volcano. The Garfagnana, Lunigiana, and Mt. Amiata regions belong to the inner zone of the Northern Apennine fold-and-thrust belt. This zone has been involved in post-collision extensional tectonics since the Upper Miocene-Pliocene. This tectonic activity has produced horst and graben structures, oriented from N-S to NW-SE, that are offset by NE-SW transfer systems. Both Garfagnana (Serchio graben) and Lunigiana (Magra graben) belong to the innermost sector of the belt, where the seismic sources responsible for the strongest earthquakes of the Northern Apennines are located (e.g. the M=6.5 earthquake of September 1920). The extensional processes in southern Tuscany have been accompanied by magmatic activity since the Upper Miocene, producing effusive and intrusive products traditionally attributed to the so-called Tuscan Magmatic Province. Mt. Amiata, whose magmatic activity ceased about 0.3 My ago, belongs to the extensional Tyrrhenian sector, which is characterized by high heat flow and crustal thinning. The whole zone is characterized by widespread but moderate seismicity (the maximum recorded magnitude was 5.1, with epicentre at Piancastagnaio, 1919). The extensional regime in both the Garfagnana-Lunigiana and Mt. Amiata areas is confirmed by the focal mechanisms of recent earthquakes. An essential phase of the monitoring activities has been the selection of suitable sites for the installation of

  8. Performance of Irikura's Recipe Rupture Model Generator in Earthquake Ground Motion Simulations as Implemented in the Graves and Pitarka Hybrid Approach.

    Energy Technology Data Exchange (ETDEWEB)

    Pitarka, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-22

    We analyzed the performance of the Irikura and Miyake (2011) (IM2011) asperity-based kinematic rupture model generator, as implemented in the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2010), for simulating ground motion from crustal earthquakes of intermediate size. The primary objective of our study is to investigate the transportability of IM2011 into the framework used by the Southern California Earthquake Center broadband simulation platform. In our analysis, we performed broadband (0-20 Hz) ground motion simulations for a suite of M6.7 crustal scenario earthquakes in a hard rock seismic velocity structure using rupture models produced with both IM2011 and the rupture generation method of Graves and Pitarka (2016) (GP2016). The level of simulated ground motions for the two approaches compares favorably with median estimates obtained from the 2014 Next Generation Attenuation-West2 Project (NGA-West2) ground-motion prediction equations (GMPEs) over the frequency band 0.1-10 Hz and for distances out to 22 km from the fault. We also found that, compared to GP2016, IM2011 generates ground motion with larger variability, particularly at near-fault distances (<12 km) and at long periods (>1 s). For this specific scenario, the largest systematic difference in ground motion level between the two approaches occurs in the period band 1-3 s, where the IM2011 motions are about 20-30% lower than those for GP2016. We found that increasing the rupture speed by 20% on the asperities in IM2011 produced ground motions in the 1-3 s band that are in much closer agreement with the GMPE medians and similar to those obtained with GP2016. The potential implications of this modification for other rupture mechanisms and magnitudes are not yet fully understood, and this topic is the subject of ongoing study.

  9. Simulation of a Dispersive Tsunami due to the 2016 El Salvador-Nicaragua Outer-Rise Earthquake (Mw 6.9)

    Science.gov (United States)

    Tanioka, Yuichiro; Ramirez, Amilcar Geovanny Cabrera; Yamanaka, Yusuke

    2018-01-01

    The 2016 El Salvador-Nicaragua outer-rise earthquake (Mw 6.9) generated a small tsunami observed at the ocean-bottom pressure sensor DART 32411 in the Pacific Ocean off Central America. The observed dispersive tsunami is well simulated using the linear Boussinesq equations. From the dispersive character of the tsunami waveform, the fault length and width of the outer-rise event are estimated to be 30 km and 15 km, respectively. The estimated seismic moment of 3.16 × 10^19 Nm is the same as the estimate in the Global CMT catalog. The dispersive character of the tsunami in the deep ocean caused by the 2016 outer-rise El Salvador-Nicaragua earthquake could constrain the fault size and the slip amount, or the seismic moment, of the event.
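As a quick consistency check on numbers like these, seismic moment and moment magnitude are related by the standard Hanks-Kanamori formula Mw = (2/3)(log10 M0 − 9.1), and the moment itself is M0 = rigidity × fault area × average slip. A minimal sketch (the rigidity value is an illustrative assumption, not taken from the abstract):

```python
import math

def moment_magnitude(m0_newton_meters: float) -> float:
    """Convert seismic moment M0 (N·m) to moment magnitude Mw
    via the standard Hanks-Kanamori relation."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def seismic_moment(mu_pa: float, length_m: float, width_m: float, slip_m: float) -> float:
    """M0 = rigidity * fault area * average slip."""
    return mu_pa * length_m * width_m * slip_m

# Fault dimensions from the abstract: L = 30 km, W = 15 km.  The rigidity
# below is an assumed typical value; the slip is backed out so that the
# moment matches the abstract's M0 = 3.16e19 N·m.
mu = 7.0e10                                  # Pa (assumption)
slip = 3.16e19 / (mu * 30e3 * 15e3)          # implied average slip (m)
m0 = seismic_moment(mu, 30e3, 15e3, slip)
print(round(moment_magnitude(m0), 2))        # → 6.93, i.e. Mw ≈ 6.9
```

This reproduces the Mw 6.9 quoted for the event, confirming the moment and magnitude in the abstract are mutually consistent.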

  10. Satellite-matrix-switched, time-division-multiple-access network simulator

    Science.gov (United States)

    Ivancic, William D.; Andro, Monty; Nagy, Lawrence A.; Budinger, James M.; Shalkhauser, Mary Jo

    A versatile experimental Ka-band network simulator has been implemented at the NASA Lewis Research Center to demonstrate and evaluate a satellite-matrix-switched, time-division-multiple-access (SMS-TDMA) network and to evaluate future digital ground terminals and radiofrequency (RF) components. The simulator was implemented by using proof-of-concept RF components developed under NASA contracts and digital ground terminal and link simulation hardware developed at Lewis. This simulator provides many unique capabilities such as satellite range delay and variation simulation and rain fade simulation. All network parameters (e.g., signal-to-noise ratio, satellite range variation rate, burst density, and rain fade) are controlled and monitored by a central computer. The simulator is presently configured as a three-ground-terminal SMS-TDMA network.

  11. Contribution of the Surface and Down-Hole Seismic Networks to the Location of Earthquakes at the Soultz-sous-Forêts Geothermal Site (France)

    Science.gov (United States)

    Kinnaert, X.; Gaucher, E.; Kohl, T.; Achauer, U.

    2018-03-01

    Seismicity induced in geo-reservoirs can be a valuable observation to image fractured reservoirs, to characterize hydrological properties, or to mitigate seismic hazard. However, this requires accurate location of the seismicity, which is nowadays an important seismological task in reservoir engineering. The earthquake location (determination of the hypocentres) depends on the model used to represent the medium in which the seismic waves propagate and on the seismic monitoring network. In this work, location uncertainties and location inaccuracies are modeled to investigate the impact of several parameters on the determination of the hypocentres: the picking uncertainty, the numerical precision of picked arrival times, a velocity perturbation and the seismic network configuration. The method is applied to the geothermal site of Soultz-sous-Forêts, which is located in the Upper Rhine Graben (France) and which was subject to detailed scientific investigations. We focus on a massive water injection performed in the year 2000 to enhance the productivity of the well GPK2 in the granitic basement, at approximately 5 km depth, and which induced more than 7000 earthquakes recorded by down-hole and surface seismic networks. We compare the location errors obtained from the joint or the separate use of the down-hole and surface networks. Besides the quantification of location uncertainties caused by picking uncertainties, the impact of the numerical precision of the picked arrival times as provided in a reference catalogue is investigated. The velocity model is also modified to mimic possible effects of a massive water injection and to evaluate its impact on earthquake hypocentres. It is shown that the use of the down-hole network in addition to the surface network provides smaller location uncertainties but can also lead to larger inaccuracies. Hence, location uncertainties would not be well representative of the location errors and interpretation of the seismicity
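The influence of network geometry on hypocentre determination can be illustrated with a toy location scheme. The sketch below uses a homogeneous velocity model, invented station coordinates (including one down-hole sensor), and noise-free picks, none of which come from the Soultz-sous-Forêts study; it simply grid-searches for the point minimising the RMS travel-time residual:

```python
import itertools, math

V = 5.0  # km/s, homogeneous P-wave speed (illustrative assumption)

# Four surface stations plus one down-hole sensor (coordinates in km).
stations = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (10, 10, 0), (5, 5, -4)]
true_src = (4.0, 6.0, -5.0)

def travel_time(src, sta):
    return math.dist(src, sta) / V

picks = [travel_time(true_src, s) for s in stations]  # noise-free arrival times

# Exhaustive grid search for the hypocentre minimising the RMS pick residual.
best, best_rms = None, float("inf")
grid = [x * 1.0 for x in range(0, 11)]
depths = [-z * 1.0 for z in range(0, 11)]
for x, y, z in itertools.product(grid, grid, depths):
    rms = math.sqrt(sum((travel_time((x, y, z), s) - p) ** 2
                        for s, p in zip(stations, picks)) / len(stations))
    if rms < best_rms:
        best, best_rms = (x, y, z), rms
print(best)  # → (4.0, 6.0, -5.0): the true source is recovered exactly
```

Adding picking noise or perturbing V in this toy setup shows the depth trade-offs that the abstract quantifies: surface-only geometries degrade depth resolution much faster than configurations that include the down-hole sensor.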

  12. Future planning: default network activity couples with frontoparietal control network and reward-processing regions during process and outcome simulations.

    Science.gov (United States)

    Gerlach, Kathy D; Spreng, R Nathan; Madore, Kevin P; Schacter, Daniel L

    2014-12-01

    We spend much of our daily lives imagining how we can reach future goals and what will happen when we attain them. Despite the prevalence of such goal-directed simulations, neuroimaging studies on planning have mainly focused on executive processes in the frontal lobe. This experiment examined the neural basis of process simulations, during which participants imagined themselves going through steps toward attaining a goal, and outcome simulations, during which participants imagined events they associated with achieving a goal. In the scanner, participants engaged in these simulation tasks and an odd/even control task. We hypothesized that process simulations would recruit default and frontoparietal control network regions, and that outcome simulations, which allow us to anticipate the affective consequences of achieving goals, would recruit default and reward-processing regions. Our analysis of brain activity that covaried with process and outcome simulations confirmed these hypotheses. A functional connectivity analysis with posterior cingulate, dorsolateral prefrontal cortex and anterior inferior parietal lobule seeds showed that their activity was correlated during process simulations and associated with a distributed network of default and frontoparietal control network regions. During outcome simulations, medial prefrontal cortex and amygdala seeds covaried together and formed a functional network with default and reward-processing regions.

  13. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  14. Discrimination of Cylinders with Different Wall Thicknesses using Neural Networks and Simulated Dolphin Sonar Signals

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan

    1999-01-01

    This paper describes a method integrating neural networks into a system for recognizing underwater objects. The system is based on a combination of simulated dolphin sonar signals, simulated auditory filters and artificial neural networks. The system is tested on a cylinder wall thickness...

  15. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  16. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the performance degradation of the Cluster II battery system. (The Cluster II mission comprises four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise battery lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five silver-cadmium batteries on board Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, which is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections for new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg
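The single-hidden-layer curve-fitting idea described above can be sketched in a few lines. This toy 1-4-1 tanh network fits a sine curve with plain stochastic gradient descent; the project itself used a commercial toolbox and Levenberg-Marquardt-style training, and the architecture, learning rate, and data here are purely illustrative:

```python
import math, random

random.seed(0)

# Tiny 1-input, 4-hidden-unit, 1-output tanh network: a stand-in for the
# battery-degradation input/target mapping discussed in the abstract.
H, LR = 4, 0.05
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H                                   # hidden biases
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0                                         # output bias

# Illustrative nonlinear target: y = sin(pi * x) on [-1, 1].
data = [(x / 10.0, math.sin(x / 10.0 * math.pi)) for x in range(-10, 11)]

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, b2 + sum(w2[j] * h[j] for j in range(H))

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss0 = mse()
for _ in range(2000):                 # plain SGD epochs
    for x, y in data:
        h, out = forward(x)
        err = out - y                 # d(loss)/d(out), factor 2 folded into LR
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= LR * err * h[j]
            w1[j] -= LR * dh * x
            b1[j] -= LR * dh
        b2 -= LR * err
print(mse() < loss0)                  # training reduces the fit error
```

The "expert" quality of a trained net in the abstract corresponds to `forward` generalising to inputs between the training points.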

  17. Modelisation et simulation d'un PON (Passive Optical Network) base ...

    African Journals Online (AJOL)

    English Title: Modeling and simulation of a PON (Passive Optical Network) based on hybrid WDM/TDM technology. English Abstract: This work is part of an effort to design a model combining WDM and TDM multiplexing in an optical network of the PON (Passive Optical Network) type, in order to satisfy the high bit ...

  18. Far field tsunami simulations of the 1755 Lisbon earthquake: Implications for tsunami hazard to the U.S. East Coast and the Caribbean

    Science.gov (United States)

    Barkan, R.; ten Brink, Uri S.; Lin, J.

    2009-01-01

    The great Lisbon earthquake of November 1st, 1755, with an estimated moment magnitude of 8.5-9.0, was the most destructive earthquake in European history. The associated tsunami run-up was reported to have reached 5-15 m along the Portuguese and Moroccan coasts, and the run-up was significant at the Azores and Madeira Island. Run-up reports from a trans-oceanic tsunami were documented in the Caribbean, Brazil and Newfoundland (Canada). No reports were documented along the U.S. East Coast. Many attempts have been made to characterize the 1755 Lisbon earthquake source using geophysical surveys and modeling the near-field earthquake intensity and tsunami effects. Studying far-field effects, as presented in this paper, is advantageous in establishing constraints on source location and strike orientation because trans-oceanic tsunamis are less influenced by near-source bathymetry and are unaffected by triggered submarine landslides at the source. Source location, fault orientation and bathymetry are the main elements governing transatlantic tsunami propagation to sites along the U.S. East Coast, much more than distance from the source and continental shelf width. Results of our far- and near-field tsunami simulations based on relative amplitude comparison limit the earthquake source area to a region located south of the Gorringe Bank in the center of the Horseshoe Plain. This is in contrast with previously suggested sources such as the Marquês de Pombal Fault and the Gulf of Cádiz Fault, which are farther east of the Horseshoe Plain. The earthquake was likely to be a thrust event on a fault striking ~345° and dipping to the ENE, as opposed to the suggested earthquake source of the Gorringe Bank Fault, which trends NE-SW. Gorringe Bank, the Madeira-Tore Rise (MTR), and the Azores appear to have acted as topographic scatterers for tsunami energy, shielding most of the U.S. East Coast from the 1755 Lisbon tsunami. Additional simulations to assess tsunami hazard to the U.S. East

  19. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from a kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; moreover, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2); this is the same computational complexity as an exact method, but with a smaller constant. We provide numerical examples to show our results.
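An exact SSA (Gillespie) path simulator of the kind the hybrid scheme builds on can be sketched for a minimal single-species birth-death network; the rates, time horizon, observable g, and sample count below are illustrative, not from the talk:

```python
import random

random.seed(42)

# Birth-death SRN with two reaction channels:  0 --k1--> X,  X --k2--> 0.
k1, k2 = 10.0, 1.0     # birth rate and per-molecule death rate (illustrative)
T, x0 = 5.0, 0         # final time and initial state

def ssa_path(x, t_final):
    """Exact SSA: sample the state X(t_final) of one path."""
    t = 0.0
    while True:
        a1, a2 = k1, k2 * x            # reaction propensities
        a0 = a1 + a2
        t += random.expovariate(a0)    # exponential waiting time to next event
        if t > t_final:
            return x
        if random.random() * a0 < a1:
            x += 1                     # birth fired
        else:
            x -= 1                     # death fired

# Plain Monte Carlo estimator M for E[g(X(T))] with g(x) = x.
n = 2000
estimate = sum(ssa_path(x0, T) for _ in range(n)) / n
print(estimate)   # analytically, E[X(T)] = (k1/k2)(1 - exp(-k2*T)) ≈ 9.93
```

Tau-leaping replaces the one-event-at-a-time loop with Poisson-distributed batches of reactions over fixed time steps, which is what makes the multilevel coupling in the talk efficient.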

  20. Building Infrastructure for Preservation and Publication of Earthquake Engineering Research Data

    Directory of Open Access Journals (Sweden)

    Stanislav Pejša

    2014-10-01

    The objective of this paper is to showcase the progress of the earthquake engineering community during a decade-long effort supported by the National Science Foundation in the George E. Brown Jr. Network for Earthquake Engineering Simulation (NEES). During the four years that NEES network operations have been headquartered at Purdue University, the NEEScomm management team has facilitated an unprecedented cultural change in the ways research is performed in earthquake engineering. NEES has not only played a major role in advancing the cyberinfrastructure required for transformative engineering research, but NEES research outcomes are making an impact by contributing to safer structures throughout the USA and abroad. This paper reflects on some of the developments and initiatives that helped instil change in the ways that the earthquake engineering and tsunami community share and reuse data and collaborate in general.

  1. Tsunami simulations of mega-thrust earthquakes in the Nankai–Tonankai Trough (Japan) based on stochastic rupture scenarios

    KAUST Repository

    Goda, Katsuichiro

    2017-02-23

    In this study, earthquake rupture models for future mega-thrust earthquakes in the Nankai–Tonankai subduction zone are developed by incorporating the main characteristics of inverted source models of the 2011 Tohoku earthquake. These scenario ruptures also account for key features of the national tsunami source model for the Nankai–Tonankai earthquake by the Central Disaster Management Council of the Japanese Government. The source models capture a wide range of realistic slip distributions and kinematic rupture processes, reflecting the current best understanding of what may happen due to a future mega-earthquake in the Nankai–Tonankai Trough, and therefore are useful for conducting probabilistic tsunami hazard and risk analysis. A large suite of scenario rupture models is then used to investigate the variability of tsunami effects in coastal areas, such as offshore tsunami wave heights and onshore inundation depths, due to realistic variations in source characteristics. Such investigations are particularly valuable for tsunami hazard mapping and evacuation planning in municipalities along the Nankai–Tonankai coast.

  2. Hydraulic model and steam flow numerical simulation of the Cerro Prieto geothermal field, Mexico, pipeline network

    International Nuclear Information System (INIS)

    García-Gutiérrez, A.; Hernández, A.F.; Martínez, J.I.; Ceceñas, M.; Ovando, R.; Canchola, I.

    2015-01-01

    The development of a hydraulic model and numerical simulation results of the Cerro Prieto geothermal field (CPGF) steam pipeline network are presented. Cerro Prieto is the largest water-dominant geothermal field in the world and its transportation network has 162 producing wells, connected through a network of pipelines that feeds 13 power-generating plants with an installed capacity of 720 MWe. The network is about 125 km long and has parallel high- and low-pressure networks. Prior to this study, it was suspected that steam flow stagnated or reversed from its planned direction in some segments of the network. Yet, the network complexity and extension complicated the analysis of steam transport for adequate delivery to the power plants. Thus, a hydraulic model of the steam transportation system was developed and implemented numerically using an existing simulator, which allowed the overall analysis of the network in order to quantify the pressure and energy losses as well as the steam flow direction in every part of the network. Numerical results of the high-pressure network were obtained which show that the mean relative differences between measured and simulated pressures and flowrates are less than 10%, which is considered satisfactory. Analysis of results led to the detection of areas of opportunity and to the recommendation of changes for improving steam transport. A main contribution of the present work is having simulated satisfactorily the longest (to our knowledge), and probably the most complex, steam pipeline network in the world. - Highlights: • Extensive literature review of flow models of geothermal steam gathering networks. • Hydraulic model of the Cerro Prieto geothermal field steam network. • Selection and validation of the employed pressure-drop model. • Numerical flow simulation of the world's largest geothermal steam gathering network. • Detailed network pressure drop analysis and mapping of steam flow distribution
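Segment-level pressure losses in such a pipeline network are typically accumulated along each flow path; a minimal Darcy-Weisbach sketch for one segment (the friction factor, geometry, steam density, and velocity below are illustrative assumptions, not CPGF data, and the simulator's actual pressure-drop correlation is not specified in the abstract):

```python
def darcy_pressure_drop(f: float, length_m: float, diam_m: float,
                        rho: float, velocity: float) -> float:
    """Darcy-Weisbach pressure loss (Pa) for one pipe segment:
    dP = f * (L/D) * rho * v^2 / 2."""
    return f * (length_m / diam_m) * rho * velocity ** 2 / 2.0

# Illustrative steam-line segment: 100 m of 0.3 m pipe carrying steam at
# 20 m/s with density 10 kg/m^3 and friction factor 0.02 (all assumed).
dp = darcy_pressure_drop(0.02, 100.0, 0.3, 10.0, 20.0)
print(round(dp))   # → 13333 Pa for this segment

# Losses along a path are additive; a network solver balances flows so
# that path pressure drops are consistent at every junction.
path_segments = [(0.02, 100.0, 0.3), (0.025, 250.0, 0.25)]  # (f, L, D), assumed
total = sum(darcy_pressure_drop(f, L, D, 10.0, 20.0) for f, L, D in path_segments)
```

Stagnant or reversed segments, as suspected at Cerro Prieto, show up in such a model as junctions where the solved flow direction opposes the design direction.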

  3. BioNessie(G) - a grid-enabled biochemical networks simulation environment

    OpenAIRE

    Liu, X; Jiang, J; Ajayi, O; Gu, X; Gilbert, D; Sinnott, R

    2008-01-01

    The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scal...

  4. Nowcasting Earthquakes

    Science.gov (United States)

    Rundle, J. B.; Donnellan, A.; Grant Ludwig, L.; Turcotte, D. L.; Luginbuhl, M.; Gail, G.

    2016-12-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large earthquake cycles in the region. We then compute the earthquake potential score (EPS), defined as the cumulative probability distribution P(n < n(t)) for the number n of small earthquakes in the region. From the count n(t) of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)), a measure of the current level of progress through the earthquake cycle in the defined region at the current time.
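The EPS computation reduces to an empirical cumulative distribution over historical earthquake cycles, evaluated at the small-quake count observed since the last large event. A minimal sketch (the inter-event counts below are synthetic, invented purely for illustration):

```python
# Number of small earthquakes observed in each past large-earthquake cycle
# (synthetic data; a real nowcast would tally these from the catalog).
past_cycle_counts = [120, 95, 210, 150, 80, 175, 130, 220, 60, 140,
                     190, 110, 85, 160, 200, 75, 125, 180, 105, 145]

def eps(current_count: int, cycle_counts: list) -> float:
    """Earthquake potential score: the fraction of historical cycles in
    which fewer small quakes occurred than the current count, i.e. the
    empirical P(n < n(t))."""
    below = sum(1 for n in cycle_counts if n < current_count)
    return below / len(cycle_counts)

# If 150 small quakes have occurred since the last large one:
print(eps(150, past_cycle_counts))   # → 0.6, i.e. further along the cycle
                                     # than 60% of past cycles
```

An EPS near 1 means the region has already accumulated more small events than almost every historical cycle, suggesting it is late in the current cycle; the method needs no physical model beyond the cycle concept, exactly as the abstract states.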

  5. Earthquake Facts

    Science.gov (United States)

    ... estimated 830,000 people. In 1976 another deadly earthquake struck in Tangshan, China, where more than 250,000 people were killed. Florida and North Dakota have the smallest number of earthquakes in the United States. The deepest earthquakes typically ...

  6. A Hybrid Communications Network Simulation-Independent Toolkit

    National Research Council Canada - National Science Library

    Dines, David M

    2008-01-01

    .... Evolving a grand design of the enabling network will require a flexible evaluation platform to try and select the right combination of network strategies and protocols in the realms of topology control and routing...

  7. Icarus: a caching simulator for information centric networking (ICN)

    OpenAIRE

    Saino, L.; Psaras, I.; Pavlou, G.

    2014-01-01

    Information-Centric Networking (ICN) is a new networking paradigm proposing a shift of the main network abstraction from host identifiers to location-agnostic content identifiers. So far, several architectures have been proposed implementing this paradigm shift. A key feature, common to all proposed architectures, is the in-network caching capability, enabled by the location-agnostic, explicit naming of contents. This aspect, in particular, has recently received considerable attention by ...

  8. A New Artificial Network Approach for Membrane Filtration Simulation

    OpenAIRE

    Vivier, J.; Mehablia, A.

    2012-01-01

    To improve traditional neural networks, the present research used the wavelet network, a special feedforward neural network with a single hidden layer supported by the wavelet theory. Prediction performance and efficiency of the proposed network were examined with a published experimental dataset of cross-flow membrane filtration. The dataset was divided into two parts: 70 samples for training data and 330 samples for testing data. Various combinations of transmembrane pressure, filtration...

  9. Modeling the effects of source and path heterogeneity on ground motions of great earthquakes on the Cascadia Subduction Zone Using 3D simulations

    Science.gov (United States)

    Delorey, Andrew; Frankel, Arthur; Liu, Pengcheng; Stephenson, William J.

    2014-01-01

    We ran finite-difference earthquake simulations for great subduction zone earthquakes in Cascadia to model the effects of source and path heterogeneity for the purpose of improving strong-motion predictions. We developed a rupture model for large subduction zone earthquakes based on a k^-2 slip spectrum and scale-dependent rise times by representing the slip distribution as the sum of normal modes of a vibrating membrane. Finite source and path effects were important in determining the distribution of strong motions through the locations of the hypocenter, subevents, and crustal structures like sedimentary basins. Some regions in Cascadia appear to be at greater risk than others during an event due to the geometry of the Cascadia fault zone relative to the coast and populated regions. The southern Oregon coast appears to have increased risk because it is closer to the locked zone of the Cascadia fault than other coastal areas and is also in the path of directivity amplification from any rupture propagating north to south in that part of the subduction zone, and the basins in the Puget Sound area are efficiently amplified by both north- and south-propagating ruptures off the coast of western Washington. We find that the median spectral accelerations at 5 s period from the simulations are similar to those of the Zhao et al. (2006) ground-motion prediction equation, although our simulations predict higher amplitudes near the region of greatest slip and in sedimentary basins such as the Seattle basin.
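The "sum of normal modes" construction of a k^-2 slip distribution can be sketched in one dimension: sinusoidal mode amplitudes roll off with wavenumber (here as 1/(1 + (k/kc)^2), which gives the k^-2 asymptote) with random phases, and the result is shifted and scaled to a non-negative slip with unit mean. Fault length, mode count, and corner wavenumber are illustrative assumptions, not the paper's actual parameters:

```python
import math, random

random.seed(1)

L, N, n_modes = 100.0, 200, 40      # fault length (km), grid points, modes (assumed)
kc = 2 * math.pi / L                # corner wavenumber at the fundamental mode

x = [i * L / N for i in range(N)]
slip = [0.0] * N
for m in range(1, n_modes + 1):
    k = m * math.pi / L                      # wavenumber of mode m
    amp = 1.0 / (1.0 + (k / kc) ** 2)        # -> k^-2 falloff above the corner
    phase = random.uniform(0, 2 * math.pi)   # random phase per mode
    for i in range(N):
        slip[i] += amp * math.sin(k * x[i] + phase)

# Shift and scale so slip is non-negative with a target mean of 1 m.
lo = min(slip)
slip = [s - lo for s in slip]
mean = sum(slip) / N
slip = [s / mean for s in slip]
print(min(slip) >= 0.0, abs(sum(slip) / N - 1.0) < 1e-9)
```

Different random phases yield different realizations with the same spectral falloff, which is how a suite of rupture scenarios with statistically similar heterogeneity is generated.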

  10. 3D Ground-Motion Simulations for Magnitude 9 Earthquakes on the Cascadia Megathrust: Sedimentary Basin Amplification, Rupture Directivity, and Ground-Motion Variability

    Science.gov (United States)

    Frankel, A. D.; Wirth, E. A.; Marafi, N.; Vidale, J. E.; Stephenson, W. J.

    2017-12-01

    We have produced broadband (0-10 Hz) synthetic seismograms for Mw 9 earthquakes on the Cascadia subduction zone by combining synthetics from 3D finite-difference simulations at low frequencies (≤ 1 Hz) and stochastic synthetics at high frequencies (≥ 1 Hz). These synthetic ground motions are being used to evaluate building response, liquefaction, and landslides, as part of the M9 Project of the University of Washington, in collaboration with the U.S. Geological Survey. The kinematic rupture model is composed of high stress drop sub-events with Mw 8, similar to those observed in the Mw 9.0 Tohoku, Japan and Mw 8.8 Maule, Chile earthquakes, superimposed on large background slip with lower slip velocities. The 3D velocity model is based on active and passive-source seismic tomography studies, seismic refraction and reflection surveys, and geologic constraints. The Seattle basin portion of the model has been validated by simulating ground motions from local earthquakes. We have completed 50 3D simulations of Mw 9 earthquakes using a variety of hypocenters, slip distributions, sub-event locations, down-dip limits of rupture, and other parameters. For sites not in deep sedimentary basins, the response spectra of the synthetics for 0.1-6.0 s are similar, on average, to the values from the BC Hydro ground motion prediction equations (GMPE). For periods of 7-10 s, the synthetic response spectra exceed these GMPE, partially due to the shallow dip of the plate interface. We find large amplification factors of 2-5 for response spectra at periods of 1-10 s for locations in the Seattle and Tacoma basins, relative to sites outside the basins. This amplification depends on the direction of incoming waves and rupture directivity. The basin amplification is caused by surface waves generated at basin edges from incoming S-waves, as well as amplification and focusing of S-waves and surface waves by the 3D basin structure. The inter-event standard deviation of response spectral

  11. Simulation of broad-band strong ground motion for a hypothetical Mw 7.1 earthquake on the Enriquillo Fault in Haiti

    Science.gov (United States)

    Douilly, Roby; Mavroeidis, George P.; Calais, Eric

    2017-10-01

    The devastating 2010 Mw 7.0 Haiti earthquake demonstrated the need to improve mitigation and preparedness for future seismic events in the region. Previous studies have shown that the earthquake did not occur on the Enriquillo Fault, the main plate boundary fault running through the heavily populated Port-au-Prince region, but on the nearby and previously unknown transpressional Léogâne Fault. Slip on that fault has increased stresses on the segment of the Enriquillo Fault to the east of Léogâne, which terminates in the ˜3-million-inhabitant capital city of Port-au-Prince. In this study, we investigate ground shaking in the vicinity of Port-au-Prince if a hypothetical rupture similar to that of the 2010 Haiti earthquake occurred on that segment of the Enriquillo Fault. We use a finite element method and assumptions on regional tectonic stress to simulate the low-frequency ground motion components using dynamic rupture propagation for a 52-km-long segment. We consider eight scenarios by varying parameters such as hypocentre location, initial shear stress and fault dip. The high-frequency ground motion components are simulated using the specific barrier model in the context of the stochastic modeling approach. The broad-band ground motion synthetics are subsequently obtained by combining the low-frequency components from the dynamic rupture simulation with the high-frequency components from the stochastic simulation using matched filtering at a crossover frequency of 1 Hz. Results show that rupture on a vertical Enriquillo Fault generates larger horizontal permanent displacements in Léogâne and Port-au-Prince than rupture on a south-dipping Enriquillo Fault. The mean horizontal peak ground acceleration (PGA), computed at several sites of interest throughout Port-au-Prince, has a value of ˜0.45 g, whereas the maximum horizontal PGA in Port-au-Prince is ˜0.60 g. Even though we only consider a limited number of rupture scenarios, our results suggest more intense ground
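
    The broadband combination step, in which low-frequency dynamic-rupture synthetics are merged with high-frequency stochastic synthetics at a 1 Hz crossover, can be illustrated with complementary spectral weights. This is a hedged sketch: the weight shape and the toy signals below are invented for illustration, not taken from the study's matched-filter implementation.

```python
import numpy as np

def combine_broadband(low, high, dt, fc=1.0):
    """Merge a low-frequency and a high-frequency synthetic with
    complementary spectral weights crossing over at fc (Hz)."""
    n = len(low)
    f = np.fft.rfftfreq(n, dt)
    w_lo = 1.0 / (1.0 + (f / fc) ** 8)   # smooth low-pass weight
    w_hi = 1.0 - w_lo                    # complementary high-pass weight
    spec = np.fft.rfft(low) * w_lo + np.fft.rfft(high) * w_hi
    return np.fft.irfft(spec, n)

# toy traces: a 0.2 Hz sine stands in for the dynamic-rupture synthetic,
# a 5 Hz sine for the stochastic synthetic (both at exact FFT bins)
dt, n = 0.01, 4000
t = np.arange(n) * dt
low = np.sin(2 * np.pi * 0.2 * t)
high = np.sin(2 * np.pi * 5.0 * t)
bb = combine_broadband(low, high, dt)
print(bb.shape)
```

    Because the two weights sum to one at every frequency, each synthetic passes through essentially unchanged in its own band, and the combined trace recovers both toy sines together.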

  12. How Crime Spreads Through Imitation in Social Networks: A Simulation Model

    Science.gov (United States)

    Punzo, Valentina

    In this chapter an agent-based model for investigating how crime spreads through social networks is presented. Some theoretical issues related to the sociological explanation of crime are tested through simulation. The agent-based simulation allows us to investigate the relative impact of some mechanisms of social influence on crime, within a set of controlled simulated experiments.

  13. Experimental Evaluation of Simulation Abstractions for Wireless Sensor Network MAC Protocols

    NARCIS (Netherlands)

    Halkes, G.P.; Langendoen, K.G.

    2010-01-01

    The evaluation of MAC protocols for Wireless Sensor Networks (WSNs) is often performed through simulation. These simulations necessarily abstract away from reality in many ways. However, the impact of these abstractions on the results of the simulations has received only limited attention. Moreover,

  14. Broadband simulation of M7.2 earthquake on the North Tehran Fault, considering nonlinear soil effects

    Science.gov (United States)

    Majidinejad, A.; Zafarani, H.; Vahdani, S.

    2018-02-01

    The North Tehran Fault (NTF) is known to be one of the most drastic sources of seismic hazard for the city of Tehran. In this study we provide broadband (0-10 Hz) ground motions for the city resulting from a probable M7.2 earthquake on the NTF. Low-frequency motions (0-2 Hz) are obtained from spectral element dynamic simulation of 17 scenario models. High-frequency (2-10 Hz) motions are calculated with a physics-based method built on S-to-S backscattering theory. Broadband ground motions at the bedrock level show amplifications, both at low and high frequencies, due to the existence of the deep Tehran basin in the vicinity of the NTF. By employing soil profiles obtained from regional studies, the effect of shallow soil layers on broadband ground motions is investigated by both linear and nonlinear analyses. While the linear soil response overestimates ground-motion prediction equations, the nonlinear response predicts plausible results within one standard deviation of empirical relationships. Average PGAs at the northern, central and southern parts of the city are estimated at about 0.93 g, 0.59 g and 0.4 g, respectively. Increased damping caused by nonlinear soil behavior reduces the linear soil responses considerably, in particular at frequencies above 3 Hz. Nonlinear de-amplification reduces linear spectral accelerations by up to 63 per cent at stations above soft thick sediments. By performing more general analyses, which exclude source-to-site effects on stations, a correction function is proposed for typical site classes of Tehran. Parameters for the function, which reduces the linear soil response in order to take nonlinear soil de-amplification into account, are provided for various frequencies in the range of engineering interest. In addition to fully nonlinear analyses, equivalent-linear calculations were also conducted; their comparison revealed the appropriateness of the method for large peaks and low frequencies, but its shortcomings for small to medium peaks and motions with

  15. Limits to high-speed simulations of spiking neural networks using general-purpose computers

    Directory of Open Access Journals (Sweden)

    Friedemann Zenke

    2014-09-01

    Full Text Available To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed towards synaptic plasticity. In particular, spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  16. Computational Approach for Improving Three-Dimensional Sub-Surface Earth Structure for Regional Earthquake Hazard Simulations in the San Francisco Bay Area

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-25

    In our Exascale Computing Project (ECP) we seek to simulate earthquake ground motions at much higher frequency than is currently possible. Previous simulations in the SFBA were limited to 0.5-1 Hz or lower (Aagaard et al. 2008, 2010), while we have recently simulated the response to 5 Hz. In order to improve confidence in simulated ground motions, we must accurately represent the three-dimensional (3D) sub-surface material properties that govern seismic wave propagation over a broad region. We are currently focusing on the San Francisco Bay Area (SFBA) with a Cartesian domain of size 120 x 80 x 35 km, but this area will be expanded to cover a larger domain. Currently, the United States Geological Survey (USGS) has a 3D model of the SFBA for seismic simulations. However, this model suffers from two serious shortcomings relative to our application: 1) it does not fit most of the available low frequency (< 1 Hz) seismic waveforms from moderate (magnitude M 3.5-5.0) earthquakes; and 2) it is represented with much lower resolution than necessary for the high frequency simulations (> 5 Hz) we seek to perform. The current model will serve as a starting model for full waveform tomography based on 3D sensitivity kernels. This report serves as the deliverable for our ECP FY2017 Quarter 4 milestone to FY 2018 “Computational approach to developing model updates”. We summarize the current state of 3D seismic simulations in the SFBA and demonstrate the performance of the USGS 3D model for a few selected paths. We show the available open-source waveform data sets for model updates, based on moderate earthquakes recorded in the region. We present a plan for improving the 3D model utilizing the available data and further development of our SW4 application. We project how the model could be improved and present options for further improvements focused on the shallow geotechnical layers using dense passive recordings of ambient and human-induced noise.

  17. Modeling and simulation of the data communication network at the ASRM Facility

    Science.gov (United States)

    Nirgudkar, R. P.; Moorhead, R. J.; Smith, W. D.

    1994-01-01

    This paper describes the modeling and simulation of the communication network for the NASA Advanced Solid Rocket Motor (ASRM) facility under construction at Yellow Creek near Luka, Mississippi. Manufacturing, testing, and operations at the ASRM site will be performed in different buildings scattered over an 1800 acre site. These buildings are interconnected through a local area network (LAN), which will contain one logical Fiber Distributed Data Interface (FDDI) ring acting as a backbone for the whole complex. The network contains approximately 700 multi-vendor workstations, 22 multi-vendor workcells, and 3 VAX clusters interconnected via Ethernet and FDDI. The different devices produce appreciably different traffic patterns, each pattern will be highly variable, and some patterns will be very bursty. Most traffic is between the VAX clusters and the other devices. Comdisco's Block Oriented Network Simulator (BONeS) has been used for network simulation. The two primary evaluation parameters used to judge the expected network performance are throughput and delay.

  18. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group in collaboration with the University of Buenos Aires in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real time, achieving a rejection factor on the order of 40000x, and has real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a “TCP Incast”-type network pathology which TCP cannot handle efficiently. A credits system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios in the real network. Network Simulation with Hybrid Flows: speedups and accuracy, combined. • For intensive network traffic, discrete event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid Flow simul...

  19. NCC simulation model. Phase 2: Simulating the operations of the Network Control Center and NCC message manual

    Science.gov (United States)

    Benjamin, Norman M.; Gill, Tepper; Charles, Mary

    1994-01-01

    The network control center (NCC) provides scheduling, monitoring, and control of services to the NASA space network. The space network provides tracking and data acquisition services to many low-earth orbiting spacecraft. This report describes the second phase in the development of simulation models for the NCC. Phase one concentrated on the computer systems and interconnecting network. Phase two focuses on the implementation of the network message dialogs and the resources controlled by the NCC. Performance measures were developed along with selected indicators of the NCC's operational effectiveness. The NCC performance indicators were defined in terms of the following: (1) transfer rate, (2) network delay, (3) channel establishment time, (4) line turn-around time, (5) availability, (6) reliability, (7) accuracy, (8) maintainability, and (9) security. An NCC internal and external message manual is appended to this report.

  20. Impacts of Social Network on Therapeutic Community Participation: A Follow-up Survey of Data Gathered after Ya’an Earthquake

    Science.gov (United States)

    LI, Zhichao; CHEN, Yao; SUO, Liming

    2015-01-01

    Abstract Background In recent years, natural disasters and the accompanying health risks have become more frequent, and rehabilitation work has become an important part of government performance. On one hand, social networks play an important role in participants’ therapeutic community participation and physical and mental recovery. On the other hand, therapeutic communities with widespread participation can also contribute to community recovery after disaster. Methods This paper describes a field study in an earthquake-stricken area of Ya’an. A set of 3-stage follow-up data was obtained concerning the villagers’ participation in therapeutic communities, social network status, demographic background, and other factors. The hierarchical linear model (HLM) method was used to investigate the effects of social networks on therapeutic community participation. Results First, social networks have significant impacts on the annual changes in therapeutic community participation. Second, there were obvious differences in education between groups mobilized by self-organizations and by local government; however, both exerted their mobilization force through acquaintance networks. Third, villagers’ local cadre networks negatively influenced the activities of self-organized therapeutic communities, while positively influencing government-organized therapeutic activities. Conclusion This paper suggests that relevant government departments need to focus more on the reconstruction and cultivation of villagers’ social networks and social capital in the process of post-disaster recovery. These findings contribute to better understandings of how social networks influence therapeutic community participation, and what role local government can play in post-disaster recovery and public health improvement after natural disasters. PMID:26060778

  1. Source Process of the Mw 5.0 Au Sable Forks, New York, Earthquake Sequence from Local Aftershock Monitoring Network Data

    Science.gov (United States)

    Kim, W.; Seeber, L.; Armbruster, J. G.

    2002-12-01

    On April 20, 2002, a Mw 5 earthquake occurred near the town of Au Sable Forks, northeastern Adirondacks, New York. The quake caused moderate damage (MMI VII) around the epicentral area and was well recorded by over 50 broadband stations at distances of 70 to 2000 km in eastern North America. Regional broadband waveform data are used to determine the source mechanism and focal depth using a moment tensor inversion technique. The source mechanism indicates predominantly thrust faulting along a 45° dipping fault plane striking due south. The mainshock was followed by at least three strong aftershocks with local magnitude (ML) greater than 3, and about 70 aftershocks were detected and located in the first three months by a 12-station portable seismographic network. The aftershock distribution clearly delineates the mainshock rupture on the westerly dipping fault plane at a depth of 11 to 12 km. Preliminary analysis of the aftershock waveform data indicates that the orientation of the P-axis rotated 90° from that of the mainshock, suggesting a complex source process for the earthquake sequence. We achieved an important milestone in monitoring earthquakes and evaluating their hazards through rapid cross-border (Canada-US) and cross-regional (Central US-Northeastern US) collaborative efforts. Staff at Instrument Software Technology, Inc. near the epicentral area joined Lamont-Doherty staff and deployed the first portable station in the epicentral area; CERI dispatched two of their technical staff to the epicentral area with four accelerometers and a broadband seismograph; the IRIS/PASSCAL facility shipped three digital seismographs and ancillary equipment within one day of the request; and the POLARIS Consortium, Canada sent a field crew of three with a near real-time, satellite telemetry based earthquake monitoring system. The POLARIS station, KSVO, powered by a solar panel and batteries, was already transmitting data to the central hub in London, Ontario, Canada within

  2. Optimization of neural networks for time-domain simulation of mooring lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Winther, Ole

    2016-01-01

    When using artificial neural networks in methods for dynamic analysis of slender structures, the computational effort associated with time-domain response simulation may be reduced drastically compared to classic solution strategies. This article demonstrates that the network structure...... of an artificial neural network, which has been trained to simulate forces in a mooring line of a floating offshore platform, can be optimized and reduced by different optimization procedures. The procedures both detect and prune the least salient network weights successively, and besides trimming the network......, they also can be used to rank the importance of the various network inputs. The dynamic response of slender marine structures often depends on several external load components, and by applying the optimization procedures to a trained artificial neural network, it is possible to classify the external force...
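
    The pruning idea described above, detecting the least salient network weights, removing them successively, and using the result to rank the importance of the inputs, can be caricatured with a simple magnitude-based criterion. This is only a sketch under an assumed saliency proxy (absolute weight size); the article's optimization procedures are more refined.

```python
import numpy as np

rng = np.random.default_rng(1)

def prune_least_salient(weights, fraction):
    """Zero the given fraction of weights with the smallest magnitude
    (a crude saliency proxy; the article's procedures are more refined)."""
    w = weights.copy()
    k = int(fraction * w.size)
    flat_idx = np.argsort(np.abs(w), axis=None)[:k]
    w[np.unravel_index(flat_idx, w.shape)] = 0.0
    return w

W = rng.normal(size=(8, 4))               # toy hidden-by-input weight matrix
Wp = prune_least_salient(W, 0.5)

# rank inputs by how much weight mass they retain after pruning
importance = np.abs(Wp).sum(axis=0)
print((Wp == 0).sum(), importance.shape)
```

    An input whose column retains large weights after pruning ranks as important; a column driven toward zero marks a load component the trained network can largely do without.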

  3. CoSimulating Communication Networks and Electrical System for Performance Evaluation in Smart Grid

    Directory of Open Access Journals (Sweden)

    Hwantae Kim

    2018-01-01

    Full Text Available In the smart grid research domain, simulation study is the first choice, since the analytic complexity is too high and constructing a testbed is very expensive. However, since the communication infrastructure and the power grid are tightly coupled with each other in the smart grid, a well-defined combination of simulation tools for the two systems is required for the simulation study. Therefore, in this paper, we propose a cosimulation framework called OOCoSim, which consists of OPNET (a network simulation tool) and OpenDSS (a power system simulation tool). By employing these tools, an organic and dynamic cosimulation can be realized, since both simulators operate on the same computing platform and provide external interfaces through which the simulation can be managed dynamically. In this paper, we provide the OOCoSim design principles, including a synchronization scheme, and detailed descriptions of its implementation. To demonstrate the effectiveness of OOCoSim, we define a smart grid application model and conduct a simulation study to see the impact of the defined application and the underlying network system on the distribution system. The simulation results show that the proposed OOCoSim can successfully simulate the integrated scenario of the power and network systems and accurately capture the effects of networked control in the smart grid.
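
    The synchronization scheme at the heart of such a cosimulation can be reduced to a fixed-interval loop in which both simulators advance one step and then exchange state through external interfaces. The toy classes below merely stand in for OPNET and OpenDSS; the class names, delay model, and controller rule are invented for illustration.

```python
class PowerSim:
    """Toy stand-in for the power-system side (e.g. OpenDSS)."""
    def __init__(self):
        self.voltage = 1.0
    def step(self, setpoint):
        # relax the bus voltage halfway toward the control setpoint
        self.voltage += (setpoint - self.voltage) * 0.5
        return self.voltage

class NetSim:
    """Toy stand-in for the network side (e.g. OPNET): delivers control
    messages after a fixed delay measured in sync intervals."""
    def __init__(self, delay):
        self.queue = []
        self.delay = delay
    def send(self, msg, now):
        self.queue.append((now + self.delay, msg))
    def deliver(self, now):
        ready = [m for t, m in self.queue if t <= now]
        self.queue = [(t, m) for t, m in self.queue if t > now]
        return ready

# fixed-interval synchronization: each tick, both simulators advance,
# then state and delayed control messages are exchanged
power, net = PowerSim(), NetSim(delay=2)
setpoint = 1.0
for k in range(20):
    v = power.step(setpoint)
    if abs(v - 0.95) > 0.01:          # controller issues a new setpoint
        net.send(0.95, k)
    for msg in net.deliver(k):
        setpoint = msg                # delayed networked control action
print(round(power.voltage, 2))
```

    The two-interval message delay is what makes the experiment interesting: the power side keeps drifting until the networked control action finally arrives, which is exactly the kind of interaction a cosimulator is built to expose.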

  4. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues.

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-06

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks' statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network changes steadily, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.
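
    The two attack modes can be reproduced on a synthetic scale-free network: remove 10% of the nodes either at random or hubs-first, and compare the surviving giant component. The graph generator and sizes below are illustrative assumptions, not the paper's actual emergency logistics network.

```python
import random
from collections import defaultdict

random.seed(42)

def preferential_attachment(n, m=2):
    """Grow a scale-free-style graph: each new node links to m existing
    nodes drawn proportionally to their current degree."""
    edges = {(0, 1)}
    repeated = [0, 1]                 # each node repeated once per edge end
    for v in range(2, n):
        chosen = set()
        while len(chosen) < min(m, v):
            chosen.add(random.choice(repeated))
        for u in chosen:
            edges.add((min(u, v), max(u, v)))
            repeated += [u, v]
    return edges

def giant_component(nodes, edges):
    """Size of the largest connected component restricted to `nodes`."""
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            x = stack.pop()
            size += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        best = max(best, size)
    return best

n = 500
edges = preferential_attachment(n)
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

removed = n // 10
survivors_random = set(range(n)) - set(random.sample(range(n), removed))
survivors_attack = set(sorted(range(n), key=lambda x: -deg[x])[removed:])

g_rand = giant_component(survivors_random, edges)
g_sel = giant_component(survivors_attack, edges)
print(g_rand, g_sel)
```

    On scale-free topologies the hubs-first removal typically leaves a markedly smaller giant component than random removal, mirroring the fragility under selective attacks reported above.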

  5. A novel controller based on robust backstepping and neural network for flight motion simulator

    Science.gov (United States)

    Liu, Zhenghua; Wu, Yunjie; Wang, Weihong

    2008-10-01

    The flight motion simulator is a kind of servo system with uncertainties and disturbances. To obtain high performance and good robustness for the flight simulator, we present a robust compound controller based on a Backstepping controller and a BP neural network. Firstly, the design procedure of the robust Backstepping controller is described and correlative problems are proposed. Secondly, the principle and the design process of the BP neural network are analyzed and expatiated respectively. Finally, simulation results on the flight simulator show that the BP neural network can compensate for external disturbances, including system input and output disturbances, so that system performance can be improved. Therefore both robustness and high performance of the flight simulator are achieved. It is an applicable technology for the control of servo systems such as the flight motion simulator.

  6. Simulating large-scale spiking neuronal networks with NEST

    OpenAIRE

    Senk, Johanna; Diesmann, Markus

    2014-01-01

    The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...

  7. Global permanent deformations triggered by the Sumatra earthquake

    OpenAIRE

    Boschi, E.; Casarotti, E.; Devoti, R.; Melini, D.; Piersanti, A.; Pietrantonio, G.; Riguzzi, F.

    2005-01-01

    The giant Sumatra-Andaman earthquake of December 26, 2004 caused permanent deformation effects in a region of previously never observed extent. GPS data from the worldwide network of permanent IGS sites show significant coseismic displacements in an area exceeding 10^7 km^2. The effects of the permanent residual deformation field could be detected as far as the Australian, Philippine and Japanese archipelagos and, to the west, as far as the Indian continent. The synthetic simulations ...

  8. Seismic velocity model of the central United States (Version 1): Description and simulation of the 18 April 2008 Mt. Carmel, Illinois, Earthquake

    Science.gov (United States)

    Ramírez‐Guzmán, Leonardo; Boyd, Oliver S.; Hartzell, Stephen; Williams, Robert A.

    2012-01-01

    We have developed a new three‐dimensional seismic velocity model of the central United States (CUSVM) that includes the New Madrid Seismic Zone (NMSZ) and covers parts of Arkansas, Mississippi, Alabama, Illinois, Missouri, Kentucky, and Tennessee. The model represents a compilation of decades of crustal research consisting of seismic, aeromagnetic, and gravity profiles; geologic mapping; geophysical and geological borehole logs; and inversions of the regional seismic properties. The density, P‐ and S‐wave velocities are synthesized in a stand‐alone spatial database that can be queried to generate the required input for numerical seismic‐wave propagation simulations. We test and calibrate the CUSVM by simulating ground motions of the 18 April 2008 Mw 5.4 Mt. Carmel, Illinois, earthquake and comparing the results with observed records within the model area. The selected stations in the comparisons reflect different geological site conditions and cover distances ranging from 10 to 430 km from the epicenter. The results, based on a qualitative and quantitative goodness‐of‐fit (GOF) characterization, indicate that both within and outside the Mississippi Embayment the CUSVM reasonably reproduces: (1) the body and surface‐wave arrival times and (2) the observed regional variations in ground‐motion amplitude, cumulative energy, duration, and frequency content up to a frequency of 1.0 Hz. In addition, we discuss the probable structural causes for the ground‐motion patterns in the central United States that we observed in the recorded motions of the 18 April Mt. Carmel earthquake.

  9. Enterprise Networks for Competences Exchange: A Simulation Model

    Science.gov (United States)

    Remondino, Marco; Pironti, Marco; Pisano, Paola

    A business process is a set of logically related tasks performed to achieve a defined business outcome, and process innovation is concerned with improving organizational processes. Process innovation can happen at various levels: incremental improvements, redesign of existing processes, or entirely new processes. The knowledge behind process innovation can be shared, acquired, changed and increased by the enterprises inside a network. An enterprise can decide to exploit innovative processes it owns, thus potentially gaining competitive advantage, but risking, in turn, that other players could reach the same technological levels. Or it could decide to share them, in exchange for other competencies or money. These activities could be the basis for a network formation and/or impact the topology of an existing network. In this work an agent-based model (E3) is introduced, aiming to explore how a process innovation can facilitate network formation, affect its topology, induce new players to enter the market, and spread through the network by being shared or developed by new players.

  10. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-01

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks’ statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network changes steadily, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies. PMID:29316614

  11. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2018-01-01

    Full Text Available This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks’ statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network changes steadily, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.

  12. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    Science.gov (United States)

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
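
    The flavour of the rejection-free idea can be conveyed with a simplified SIS sketch. This is a loose illustration, not the authors' pseudocode or C++ implementation: it allows several events per snapshot and ignores sub-step timing, and all parameter values are assumptions:

```python
import math
import random

def sis_rates(edges, infected, beta, mu):
    """Current event list: transmissions over S-I contacts, recoveries of I nodes."""
    si = [(u, v) for u, v in edges if (u in infected) != (v in infected)]
    inf = sorted(infected)
    return si, inf, [beta] * len(si) + [mu] * len(inf)

def temporal_gillespie_sis(snapshots, dt, beta, mu, seeds, seed=0):
    """SIS dynamics on a sequence of contact snapshots. Instead of testing every
    contact for transmission (rejection sampling), draw one unit-rate exponential
    'budget' and spend it against the accumulated total event rate."""
    rng = random.Random(seed)
    infected = set(seeds)
    tau = -math.log(rng.random())             # exponential budget, rate 1
    for edges in snapshots:
        si, inf, rates = sis_rates(edges, infected, beta, mu)
        total = sum(rates)
        while total > 0 and total * dt >= tau:
            r = rng.random() * total          # pick one event, rate-proportional
            acc, k = 0.0, 0
            for k, w in enumerate(rates):
                acc += w
                if acc >= r:
                    break
            if k < len(si):                   # transmission along an S-I edge
                u, v = si[k]
                infected.add(u if v in infected else v)
            else:                             # recovery of an infected node
                infected.discard(inf[k - len(si)])
            si, inf, rates = sis_rates(edges, infected, beta, mu)
            total = sum(rates)
            tau = -math.log(rng.random())     # fresh budget for the next event
        tau -= total * dt                     # spend the budget over this snapshot
    return infected
```

    Because only the total rate is accumulated per snapshot, the cost per timestep is proportional to the current number of possible events, not to the number of contacts tested and rejected.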

  13. Analysis simulation of tectonic earthquake impact on the lifetime of radioactive waste containers and equivalent dose rate prediction in the Yucca Mountain geologic repository, Nevada Test Site, USA

    International Nuclear Information System (INIS)

    Ko, I.S.; Imardjoko, Y.U.; Karnawati, Dwikorita

    2003-01-01

    The US policy of not recycling its spent nuclear fuel brings the consequence of having to provide a nuclear waste repository; the Yucca Mountain site in Nevada, USA, is considered the proper one. High-level radioactive waste is to be placed into containers and then buried in three-hundred-meter-deep underground tunnels. Tectonic earthquakes are the main factor causing container damage. Goldsim version 6.04.007 simulates the mechanism of container damage due to a great devastating impact load, the collapse of the tunnels. The radionuclide inventories included are U-234, C-14, Tc-99, I-129, Se-79, Pa-231, Np-237, Pu-242, and Pu-239. The simulation was carried out over a 100,000-year time span. The research goals are: 1) estimating the tunnels' stand-up time, and 2) predicting the equivalent dose rate contributed by the included radionuclides to humans through intake of radioactively polluted drinking water. (author)

  14. Power Electronic Building Block Network Simulation Testbed Stability Criteria and Hardware Validation Studies

    National Research Council Canada - National Science Library

    Badorf, Michael

    1997-01-01

    ... the survivability of the platform. The Power Electronic Building Block (PEBB) Network Simulation Testbed currently under construction at the Naval Postgraduate School is a study into the feasibility of such DC systems...

  15. Accelerated Gillespie Algorithm for Gas–Grain Reaction Network Simulations Using Quasi-steady-state Assumption

    Science.gov (United States)

    Chang, Qiang; Lu, Yang; Quan, Donghui

    2017-12-01

    Although the Gillespie algorithm is accurate in simulating gas–grain reaction networks, so far its computational cost has been so high that it cannot be used to simulate chemical reaction networks that include molecular hydrogen accretion or the chemical evolution of protoplanetary disks. We present an accelerated Gillespie algorithm that is based on a quasi-steady-state assumption with the further approximation that the population distribution of transient species depends only on the accretion and desorption processes. The new algorithm is tested against a few reaction networks that are simulated by the regular Gillespie algorithm. We found that the less likely it is that transient species are formed and destroyed on grain surfaces, the more accurate the new method is. We also apply the new method to simulate reaction networks that include molecular hydrogen accretion. The results show that surface chemical reactions involving molecular hydrogen are not important for the production of surface species under standard physical conditions of dense molecular clouds.

  16. ABCDecision: A Simulation Platform for Access Selection Algorithms in Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Guy Pujolle

    2010-01-01

    Full Text Available We present a simulation platform for access selection algorithms in heterogeneous wireless networks, called “ABCDecision”. The simulator implements the different parts of an Always Best Connected (ABC) system, including the Access Technology Selector (ATS), Radio Access Networks (RANs), and users. After describing the architecture of the simulator, we give an overview of the existing decision algorithms for access selection. Then we propose a new selection algorithm for heterogeneous networks and run a set of simulations to evaluate the performance of the proposed algorithm in comparison with the existing ones. The performance results, in terms of the occupancy rate, show that our algorithm achieves a load-balancing distribution between networks by taking into consideration the capacities of the available cells.

  17. Method for Building a Medical Training Simulator with Bayesian Networks: SimDeCS.

    Science.gov (United States)

    Flores, Cecilia Dias; Fonseca, João Marcelo; Bez, Marta Rosecler; Respício, Ana; Coelho, Helder

    2014-01-01

    Distance education has grown in importance with the advent of the internet. An adequate evaluation of students in this mode is still difficult. Distance tests or occasional on-site exams do not meet the needs of evaluation of the learning process for distance education. Bayesian networks are adequate for simulating several aspects of clinical reasoning. The possibility of integrating them into distance education student evaluation has not yet been explored much. The present work describes a simulator based on probabilistic networks built to represent knowledge of clinical practice guidelines in Family and Community Medicine. The Bayesian network, the basis of the simulator, was modeled to be playable by the student, to give immediate feedback according to pedagogical strategies adapted to the student according to past performance, and to give a broad evaluation of performance at the end of the game. Simulators structured by Bayesian networks may become alternatives in the evaluation of students of medical distance education.

  18. Social network disruption as a major factor associated with psychological distress 3 years after the 2004 Niigata-Chuetsu earthquake in Japan.

    Science.gov (United States)

    Oyama, Mari; Nakamura, Kazutoshi; Suda, Yuko; Someya, Toshiyuki

    2012-03-01

    The 2004 Niigata-Chuetsu earthquake of Japan caused a great deal of damage, and people living in the affected region are still struggling to reconstruct their lives. The aim of this study was to determine factors associated with psychological distress in people living in a town at the epicenter 3 years after the earthquake. We conducted a cross-sectional study from June 2007 to January 2008. Participants included 225 individuals living in Kawaguchi (age ≥20 years) who reported psychological symptoms. Information on family structure, employment status, alcohol use, social network, and extent of house damage was elicited by public health nurses conducting structured interviews. Levels of psychological distress were assessed with the Kessler Psychological Distress Scale (K10), with a K10 score ≥25 defined as psychological distress. The mean age of participants was 66.1 ± 12.9 years. The prevalence of psychological distress varied among different employment classes, being 5/73 (6.8%) for participants with paid employment, 12/50 (24.0%) for full-time housewives, and 11/101 (10.9%) for those who were unemployed (χ² = 8.42, P = 0.015). It also varied between participants who had lost contact with people in the community and those who had no change in social contact [9/20 (45.0%) vs. 19/189 (10.1%), respectively; χ² = 19.04, P < 0.001]. Participants whose social networks were disrupted thus appear particularly vulnerable to post-earthquake psychological distress and require appropriate care.

  19. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Directory of Open Access Journals (Sweden)

    Wen Zhou

    Full Text Available As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have become important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  20. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Science.gov (United States)

    Zhou, Wen; Koptyug, Nikita; Ye, Shutao; Jia, Yifan; Lu, Xiaolong

    2016-01-01

    As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have become important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  1. Developing an Agent-Based Simulation System for Post-Earthquake Operations in Uncertainty Conditions: A Proposed Method for Collaboration among Agents

    Directory of Open Access Journals (Sweden)

    Navid Hooshangi

    2018-01-01

    Full Text Available Agent-based modeling is a promising approach for developing simulation tools for natural hazards in different areas, such as during urban search and rescue (USAR) operations. The present study aimed to develop a dynamic agent-based simulation model of post-earthquake USAR operations using a geospatial information system and multi-agent systems (GIS and MASs, respectively). We also propose an approach for dynamic task allocation and establishing collaboration among agents based on the contract net protocol (CNP) and interval-based Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) methods, which consider uncertainty in natural hazards information during agents’ decision-making. The decision-making weights were calculated by the analytic hierarchy process (AHP). In order to implement the system, an earthquake environment was simulated and the damage to buildings and the number of injuries were calculated for Tehran’s District 3: 23%, 37%, 24% and 16% of buildings were in the slight, moderate, extensive and complete vulnerability classes, respectively. The number of injured persons was calculated to be 17,238. Numerical results in 27 scenarios showed that the proposed method is more accurate than the CNP method in terms of USAR operational time (at least a 13% decrease) and the number of human fatalities (at least a 9% decrease). In an interval uncertainty analysis of our proposed simulated system, the lower and upper bounds of uncertain responses were evaluated. The overall results showed that considering uncertainty in task allocation can be highly advantageous in a disaster environment. Such systems can be used to manage and prepare for natural hazards.

  2. The design and calibration of a simulation model of a star computer network

    CERN Document Server

    Gomaa, H

    1982-01-01

    A simulation model of the CERN(European Organization for Nuclear Research) SPS star computer network is described. The model concentrates on simulating the message handling computer, through which all messages in the network pass. The paper describes the main features of the model, the transfer time parameters in the model and how performance measurements were used to assist in the calibration of the model.

  3. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature; however, by far the most common measure for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain-flow counts of stress cycles over a number of time series simulations...

  4. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property
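
    For reference, the original Watts-Strogatz construction that this record extends can be sketched as follows. This is a standard textbook version with the two small-world diagnostics, not the degree-distribution-augmented model of the paper:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice: each node linked to its k nearest neighbours on each side,
    then every lattice edge is rewired to a random new endpoint with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k + 1):
            if rng.random() < p:
                old = (i + j) % n
                if old in adj[i]:
                    new = rng.randrange(n)
                    while new == i or new in adj[i]:
                        new = rng.randrange(n)
                    adj[i].discard(old)
                    adj[old].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def clustering_coefficient(adj):
    """Average local clustering: fraction of a node's neighbour pairs that are linked."""
    total = 0.0
    for v, nb in adj.items():
        d = len(nb)
        if d < 2:
            continue
        links = sum(1 for u in nb for w in nb if u < w and w in adj[u])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from every node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs
```

    With n = 100 and k = 3, a small rewiring probability (p ≈ 0.1) sharply shortens the average path while clustering stays well above the random-graph level, which are the two quantities the record compares against real networks.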

  5. Evaluating networked systems: finding credible results using analysis, simulation, and testbed methods

    OpenAIRE

    Fund, Fraida

    2016-01-01

    How do we understand the performance of a networked system? These lecture notes explore the relative advantages and disadvantages of analytical, simulation, and experimental approaches in computer networks, how they complement each other, and how to generate credible results using each approach.

  6. Discrete-event simulation of a wide-area health care network.

    Science.gov (United States)

    McDaniel, J G

    1995-01-01

    Predict the behavior and estimate the telecommunication cost of a wide-area message store-and-forward network for health care providers that uses the telephone system. A tool with which to perform large-scale discrete-event simulations was developed. Network models for star and mesh topologies were constructed to analyze the differences in performances and telecommunication costs. The distribution of nodes in the network models approximates the distribution of physicians, hospitals, medical labs, and insurers in the Province of Saskatchewan, Canada. Modeling parameters were based on measurements taken from a prototype telephone network and a survey conducted at two medical clinics. Simulation studies were conducted for both topologies. For either topology, the telecommunication cost of a network in Saskatchewan is projected to be less than $100 (Canadian) per month per node. The estimated telecommunication cost of the star topology is approximately half that of the mesh. Simulations predict that a mean end-to-end message delivery time of two hours or less is achievable at this cost. A doubling of the data volume results in an increase of less than 50% in the mean end-to-end message transfer time. The simulation models provided an estimate of network performance and telecommunication cost in a specific Canadian province. At the expected operating point, network performance appeared to be relatively insensitive to increases in data volume. Similar results might be anticipated in other rural states and provinces in North America where a telephone-based network is desired.
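
    The star-topology experiment can be miniaturised with a standard event-list simulation. The message pattern, the single shared hub line, and the parameter values below are illustrative assumptions, not the Saskatchewan model:

```python
import heapq
import random

def simulate_star(n_msgs, mean_interarrival, hop_time, seed=0):
    """Discrete-event sketch of a star store-and-forward network: a message
    takes one hop from sender to hub, queues for the hub's shared outgoing
    line, then takes one hop to its destination. Returns mean end-to-end delay."""
    rng = random.Random(seed)
    events, seq = [], 0
    t, hub_free = 0.0, 0.0
    delays = []
    for _ in range(n_msgs):                    # Poisson message arrivals
        t += rng.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, seq, "at_hub", t))
        seq += 1
    while events:
        time, _, kind, created = heapq.heappop(events)
        if kind == "at_hub":
            arrive = time + hop_time           # first hop: sender -> hub
            start = max(arrive, hub_free)      # wait if the hub line is busy
            hub_free = start + hop_time        # second hop occupies the line
            heapq.heappush(events, (hub_free, seq, "delivered", created))
            seq += 1
        else:
            delays.append(time - created)
    return sum(delays) / len(delays)
```

    At light load the mean delay approaches two hop times; raising the arrival rate makes hub queueing dominate, which is the kind of sensitivity the study probes by doubling the data volume.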

  7. Less Developed Countries Energy System Network Simulator, LDC-ESNS: a brief description

    Energy Technology Data Exchange (ETDEWEB)

    Reisman, A; Malone, R

    1978-04-01

    Prepared for the Brookhaven National Laboratory Developing Countries Energy Program, this report describes the Less Developed Countries Energy System Network Simulator (LDC-ESNS), a tool which provides a quantitative representation of the energy system of an LDC. The network structure of the energy supply and demand system, the model inputs and outputs, and the possible uses of the model for analysis are described.

  8. Transport link scanner: simulating geographic transport network expansion through individual investments

    NARCIS (Netherlands)

    Koopmans, C.C.; Jacobs, C.G.W.

    2016-01-01

    This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness

  9. Lines of Sight in the "Network Society": Simulation, Art Education, and a Digital Visual Culture

    Science.gov (United States)

    Sweeny, Robert W.

    2004-01-01

    Contemporary societies are in the process of developing digital technological networks that simultaneously result in their transformation. The operations of networked computer systems, based in forms of simulation, have shifted general notions of visuality within a visual culture. Practices in art education that address these contemporary…

  10. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    International Nuclear Information System (INIS)

    Ding, Yi; Wang, Peng; Goel, Lalit; Billinton, Roy; Karki, Rajesh

    2007-01-01

    This paper presents a technique to evaluate reliability of a restructured power system with a bilateral market. The proposed technique is based on the combination of the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques have been implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation has been used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed during generation inadequacy and network congestion to minimize the load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)
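
    The non-sequential sampling idea underlying such Monte Carlo reliability studies can be reduced to a few lines. This is a generic capacity-outage sketch with made-up unit sizes, not the paper's RTS model or its network-equivalent and pseudo-sequential machinery:

```python
import random

def loss_of_load_probability(fail_probs, capacities, load, n_samples=20000, seed=0):
    """Non-sequential Monte Carlo: sample each unit up/down independently from
    its forced-outage rate and count states whose surviving capacity < load."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(n_samples):
        available = sum(c for c, p in zip(capacities, fail_probs)
                        if rng.random() >= p)
        if available < load:
            shortfalls += 1
    return shortfalls / n_samples

# two 50 MW units, each unavailable 10% of the time, serving a 60 MW load;
# the exact answer is 1 - 0.9 * 0.9 = 0.19
print(loss_of_load_probability([0.1, 0.1], [50, 50], 60))
```

    Each sample here is an independent system snapshot with no chronology; the pseudo-sequential approach of the paper adds back just enough chronological structure around sampled failure states to model market trading and operation.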

  11. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Jan eHahne

    2015-09-01

    Full Text Available Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...

  12. Earthquakes Sources Parameter Estimation of 20080917 and 20081114 Near Semangko Fault, Sumatra Using Three Components of Local Waveform Recorded by IA Network Station

    Directory of Open Access Journals (Sweden)

    Madlazim

    2012-04-01

    Full Text Available The 17/09/2008 22:04:80 UTC and 14/11/2008 00:27:31.70 earthquakes near Semangko fault were analyzed to identify the fault planes. The two events were relocated to assess physical insight against the hypocenter uncertainty. The datas used to determine source parameters of both earthquakes were three components of local waveform recorded by Geofon broadband IA network stations, (MDSI, LWLI, BLSI and RBSI for the event of 17/09/2008 and (MDSI, LWLI, BLSI and KSI for the event of 14/11/2008. Distance from the epicenter to all station was less than 5°. Moment tensor solution of two events was simultaneously analyzed by determination of the centroid position. Simultaneous analysis covered hypocenter position, centroid position and nodal planes of two events indicated Semangko fault planes. Considering that the Semangko fault zone is a high seismicity area, the identification of the seismic fault is important for the seismic hazard investigation in the region.

  13. Use of simulated neural networks for aerial image classification

    Science.gov (United States)

    Medina, Frances I.; Vasquez, Ramon

    1991-01-01

    The utility of a one-layer neural network for aerial image classification is examined. The network was trained with the delta rule. This method was shown to be useful as a classifier for aerial images with good resolution. It is fast and easy to implement; because it is distribution-free, nothing about the statistical distribution of the data is needed; and it is very efficient as a boundary detector.
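
    The delta-rule training loop mentioned in this record is compact enough to sketch in full. The toy 2-D data stands in for pixel features; the record does not give the actual features or labels:

```python
import random

def train_delta_rule(samples, targets, lr=0.1, epochs=200, seed=0):
    """Single linear unit trained with the delta (Widrow-Hoff) rule:
    w <- w + lr * (target - output) * input, with bias as an extra input."""
    rng = random.Random(seed)
    n_features = len(samples[0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n_features + 1)]
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            xb = list(x) + [1.0]                        # append bias input
            y = sum(wi * xi for wi, xi in zip(w, xb))   # linear output
            err = t - y                                 # delta rule error term
            w = [wi + lr * err * xi for wi, xi in zip(w, xb)]
    return w

def classify(w, x):
    """Threshold the linear output at 0.5 to get a binary class label."""
    return 1 if sum(wi * xi for wi, xi in zip(w, list(x) + [1.0])) >= 0.5 else 0
```

    Nothing about the class distributions is assumed; the rule simply drives squared error down, which is the "distribution-free" property the record highlights.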

  14. 3D Dynamic Rupture Process and Near-Source Ground Motion Simulation Using the Discrete Element Method: Application to the 1999 Chi-chi and 2000 Tottori Earthquakes

    Science.gov (United States)

    Dalguer Gudiel, L. A.; Irikura, K.

    2001-12-01

    We developed a 3D model to simulate the dynamic rupture of a pre-existing fault and the near-source ground motion of actual earthquakes, solving the elastodynamic equation of motion with the 3D Discrete Element Method (DEM). The DEM is widely employed in engineering to designate lumped-mass models in a truss arrangement, as opposed to FEM (Finite Element) models, which may also consist of lumped masses but normally require assembling a full stiffness matrix for response determination. The term has also been used for models of solids consisting of assemblies of discrete elements, such as spheres in elastic contact, employed in the analysis of perforation or penetration of concrete or rock. It should be noted that the designation Lattice Models, common in physics, may be more adequate, although it omits reference to a fundamental property of the approach, which is the lumped-mass representation. In the present DEM formulation, the method models any orthotropic elastic solid. It is constructed as a three-dimensional periodic truss-like structure using cubic elements, with masses lumped at nodal points that are interconnected by one-dimensional elements. The method was previously used in 2D to simulate, in a simplified way, the 1999 Chi-chi (Taiwan) earthquake (Dalguer et al., 2000). It has now been extended to solve 3D problems, and we apply it to simulate the dynamic rupture process and near-source ground motion of the 1999 Chi-chi (Taiwan) and 2000 Tottori (Japan) earthquakes. An attractive feature of the problem under consideration is the possibility of introducing internal cracks or fractures with little computational effort and without increasing the number of degrees of freedom. For the 3D dynamic spontaneous rupture simulation of these earthquakes we need to know: the geometry of the fault, the initial stress distribution along the fault, the stress drop distribution, the strength of the fault to break and the critical slip (because slip

  15. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released, with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next-generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate more into network hubs; backward migration of the same number of individuals was random, not biased towards body height. Network hubs were defined by the importance of a district within the spatial network, evaluated by various centrality measures. In the null model there were no road connections, so height information could not be delivered between the districts. Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later

  16. Increasing Learner Retention in a Simulated Learning Network using Indirect Social Interaction

    NARCIS (Netherlands)

    Koper, Rob

    2004-01-01

    Please refer to original publication: Koper, E.J.R. (2005). Increasing Learner Retention in a Simulated Learning Network Using Indirect Social Interaction. Journal of Artificial Societies and Social Simulation vol. 8, no. 2. http://jasss.soc.surrey.ac.uk/8/2/5.html Software is only stored to ensure

  17. Optimizing targeted vaccination across cyber-physical networks: an empirically based mathematical simulation study

    DEFF Research Database (Denmark)

    Mones, Enys; Stopczynski, Arkadiusz; Pentland, Alex 'Sandy'

    2018-01-01

    ...If interruption of disease transmission is the goal, targeting requires knowledge of underlying person-to-person contact networks. Digital communication networks may reflect not only virtual but also physical interactions that could result in disease transmission, but the precise overlap between these cyber and physical networks has never been empirically explored in real-life settings. Here, we study the digital communication activity of more than 500 individuals along with their person-to-person contacts at a 5-min temporal resolution. We then simulate different disease transmission scenarios on the person-to-person physical contact network to determine whether cyber communication networks can be harnessed to advance the goal of targeted vaccination for a disease spreading on the network of physical proximity. We show that individuals selected on the basis of their closeness centrality within cyber networks (what we...

  18. Development of a pore network simulation model to study nonaqueous phase liquid dissolution

    Science.gov (United States)

    Dillard, Leslie A.; Blunt, Martin J.

    2000-01-01

    A pore network simulation model was developed to investigate the fundamental physics of nonequilibrium nonaqueous phase liquid (NAPL) dissolution. The network model is a lattice of cubic chambers and rectangular tubes that represent pore bodies and pore throats, respectively. Experimental data obtained by Powers [1992] were used to develop and validate the model. To ensure the network model was representative of a real porous medium, the pore size distribution of the network was calibrated by matching simulated and experimental drainage and imbibition capillary pressure-saturation curves. The predicted network residual styrene blob-size distribution was nearly identical to the observed distribution. The network model reproduced the observed hydraulic conductivity and produced relative permeability curves that were representative of a poorly consolidated sand. Aqueous-phase transport was represented by applying the equation for solute flux to the network tubes and solving for solute concentrations in the network chambers. Complete mixing was found to be an appropriate approximation for calculation of chamber concentrations. Mass transfer from NAPL blobs was represented using a corner diffusion model. Predicted results of solute concentration versus Peclet number and of modified Sherwood number versus Peclet number for the network model compare favorably with experimental data for the case in which NAPL blob dissolution was negligible. Predicted results of normalized effluent concentration versus pore volume for the network were similar to the experimental data for the case in which NAPL blob dissolution occurred with time.

  19. Undead earthquakes

    Science.gov (United States)

    Musson, R. M. W.

    This short communication deals with the problem of fake earthquakes that keep returning into circulation. The particular events discussed are some very early earthquakes supposed to have occurred in the U.K., which all originate from a single enigmatic 18th century source.

  20. A simulated annealing heuristic for maximum correlation core/periphery partitioning of binary networks.

    Directory of Open Access Journals (Sweden)

    Michael Brusco

    Full Text Available A popular objective criterion for partitioning a set of actors into core and periphery subsets is the maximization of the correlation between an ideal and observed structure associated with intra-core and intra-periphery ties. The resulting optimization problem has commonly been tackled using heuristic procedures such as relocation algorithms, genetic algorithms, and simulated annealing. In this paper, we present a computationally efficient simulated annealing algorithm for maximum correlation core/periphery partitioning of binary networks. The algorithm is evaluated using simulated networks consisting of up to 2000 actors and spanning a variety of densities for the intra-core, intra-periphery, and inter-core-periphery components of the network. Core/periphery analyses of problem solving, trust, and information sharing networks for the frontline employees and managers of a consumer packaged goods manufacturer are provided to illustrate the use of the model.
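As a rough sketch of the approach (not the authors' implementation), the following builds a toy binary network with a planted five-node core and runs a simulated annealing search over core memberships with single-node flip moves, scoring each partition by the Pearson correlation between the observed ties and an ideal core/periphery pattern. All sizes, densities, and cooling parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary network with a planted 5-node core (hypothetical data).
n = 20
core_true = np.arange(5)
A = (rng.random((n, n)) < 0.05).astype(float)
A[np.ix_(core_true, core_true)] = 1.0
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)

iu = np.triu_indices(n, k=1)
obs = A[iu]

def corr(core):
    # Ideal structure: 1 for intra-core dyads, 0 otherwise (one common
    # choice treats inter core-periphery dyads as 0 as well).
    ideal = (core[iu[0]] & core[iu[1]]).astype(float)
    if ideal.std() == 0:
        return -1.0
    return np.corrcoef(obs, ideal)[0, 1]

# Simulated annealing over core membership with single-node flip moves.
core = rng.random(n) < 0.5
best, best_core = corr(core), core.copy()
T = 0.05
for step in range(5000):
    cand = core.copy()
    cand[rng.integers(n)] ^= True
    d = corr(cand) - corr(core)
    if d > 0 or rng.random() < np.exp(d / T):
        core = cand
    if corr(core) > best:
        best, best_core = corr(core), core.copy()
    T *= 0.999

print(best, np.flatnonzero(best_core))
```

For the planted structure above, the best correlation found should clearly exceed that of a random partition; production implementations such as the one in the paper add relocation neighborhoods and reheating schedules.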

  1. A simulated annealing heuristic for maximum correlation core/periphery partitioning of binary networks.

    Science.gov (United States)

    Brusco, Michael; Stolze, Hannah J; Hoffman, Michaela; Steinley, Douglas

    2017-01-01

    A popular objective criterion for partitioning a set of actors into core and periphery subsets is the maximization of the correlation between an ideal and observed structure associated with intra-core and intra-periphery ties. The resulting optimization problem has commonly been tackled using heuristic procedures such as relocation algorithms, genetic algorithms, and simulated annealing. In this paper, we present a computationally efficient simulated annealing algorithm for maximum correlation core/periphery partitioning of binary networks. The algorithm is evaluated using simulated networks consisting of up to 2000 actors and spanning a variety of densities for the intra-core, intra-periphery, and inter-core-periphery components of the network. Core/periphery analyses of problem solving, trust, and information sharing networks for the frontline employees and managers of a consumer packaged goods manufacturer are provided to illustrate the use of the model.

  2. Complex Network Simulation of Forest Network Spatial Pattern in Pearl River Delta

    Science.gov (United States)

    Zeng, Y.

    2017-09-01

Forest networks are constructed with a method and model based on the scale-free features of complex network theory, which builds on random graph theory and on dynamic network nodes whose degrees follow a power-law distribution. The model is suitable for the consistent recovery of a large, ecologically disturbed landscape such as the Pearl River Delta. The latest forest patches are available from remote sensing and GIS spatial data. A standard scale-free node-distribution model calculates the power-law parameter of the forest patch-area distribution, and the existing forest polygons, defined as nodes, yield the decay exponent of the network's degree distribution. The resulting network parameters are then transferred into real-world GIS models, and connections between nearby nodes are generated automatically by minimizing ecological-corridor cost under a least-cost rule. Following the scale-free node-distribution requirements, a small number of large aggregation points are selected as the main nodes of the future planned forest network and compared with the existing node sequence. With this approach, forest ecological projects can avoid the fragmented, scattered, and disorderly patterns of the past, and the planting costs required by earlier regular forest networks can be reduced. For ecological restoration in the tropical and subtropical areas of south China, the method provides effective guidance and demonstration for forest-entering-city projects, and a standard base datum for networking with other ecological networks (water, climate network, etc.).
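The power-law parameter of a patch-area distribution can be estimated with the standard continuous maximum-likelihood (Hill) estimator. The sketch below uses a synthetic sample with a known exponent to recover, not real Pearl River Delta data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic patch areas drawn from a power-law tail p(x) ~ x^(-alpha)
# for x >= x_min, via inverse-CDF sampling.
alpha_true, x_min, n = 2.5, 1.0, 5000
areas = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

# Continuous maximum-likelihood (Hill) estimator of the exponent.
alpha_hat = 1.0 + n / np.log(areas / x_min).sum()
print(round(alpha_hat, 2))
```

With real patch data, `x_min` must itself be chosen (e.g. by minimizing the Kolmogorov-Smirnov distance between data and fit) before the exponent is estimated.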

  3. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
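The piecewise-deterministic idea, deterministic evolution of the soluble pool punctuated by stochastic assembly events, can be illustrated with a deliberately minimal sketch. The rates, probabilities, and linear pool dynamics below are invented for illustration and are far simpler than the authors' spatially resolved model:

```python
import random

random.seed(1)

# Minimal PDMP sketch: a soluble precursor pool grows deterministically
# between stochastic assembly events; each event is either a filament
# elongation or a branching (all parameters are hypothetical).
s = 2.0         # precursor synthesis rate (deterministic flow dp/dt = s)
lam = 1.0       # rate of the Poisson jump process (events per time unit)
delta = 1.5     # precursor consumed per assembly event
q_branch = 0.3  # probability that an event is a branching (vs elongation)

t, p = 0.0, 5.0
elongations = branchings = 0
while t < 100.0:
    tau = random.expovariate(lam)   # waiting time to the next jump
    t += tau
    p += s * tau                    # deterministic drift between jumps
    if p >= delta:                  # event fires only if the pool suffices
        p -= delta
        if random.random() < q_branch:
            branchings += 1
        else:
            elongations += 1

print(elongations, branchings, round(p, 2))
```

Shifting `q_branch` relative to the elongation probability is the sketch analogue of the balance between elongation and branching that the simulation study identifies as the main determinant of network architecture.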

  4. Sensitivity of broad-band ground-motion simulations to earthquake source and Earth structure variations: an application to the Messina Straits (Italy)

    KAUST Repository

    Imperatori, W.

    2012-03-01

In this paper, we investigate ground-motion variability due to different faulting approximations and crustal-model parametrizations in the Messina Straits area (Southern Italy). Considering three 1-D velocity models proposed for this region and a total of 72 different source realizations, we compute broad-band (0-10 Hz) synthetics for Mw 7.0 events using a recently proposed fault plane geometry. We explore source complexity in terms of classic kinematic (constant rise-time and rupture speed) and pseudo-dynamic models (variable rise-time and rupture speed). Heterogeneous slip distributions are generated using a Von Karman autocorrelation function. Rise-time variability is related to slip, whereas rupture speed variations are connected to static stress drop. Boxcar, triangle and modified Yoffe are the adopted source time functions. We find that ground-motion variability associated with differences in crustal models is constant and becomes important at intermediate and long periods. On the other hand, source-induced ground-motion variability is negligible at long periods and strong at intermediate-short periods. Using our source-modelling approach and the three different 1-D structural models, we investigate shaking levels for the 1908 Mw 7.1 Messina earthquake adopting a recently proposed model for fault geometry and final slip. Our simulations suggest that peak levels in Messina and Reggio Calabria must have reached 0.6-0.7 g during this earthquake.
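Heterogeneous slip with a Von Karman autocorrelation is commonly generated by filtering white noise with the corresponding power spectrum. The sketch below shows that construction; the grid, correlation length, Hurst exponent, and target mean slip are illustrative values, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random slip field with a Von Karman power spectrum (illustrative
# parameters): correlation length `a` (km) and Hurst exponent `H`.
nx, nz, dx = 128, 64, 0.5          # grid points and spacing (km)
a, H = 10.0, 0.75

kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
kz = np.fft.fftfreq(nz, d=dx) * 2 * np.pi
K2 = kx[None, :] ** 2 + kz[:, None] ** 2
psd = (a * a) / (1.0 + K2 * a * a) ** (H + 1)   # Von Karman PSD

# Filter white noise with the square root of the PSD.
noise = rng.standard_normal((nz, nx))
slip = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real

# Shift to be non-negative and rescale to a target mean slip of 2 m.
slip -= slip.min()
slip *= 2.0 / slip.mean()
print(slip.shape, round(slip.mean(), 2))
```

In a kinematic source model of the kind described above, rise time and rupture speed would then be derived from this slip field and from static stress drop, respectively.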

  5. DC Collection Network Simulation for Offshore Wind Farms

    DEFF Research Database (Denmark)

    Vogel, Stephan; Rasmussen, Tonny Wederberg; El-Khatib, Walid Ziad

    2015-01-01

    The possibility to connect offshore wind turbines with a collection network based on Direct Current (DC), instead of Alternating Current (AC), gained attention in the scientific and industrial environment. There are many promising properties of DC components that could be beneficial such as...... are identified and the power flow in steady-state and during fault conditions is investigated. The efficiency of the network is determined in full-load conditions. Furthermore, key design aspects of such a grid are illustrated and issues regarding ripple current and converter design are treated. The overall...

  6. Analog earthquakes

    International Nuclear Information System (INIS)

    Hofmann, R.B.

    1995-01-01

    Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modfications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquified natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository

  7. Application of neural network technology to setpoint control of a simulated reactor experiment loop

    International Nuclear Information System (INIS)

    Cordes, G.A.; Bryan, S.R.; Powell, R.H.; Chick, D.R.

    1991-01-01

    This paper describes the design, implementation, and application of artificial neural networks to achieve temperature and flow rate control for a simulation of a typical experiment loop in the Advanced Test Reactor (ATR) located at the Idaho National Engineering Laboratory (INEL). The goal of the project was to research multivariate, nonlinear control using neural networks. A loop simulation code was adapted for the project and used to create a training set and test the neural network controller for comparison with the existing loop controllers. The results for the best neural network design are documented and compared with existing loop controller action. The neural network was shown to be as accurate at loop control as the classical controllers in the operating region represented by the training set. 5 refs., 8 figs., 3 tabs

  8. Overview of DOS attacks on wireless sensor networks and experimental results for simulation of interference attacks

    Directory of Open Access Journals (Sweden)

    Željko Gavrić

    2018-01-01

Full Text Available Wireless sensor networks are now used in various fields. The information transmitted in wireless sensor networks is very sensitive, so security is a very important issue. DOS (denial of service) attacks are a fundamental threat to the functioning of wireless sensor networks. This paper describes some of the most common DOS attacks and potential methods of protection against them. The case study examines one of the most frequent attacks on wireless sensor networks - the interference attack. In the introduction, the authors assume that an interference attack can cause significant obstruction of wireless sensor networks. This assumption is then confirmed in the case study through a simulation scenario and simulation results.

  9. Simulation, State Estimation and Control of Nonlinear Superheater Attemporator using Neural Networks

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Sørensen, O.

    2000-01-01

    This paper considers the use of neural networks for nonlinear state estimation, system identification and control. As a case study we use data taken from a nonlinear injection valve for a superheater attemporator at a power plant. One neural network is trained as a nonlinear simulation model...... of the process, then another network is trained to act as a combined state and parameter estimator for the process. The observer network incorporates smoothing of the parameter estimates in the form of regularization. A pole placement controller is designed which takes advantage of the sample......-by-sample linearizations and state estimates provided by the observer network. Simulation studies show that the nonlinear observer-based control loop performs better than a similar control loop based on a linear observer....

  10. Simulation, State Estimation and Control of Nonlinear Superheater Attemporator using Neural Networks

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Sørensen, O.

    1999-01-01

    This paper considers the use of neural networks for nonlinear state estimation, system identification and control. As a case study we use data taken from a nonlinear injection valve for a superheater attemporator at a power plant. One neural network is trained as a nonlinear simulation model...... of the process, then another network is trained to act as a combined state and parameter estimator for the process. The observer network incorporates smoothing of the parameter estimates in the form of regularization. A pole placement controller is designed which takes advantage of the sample......-by-sample linearizations and state estimates provided by the observer network. Simulation studies show that the nonlinear observer-based control loop performs better than a similar control loop based on a linear observer....

  11. Radial basis function (RBF) neural network control for mechanical systems design, analysis and Matlab simulation

    CERN Document Server

    Liu, Jinkun

    2013-01-01

    Radial Basis Function (RBF) Neural Network Control for Mechanical Systems is motivated by the need for systematic design approaches to stable adaptive control system design using neural network approximation-based techniques. The main objectives of the book are to introduce the concrete design methods and MATLAB simulation of stable adaptive RBF neural control strategies. In this book, a broad range of implementable neural network control design methods for mechanical systems are presented, such as robot manipulators, inverted pendulums, single link flexible joint robots, motors, etc. Advanced neural network controller design methods and their stability analysis are explored. The book provides readers with the fundamentals of neural network control system design.   This book is intended for the researchers in the fields of neural adaptive control, mechanical systems, Matlab simulation, engineering design, robotics and automation. Jinkun Liu is a professor at Beijing University of Aeronautics and Astronauti...

  12. Earthquake Early Warning: Real-time Testing of an On-site Method Using Waveform Data from the Southern California Seismic Network

    Science.gov (United States)

    Solanki, K.; Hauksson, E.; Kanamori, H.; Wu, Y.; Heaton, T.; Boese, M.

    2007-12-01

We have implemented an on-site early warning algorithm using the infrastructure of the Caltech/USGS Southern California Seismic Network (SCSN). We are evaluating the real-time performance of the software system and the algorithm for rapid assessment of earthquakes. In addition, we are interested in understanding what parts of the SCSN need to be improved to make early warning practical. Our EEW processing system is composed of many independent programs that process waveforms in real-time. The codes were generated by using a software framework. The Pd (maximum displacement amplitude of P wave during the first 3 sec) and Tau-c (a period parameter during the first 3 sec) values determined during the EEW processing are being forwarded to the California Integrated Seismic Network (CISN) web page for independent evaluation of the results. The on-site algorithm measures the amplitude of the P-wave (Pd) and the frequency content of the P-wave during the first three seconds (Tau-c). The Pd and the Tau-c values make it possible to discriminate between a variety of events such as large distant events, nearby small events, and potentially damaging nearby events. The Pd can be used to infer the expected maximum ground shaking. The method relies on data from a single station although it will become more reliable if readings from several stations are associated. To eliminate false triggers from stations with high background noise level, we have created a per-station Pd threshold configuration for the Pd/Tau-c algorithm. To determine appropriate values for the Pd threshold we calculate Pd thresholds for stations based on the information from the EEW logs. We have operated our EEW test system for about a year and recorded numerous earthquakes in the magnitude range from M3 to M5. Two recent examples are a M4.5 earthquake near Chatsworth and a M4.7 earthquake near Elsinore. 
In both cases, the Pd and Tau-c parameters were determined successfully within 10 to 20 sec of the arrival of the
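The two on-site parameters can be computed from the first 3 s of P-wave displacement as in the sketch below. The sinusoidal record stands in for real data, and actual processing also involves integration from acceleration and high-pass filtering, which are omitted here:

```python
import numpy as np

# Pd / Tau-c measurement on a synthetic P-wave displacement record.
dt = 0.01
t = np.arange(0.0, 3.0, dt)              # first 3 s after the P arrival
f = 2.0                                  # dominant frequency of the test signal (Hz)
u = 0.5e-2 * np.sin(2 * np.pi * f * t)   # synthetic displacement (m)

Pd = np.abs(u).max()                     # peak displacement amplitude
v = np.gradient(u, dt)                   # velocity by numerical differentiation
r = (v @ v) / (u @ u)                    # integrated v^2 over integrated u^2
tau_c = 2 * np.pi / np.sqrt(r)           # period parameter (s)

print(Pd, tau_c)
```

For a pure sinusoid of frequency `f`, `tau_c` reduces to `1/f`, which makes the sketch easy to check; on real records a longer-period `tau_c` combined with a large `Pd` flags a potentially damaging nearby event.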

  13. Efficient heuristics for simulating rare events in queuing networks

    NARCIS (Netherlands)

    Zaburnenko, T.S.

    2008-01-01

    In this thesis we propose state-dependent importance sampling heuristics to estimate the probability of population overflow in queuing networks. These heuristics capture state-dependence along the boundaries (when one or more queues are almost empty) which is crucial for the asymptotic efficiency of

  14. Simulation of traffic capacity of inland waterway network

    NARCIS (Netherlands)

    Chen, L.; Mou, J.; Ligteringen, H.

    2013-01-01

Inland waterborne transportation is viewed as an economic, safe and environmentally friendly alternative to the congested road network. Traffic capacity is a critical indicator of inland shipping performance. In practice, under the interaction of complicated factors, it is challenging to

  15. Numerical simulation with finite element and artificial neural network ...

    Indian Academy of Sciences (India)

Further, this database, after the neural network training, is used to analyse measured material properties of different test pieces. The ANN predictions are reconfirmed with contact-type finite element analysis for an arbitrarily selected test sample. The methodology evolved in this work can be extended to predict material ...

  16. DC Collection Network Simulation for Offshore Wind Farms

    DEFF Research Database (Denmark)

    Vogel, Stephan; Rasmussen, Tonny Wederberg; El-Khatib, Walid Ziad

    2015-01-01

    The possibility to connect offshore wind turbines with a collection network based on Direct Current (DC), instead of Alternating Current (AC), gained attention in the scientific and industrial environment. There are many promising properties of DC components that could be beneficial such as...

  17. Efficient Heuristics for Simulating Population Overflow in Parallel Networks

    NARCIS (Netherlands)

    Zaburnenko, T.S.; Nicola, V.F.

    2006-01-01

In this paper we propose a state-dependent importance sampling heuristic to estimate the probability of population overflow in networks of parallel queues. This heuristic approximates the "optimal" state-dependent change of measure without the need for costly optimization involved in other

  18. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation’s critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to immediately pursue is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator via Dr. George Riley. This project will have mutual benefits in bolstering both institutions’ expertise and reputation in the field of scalable simulation for cyber-security studies. This project promises to address high fidelity simulations of large-scale wireless networks. This proposed collaboration is directly in line with Georgia Tech’s goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and Georgia Tech Information Security Center along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area with promise for large-scale assessment of cyber security needs and vulnerabilities of our nation’s critical cyber infrastructures exposed to wireless communications.

  19. Effect of coseismic and postseismic deformation on homogeneous and layered half-space and spherical analysis: Model simulation of the 2006 Java, Indonesia, tsunami earthquake

    Science.gov (United States)

    Gunawan, Endra; Meilano, Irwan; Hanifa, Nuraini Rahma; Widiyantoro, Sri

    2017-12-01

    We simulate surface displacements calculated on homogeneous and layered half-space and spherical models as applied to the coseismic and postseismic (afterslip and viscoelastic relaxation) of the 2006 Java tsunami earthquake. Our analysis of coseismic and afterslip deformation suggests that the homogeneous half-space model generates a much broader displacement effect than the layered half-space and spherical models. Also, though the result for surface displacements is similar for the layered half-space and spherical models, noticeable displacements still occurred on top of the coseismic fault patches. Our displacement result in afterslip modeling suggests that significant displacements occurred on top of the main afterslip fault patches, differing from the viscoelastic relaxation model, which has displacements in the front region of coseismic fault patches. We propose this characteristic as one of the important features differentiating a postseismic deformation signal from afterslip and viscoelastic relaxation detected by geodetic data.

  20. Simulation and measurement of optical access network with different types of optical-fiber amplifiers

    Science.gov (United States)

    Latal, Jan; Vogl, Jan; Koudelka, Petr; Vitasek, Jan; Siska, Petr; Liner, Andrej; Papes, Martin; Vasinek, Vladimir

    2012-01-01

Optical access networks are nowadays developing swiftly in the telecommunications field. These networks can provide higher data transfer rates and have great potential for the future in terms of transmission possibilities. Many local internet providers have responded to these facts and begun gradually installing optical access networks into their originally built networks, mostly based on wireless communication. This has broadened the possibilities for end-users in terms of high data rates and new services such as Triple Play, IPTV (Internet Protocol television), etc. However, this expansion also raises the question of the reach of these networks. Big cities such as Prague, Brno, Ostrava or Olomouc cannot be simply covered, because of their size and because of internal regulations imposed by various organizations in each city. The standard logical and physical reach of an EPON (IEEE 802.3ah - Ethernet Passive Optical Network) optical access network is about 20 km. However, for networks based on Wavelength Division Multiplexing, the reach can be up to 80 km if an optical-fiber amplifier is inserted into the network. This article deals with the simulation of different types of amplifiers for a WDM-PON (Wavelength Division Multiplexing-Passive Optical Network) network in the software application Optiwave OptiSystem; the simulated values are then compared with real measurements.

  1. COMPLEX NETWORK SIMULATION OF FOREST NETWORK SPATIAL PATTERN IN PEARL RIVER DELTA

    Directory of Open Access Journals (Sweden)

    Y. Zeng

    2017-09-01

Full Text Available Forest networks are constructed with a method and model based on the scale-free features of complex network theory, which builds on random graph theory and on dynamic network nodes whose degrees follow a power-law distribution. The model is suitable for the consistent recovery of a large, ecologically disturbed landscape such as the Pearl River Delta. The latest forest patches are available from remote sensing and GIS spatial data. A standard scale-free node-distribution model calculates the power-law parameter of the forest patch-area distribution, and the existing forest polygons, defined as nodes, yield the decay exponent of the network's degree distribution. The resulting network parameters are then transferred into real-world GIS models, and connections between nearby nodes are generated automatically by minimizing ecological-corridor cost under a least-cost rule. Following the scale-free node-distribution requirements, a small number of large aggregation points are selected as the main nodes of the future planned forest network and compared with the existing node sequence. With this approach, forest ecological projects can avoid the fragmented, scattered, and disorderly patterns of the past, and the planting costs required by earlier regular forest networks can be reduced. For ecological restoration in the tropical and subtropical areas of south China, the method provides effective guidance and demonstration for forest-entering-city projects, and a standard base datum for networking with other ecological networks (water, climate network, etc.).

  2. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah; Ross, Robert; Carns, Philip

    2016-05-15

As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model with the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model size up to an unprecedented 1 million compute nodes; through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.
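ROSS/CODES rely on optimistic parallel synchronization across many MPI ranks, but the core bookkeeping of a network discrete-event model can be illustrated with a minimal sequential event loop. The topology, per-hop delay, and packet counts below are hypothetical:

```python
import heapq

# Minimal sequential discrete-event sketch: packets cross a fixed
# 3-hop path, and we record per-packet end-to-end latency.
LINK_DELAY = 0.5   # hypothetical per-hop latency
HOPS = 3

events = []        # priority queue of (time, seq, hop, packet_id)
seq = 0

def schedule(time, hop, pid):
    global seq
    heapq.heappush(events, (time, seq, hop, pid))  # seq breaks time ties
    seq += 1

sent, latency = {}, {}
for pid in range(5):               # inject five packets, staggered starts
    sent[pid] = pid * 0.1
    schedule(sent[pid], 0, pid)

processed = 0
while events:
    time, _, hop, pid = heapq.heappop(events)
    processed += 1
    arrival = time + LINK_DELAY    # packet finishes the current hop
    if hop + 1 < HOPS:
        schedule(arrival, hop + 1, pid)
    else:
        latency[pid] = arrival - sent[pid]

print(processed, latency)
```

A parallel optimistic simulator partitions such an event queue across ranks and rolls back events that arrive out of timestamp order, which is what makes the million-node scale reported above feasible.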

  3. Satellite Network Performance Measurements Using Simulated Multi-User Internet Traffic

    Science.gov (United States)

    Kruse, Hans; Allman, Mark; Griner, Jim; Ostermann, Shawn; Helvey, Eric

    1999-01-01

    As a number of diverse satellite systems (both Low Earth Orbit and Geostationary systems) are being designed and deployed, it becomes increasingly important to be able to test these systems under realistic traffic loads. While software simulations can provide valuable input into the system design process, it is crucial that the physical system be tested so that actual network devices can be employed and tuned. These tests need to utilize traffic patterns that closely mirror the expected user load, without the need to actually deploy an end-user network for the test. In this paper, we present trafgen. trafgen uses statistical information about the characteristics of sampled network traffic to emulate the same type of traffic over the test network. This paper compares sampled terrestrial network traffic with emulated satellite network traffic over the NASA ACTS satellite.

  4. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  5. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  6. Multiple Linear Regression Model Based on Neural Network and Its Application in the MBR Simulation

    Directory of Open Access Journals (Sweden)

    Chunqing Li

    2012-01-01

    Full Text Available Computer simulation of the membrane bioreactor (MBR) has become a research focus in MBR studies. To compensate for the drawbacks of physical experimentation, such as long test periods, high cost, and sealed equipment that cannot be observed directly, this paper builds on an in-depth study of the mathematical model of the MBR and combines it with neural network theory to propose a fast, efficient, and well-visualized three-dimensional simulation system for MBR wastewater treatment. The system is developed with hybrid programming in VC++ and OpenGL, using a neural-network-based multifactor linear regression model of the factors affecting MBR membrane flux, together with integer-instead-of-float modeling and quadtree recursion. Experiments show that a three-dimensional simulation system built on these models and methods provides inspiration and a reference for future research and application of MBR simulation technology.
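A multifactor linear regression model of this kind can be sketched as a single linear neuron trained with the delta rule, i.e. gradient descent on squared error. This is a generic illustration of combining regression with a neural-network training rule, not the paper's actual model or code; `fit_linear` and its parameters are assumptions.

```python
import numpy as np

def fit_linear(X, y, lr=0.05, epochs=5000):
    """Fit y ~ X @ w + b with the delta rule (gradient descent on MSE).

    X: (n, d) matrix of influencing factors, y: (n,) observed responses.
    Returns the regression weights w and intercept b.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        err = X @ w + b - y          # prediction error per sample
        w -= lr * X.T @ err / n      # gradient step for the weights
        b -= lr * err.mean()         # gradient step for the intercept
    return w, b
```

For flux prediction, the columns of `X` would hold the measured operating factors and `y` the observed membrane flux; the fitted `w` then quantifies each factor's linear influence.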

  7. XNsim: Internet-Enabled Collaborative Distributed Simulation via an Extensible Network

    Science.gov (United States)

    Novotny, John; Karpov, Igor; Zhang, Chendi; Bedrossian, Nazareth S.

    2007-01-01

    In this paper, the XNsim approach to achieving Internet-enabled, dynamically scalable, collaborative distributed simulation is presented. With this approach, a complete simulation can be assembled from shared component subsystems written in different formats that run on different computing platforms, with different sampling rates, in different geographic locations, and over single or multiple networks. The subsystems interact securely with each other via the Internet, and the simulation topology can be modified dynamically. The distributed simulation uses a combination of hub-and-spoke and peer-to-peer network topologies. A proof-of-concept demonstrator is also presented; it can be accessed at http://www.jsc.draper.com/xn, which hosts various examples of Internet-enabled simulations.
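The hybrid topology mentioned above can be sketched as a hub-and-spoke control plane (subsystems register with a central hub) combined with a peer-to-peer data plane (subsystems exchange simulation data directly using the peer endpoints the hub hands out). The class and method names are illustrative assumptions, not the actual XNsim API.

```python
class Hub:
    """Central registry: the hub-and-spoke control plane."""

    def __init__(self):
        self.registry = {}   # subsystem name -> network endpoint

    def register(self, name, endpoint):
        # Record the newcomer and return the current set of peers.
        self.registry[name] = endpoint
        return {k: v for k, v in self.registry.items() if k != name}

class Subsystem:
    """A component simulation that talks to peers directly once registered."""

    def __init__(self, name, endpoint, hub):
        self.name, self.endpoint = name, endpoint
        self.peers = hub.register(name, endpoint)   # control traffic via hub

    def discover(self, hub):
        # Refresh the peer list when the topology changes dynamically.
        self.peers = {k: v for k, v in hub.registry.items() if k != self.name}
```

Because only registration goes through the hub, subsystems that join later are picked up by a `discover` call, which is one simple way to realize the dynamically modifiable topology the abstract describes.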

  8. Simulation of Radiation Heat Transfer in a VAR Furnace Using an Electrical Resistance Network

    Science.gov (United States)

    Ballantyne, A. Stewart

    The use of electrical resistance networks to simulate heat transfer is a well-known analytical technique that greatly simplifies the solution of radiation heat transfer problems. In a VAR furnace, radiative heat transfer occurs among the ingot, electrode, and crucible wall, and with the arc when the latter is present during melting. To explore the relative heat exchange between these elements, a resistive
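In the standard resistance-network analogy for gray-body radiation, each surface contributes a surface resistance (1-ε)/(εA) and each surface pair a space resistance 1/(A·F), with blackbody emissive powers σT⁴ as the node potentials. A minimal two-surface sketch of this technique (not the multi-element furnace network the abstract describes) is:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def two_surface_exchange(T1, T2, A1, A2, eps1, eps2, F12):
    """Net radiative heat flow (W) between two gray surfaces.

    The three resistances in series: surface 1, the space between
    the surfaces, and surface 2; the driving potential is the
    difference in blackbody emissive powers sigma*T^4.
    """
    R = ((1 - eps1) / (eps1 * A1)      # surface resistance of body 1
         + 1.0 / (A1 * F12)            # space resistance (view factor F12)
         + (1 - eps2) / (eps2 * A2))   # surface resistance of body 2
    return SIGMA * (T1**4 - T2**4) / R
```

A furnace model with ingot, electrode, crucible wall, and arc would extend this to a network of several nodes and solve the resulting linear system for the radiosities, but the series two-surface case already shows how emissivities and view factors enter as resistances.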