WorldWideScience

Sample records for adaptively smoothed seismicity

  1. Adaptively Smoothed Seismicity Earthquake Forecasts for Italy

    CERN Document Server

    Werner, M J; Jackson, D D; Kagan, Y Y; Wiemer, S

    2010-01-01

    We present a model for estimating the probabilities of future earthquakes of magnitudes m > 4.95 in Italy. The model, a slightly modified version of the one proposed for California by Helmstetter et al. (2007) and Werner et al. (2010), approximates seismicity by a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog and a longer instrumental and historical catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and trustworthy, we used small earthquakes m>2.95 to illuminate active fault structur...

  2. Adaptively smoothed seismicity earthquake forecasts for Italy

    Directory of Open Access Journals (Sweden)

    Yan Y. Kagan

    2010-11-01

We present a model for estimation of the probabilities of future earthquakes of magnitudes m ≥ 4.95 in Italy. This model is a modified version of that proposed for California, USA, by Helmstetter et al. [2007] and Werner et al. [2010a], and it approximates seismicity using a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We have estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog, and a longer instrumental and historical catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and reliable, we used small earthquakes of m ≥ 2.95 to reveal active fault structures and probable future epicenters. By calibrating the model with these two catalogs of different durations to create two forecasts, we intend to quantify the loss (or gain) of predictability incurred when only a short, but recent, data record is available. Both forecasts were scaled to five and ten years, and have been submitted to the Italian prospective forecasting experiment of the global Collaboratory for the Study of Earthquake Predictability (CSEP). An earlier forecast from the model was submitted by Helmstetter et al. [2007] to the Regional Earthquake Likelihood Model (RELM) experiment in California, and with more than half of the five-year experimental period over, the forecast has performed better than the others.
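The adaptive-kernel idea in the two records above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the authors' implementation: here each event's bandwidth is simply its distance to the k-th nearest neighbouring event, and the forecast density is a sum of per-event Gaussian kernels. The synthetic catalog and all parameter values are invented for the example.

```python
import numpy as np

def adaptive_kernel_density(epicenters, grid, k=2):
    """Kernel estimate of spatial earthquake density with per-event adaptive
    bandwidths (distance to the k-th nearest neighbour).

    epicenters : (n, 2) array of event coordinates (e.g. km east/north)
    grid       : (m, 2) array of evaluation points
    """
    # pairwise event-to-event distances choose each event's bandwidth
    d = np.linalg.norm(epicenters[:, None, :] - epicenters[None, :, :], axis=-1)
    d.sort(axis=1)
    h = np.maximum(d[:, k], 1e-6)     # column 0 is the self-distance (zero)

    # sum an isotropic 2-D Gaussian kernel per event at every grid node
    diff = grid[:, None, :] - epicenters[None, :, :]
    r2 = np.sum(diff**2, axis=-1)
    dens = np.sum(np.exp(-r2 / (2 * h**2)) / (2 * np.pi * h**2), axis=1)
    return dens / len(epicenters)     # per-event normalised density

rng = np.random.default_rng(0)
events = rng.normal(0.0, 5.0, size=(50, 2))       # synthetic epicentres, km
grid = np.stack(np.meshgrid(np.linspace(-15, 15, 31),
                            np.linspace(-15, 15, 31)), axis=-1).reshape(-1, 2)
density = adaptive_kernel_density(events, grid, k=2)
```

In the papers the bandwidth (here the choice of k) is tuned by maximising retrospective forecast likelihood rather than fixed a priori.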

  3. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    Science.gov (United States)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-03-01

When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments is conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicate that the ASC is superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter, compared with other methods such as generalized cross-validation or the mean squared error criterion. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
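The classic Laplacian smoothness constraint (LSC) that the record takes as its baseline can be sketched as a Tikhonov-regularized least-squares problem. This is a toy 1-D illustration of the LSC, not the authors' ASC; the Green's function matrix and slip model below are random stand-ins.

```python
import numpy as np

# Toy linear slip inversion d = G s with a Laplacian smoothness constraint:
# minimise ||G s - d||^2 + alpha^2 ||L s||^2  =>  (G^T G + alpha^2 L^T L) s = G^T d

def laplacian_1d(n):
    """Second-difference (Laplacian) operator on n fault patches."""
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

def regularized_slip(G, d, alpha):
    L = laplacian_1d(G.shape[1])
    A = G.T @ G + alpha**2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

rng = np.random.default_rng(1)
n_patch, n_obs = 20, 40
true_slip = np.exp(-0.5 * ((np.arange(n_patch) - 10) / 3.0) ** 2)  # smooth slip patch
G = rng.normal(size=(n_obs, n_patch))          # stand-in Green's function matrix
d = G @ true_slip + 0.05 * rng.normal(size=n_obs)
est = regularized_slip(G, d, alpha=1.0)
```

The paper's ASC modifies the regularization matrix adaptively and selects alpha by Helmert variance component estimation; here alpha is simply fixed.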

  4. Seismic hazard estimation of northern Iran using smoothed seismicity

    Science.gov (United States)

    Khoshnevis, Naeem; Taborda, Ricardo; Azizzadeh-Roodpish, Shima; Cramer, Chris H.

    2017-07-01

    This article presents a seismic hazard assessment for northern Iran, where a smoothed seismicity approach has been used in combination with an updated seismic catalog and a ground motion prediction equation recently found to yield good fit with data. We evaluate the hazard over a geographical area including the seismic zones of Azerbaijan, the Alborz Mountain Range, and Kopeh-Dagh, as well as parts of other neighboring seismic zones that fall within our region of interest. In the chosen approach, seismic events are not assigned to specific faults but assumed to be potential seismogenic sources distributed within regular grid cells. After performing the corresponding magnitude conversions, we decluster both historical and instrumental seismicity catalogs to obtain earthquake rates based on the number of events within each cell, and smooth the results to account for the uncertainty in the spatial distribution of future earthquakes. Seismicity parameters are computed for each seismic zone separately, and for the entire region of interest as a single uniform seismotectonic region. In the analysis, we consider uncertainties in the ground motion prediction equation, the seismicity parameters, and combine the resulting models using a logic tree. The results are presented in terms of expected peak ground acceleration (PGA) maps and hazard curves at selected locations, considering exceedance probabilities of 2 and 10% in 50 years for rock site conditions. According to our results, the highest levels of hazard are observed west of the North Tabriz and east of the North Alborz faults, where expected PGA values are between about 0.5 and 1 g for 10 and 2% probability of exceedance in 50 years, respectively. We analyze our results in light of similar estimates available in the literature and offer our perspective on the differences observed. 
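The count-and-smooth step described above can be illustrated with a Frankel-style fixed-bandwidth smoother: count declustered events per grid cell, then replace each cell's count with a Gaussian-weighted average of its neighbours. This is a generic sketch, not this study's exact procedure; the catalog, grid and correlation distance are synthetic stand-ins, and a real implementation would convert degree spacing to kilometres properly.

```python
import numpy as np

def smoothed_rates(xs, ys, x_edges, y_edges, c_km=50.0, cell_km=10.0):
    """Count events per cell, then smooth with an isotropic Gaussian of
    correlation distance c_km, approximating cell spacing as cell_km."""
    counts, _, _ = np.histogram2d(xs, ys, bins=[x_edges, y_edges])
    n0, n1 = counts.shape
    i0, i1 = np.meshgrid(np.arange(n0), np.arange(n1), indexing="ij")
    smoothed = np.zeros_like(counts)
    for a in range(n0):
        for b in range(n1):
            r2 = ((i0 - a) ** 2 + (i1 - b) ** 2) * cell_km**2
            w = np.exp(-r2 / c_km**2)              # Gaussian smoothing weights
            smoothed[a, b] = np.sum(counts * w) / np.sum(w)
    return counts, smoothed

rng = np.random.default_rng(2)
lons = rng.normal(45.0, 0.5, 300)                  # hypothetical event cluster
lats = rng.normal(37.0, 0.5, 300)
counts, smooth = smoothed_rates(lons, lats,
                                np.linspace(43, 47, 21), np.linspace(35, 39, 21))
```

The smoothed grid spreads the observed rates into neighbouring cells, which is how the study accounts for uncertainty in the locations of future earthquakes.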
We find our results to be helpful in understanding seismic hazard for northern Iran, but recognize that additional efforts are necessary to


  6. Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis

    Science.gov (United States)

    Woo, Gordon

    2017-04-01

For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process, and that the spatial distribution of epicentres can be represented by a set of polygonal source zones, within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Following the Amatrice earthquake of 24 August 2016, the damaging earthquakes that struck Central Italy over the subsequent months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task; it also has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic trees for future probabilistic seismic hazard analyses at critical installations within Europe. In this paper, various salient European applications are given.

  7. Did you smooth your well logs the right way for seismic interpretation?

    Science.gov (United States)

    Duchesne, Mathieu J.; Gaillot, Philippe

    2011-12-01

    Correlations between physical properties and seismic reflection data are useful to determine the geological nature of seismic reflections and the lateral extent of geological strata. The difference in resolution between well logs and seismic data is a major hurdle faced by seismic interpreters when tying both data sets. In general, log data have a resolution of at least two orders of magnitude greater than seismic data. Smoothing physical property logs improves correlation at the seismic scale. Three different approaches were used and compared to smooth a density log: binomial filtering, seismic wavelet filtering and discrete wavelet transform (DWT) filtering. Regression plots between the density logs and the acoustic impedance show that the data smoothed with the DWT is the only method that preserves the original relationship between the raw density data and the acoustic impedance. Smoothed logs were then used to generate synthetic seismograms that were tied to seismic data at the borehole site. Best ties were achieved using the synthetic seismogram computed with the density log processed with the DWT. The good performance of the DWT is explained by its adaptive multi-scale characteristic which preserved significant local changes of density on the high-resolution data series that were also pictured at the seismic scale. Since synthetic seismograms are generated using smoothed logs, the choice of the smoothing method impacts on the quality of seismic-to-well ties. This ultimately can have economical implications during hydrocarbon exploration or exploitation phases.
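The DWT filtering the record favours can be illustrated with the simplest wavelet, the Haar transform: decompose the log, discard the detail coefficients, and reconstruct, which leaves a low-pass version at seismic scale. This is a minimal NumPy sketch (discarding all details reduces to blockwise means), not the paper's filter; a full implementation would use a library such as PyWavelets with a longer wavelet and selective thresholding. The synthetic density log is invented.

```python
import numpy as np

def haar_dwt_smooth(x, levels=3):
    """Smooth a well log by a multilevel Haar DWT in which every detail
    coefficient is zeroed before reconstruction (a pure low-pass).
    Length of x must be divisible by 2**levels."""
    a = np.asarray(x, dtype=float)
    for _ in range(levels):                   # Haar analysis: keep approximations
        pairs = a.reshape(-1, 2)
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    for _ in range(levels):                   # synthesis with zeroed details
        a = np.repeat(a, 2) / np.sqrt(2)
    return a

rng = np.random.default_rng(4)
n = 128                                       # synthetic density log, g/cm^3
density_log = (2.3 + 0.1 * np.sin(np.linspace(0, 4 * np.pi, n))
               + 0.05 * rng.normal(size=n))
smoothed_log = haar_dwt_smooth(density_log, levels=3)
```

The transform preserves the log's mean while suppressing sub-seismic-resolution variability, which is the property the paper exploits when tying logs to seismic data.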

  8. Assessing a 3D smoothed seismicity model of induced earthquakes

    Science.gov (United States)

    Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan

    2016-04-01

    As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
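The two ingredients named above can be sketched directly. The first function is a Shapiro-style seismogenic-index estimate of the number of induced events above a magnitude threshold during injection; the second is an Omori-style rate decay such as the model applies after shut-in. Parameter values are invented for illustration, and this is not the authors' calibrated model.

```python
import numpy as np

def expected_induced_events(v_injected_m3, sigma, b, m_min):
    """Seismogenic-index estimate (Shapiro-style) of the number of induced
    events with magnitude >= m_min while injecting a fluid volume V:
    N = V * 10**(sigma - b*m_min), with site parameters sigma and b."""
    return v_injected_m3 * 10.0 ** (sigma - b * m_min)

def post_shutin_rate(r0, t_days, c=0.2, p=1.3):
    """Omori-style decay of the induced seismicity rate after shut-in
    (illustrative c and p values)."""
    return r0 * (1.0 + t_days / c) ** (-p)

n_during = expected_induced_events(1e4, sigma=-1.0, b=1.0, m_min=1.0)
r_day1 = post_shutin_rate(10.0, 1.0)
```

A three-dimensional forecast as in the record would additionally smooth past event locations in space and time and weight recent events more heavily.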

  9. An Adaptable Seismic Data Format

    Science.gov (United States)

    Krischer, Lion; Smith, James; Lei, Wenjie; Lefebvre, Matthieu; Ruan, Youyi; de Andrade, Elliott Sales; Podhorszki, Norbert; Bozdağ, Ebru; Tromp, Jeroen

    2016-11-01

We present ASDF, the Adaptable Seismic Data Format, a modern and practical data format for all branches of seismology and beyond. The growing volume of freely available data coupled with ever-expanding computational power opens avenues to tackle larger and more complex problems. Current bottlenecks include inefficient resource usage and insufficient data organization. Properly scaling a problem requires the resolution of both these challenges, and existing data formats are no longer up to the task. ASDF stores any number of synthetic, processed or unaltered waveforms in a single file. A key improvement compared to existing formats is the inclusion of comprehensive meta information, such as event or station information, in the same file. It is also usable for any non-waveform data, for example, cross-correlations, adjoint sources or receiver functions. Last but not least, full provenance information can be stored alongside each item of data, thereby enhancing reproducibility and accountability. Any data set in our proposed format is self-describing and can be readily exchanged with others, facilitating collaboration. The utilization of the HDF5 container format grants efficient and parallel I/O operations, integrated compression algorithms and check sums to guard against data corruption. To avoid reinventing the wheel and to build upon past developments, we use existing standards like QuakeML, StationXML, W3C PROV and HDF5 wherever feasible. Usability and tool support are crucial for any new format to gain acceptance. We developed mature C/Fortran and Python based APIs coupling ASDF to the widely used SPECFEM3D_GLOBE and ObsPy toolkits.

  10. Image segmentation on adaptive edge-preserving smoothing

    Science.gov (United States)

    He, Kun; Wang, Dan; Zheng, Xiuqing

    2016-09-01

Nowadays, typical active contour models are widely applied in image segmentation. However, they perform badly on real images with inhomogeneous subregions. In order to overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, the paper analyzes the edge-preserving smoothing conditions for image segmentation and constructs an edge-preserving smoothing model inspired by total variation. The proposed model has the ability to smooth inhomogeneous subregions and preserve edges. Then, a clustering algorithm, which reasonably trades off edge preservation and subregion smoothing according to the local information, is employed to learn the edge-preserving parameter adaptively. Finally, according to the confidence level of the segmentation subregions, the paper constructs a smoothing convergence condition to avoid oversmoothing. Experiments indicate that the proposed algorithm has superior performance in precision, recall, and F-measure compared with other segmentation algorithms, and it is insensitive to noise and inhomogeneous regions.
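A classic smoother in the same spirit as the model described above is Perona-Malik anisotropic diffusion: diffuse strongly where gradients are small (noise, inhomogeneous subregions) and weakly across strong gradients (edges). This sketch is not the paper's algorithm, and the test image and parameters are invented.

```python
import numpy as np

def edge_preserving_smooth(img, kappa=0.1, step=0.2, n_iter=50):
    """Perona-Malik-style diffusion: the conductance g(d) shuts the
    diffusion off across jumps much larger than kappa, so edges survive."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # neighbour differences with zero-flux boundaries
        dn = np.roll(u, -1, 0) - u; dn[-1] = 0
        ds = np.roll(u, 1, 0) - u;  ds[0] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(5)
img = np.zeros((40, 80)); img[:, 40:] = 1.0       # two subregions with an edge
noisy = img + 0.05 * rng.normal(size=img.shape)
den = edge_preserving_smooth(noisy, kappa=0.1, step=0.2, n_iter=50)
```

After smoothing, within-region noise is strongly reduced while the step between the two subregions remains, which is the behaviour the segmentation algorithm relies on.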

  11. Numerical modeling of seismic waves using frequency-adaptive meshes

    Science.gov (United States)

    Hu, Jinyin; Jia, Xiaofeng

    2016-08-01

An improved modeling algorithm using frequency-adaptive meshes is applied to meet the computational requirements of all seismic frequency components. It automatically adopts coarse meshes for low-frequency computations and fine meshes for high-frequency computations. The grid intervals are adaptively calculated based on a smooth inversely proportional function of grid size with respect to the frequency. In regular grid-based methods, a uniform or non-uniform mesh is used for frequency-domain wave propagators and is fixed for all frequencies. A mesh that is too coarse results in inaccurate high-frequency wavefields and unacceptable numerical dispersion; on the other hand, an overly fine mesh causes storage and computational burdens as well as invalid propagation angles of low-frequency wavefields. Experiments on the Padé generalized screen propagator indicate that the adaptive mesh effectively resolves these drawbacks of regular fixed-mesh methods, thus accurately computing the wavefield and its propagation angle in a wide frequency band. Several synthetic examples also demonstrate its feasibility for seismic modeling and migration.

  12. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case, where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions, and its computational complexity is generally O(n^3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full-basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
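The cost-saving idea above, representing the smoother with q << n sampled basis functions, can be sketched with a reduced radial basis and a ridge penalty. This is an illustrative stand-in, not the paper's algorithm: a true smoothing spline penalises integrated curvature, whereas this sketch penalises the coefficient norm; the basis width, centres and data are invented.

```python
import numpy as np

def sampled_basis_smoother(x, y, centers, lam=1e-2, width=0.1):
    """Penalised least squares on a reduced Gaussian radial basis sampled at
    'centers': q basis functions instead of n, so the solve is O(n q^2)
    rather than the O(n^3) of a full-basis smoothing spline."""
    B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
    c = np.linalg.solve(B.T @ B + lam * np.eye(len(centers)), B.T @ y)
    return lambda xs: np.exp(
        -0.5 * ((xs[:, None] - centers[None, :]) / width) ** 2) @ c

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 500))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=500)
centers = np.quantile(x, np.linspace(0, 1, 25))   # 25 sampled centres, not 500
fit = sampled_basis_smoother(x, y, centers)
yhat = fit(x)
```

Placing the centres at quantiles of x is a crude, response-blind sampling scheme; the paper's contribution is precisely to choose the sampled basis adaptively using the response values.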

  13. Adaptive Optical Phase Estimation Using Time-Symmetric Quantum Smoothing

    CERN Document Server

    Wheatley, T A; Yonezawa, H; Nakane, D; Arao, H; Pope, D T; Ralph, T C; Wiseman, H M; Furusawa, A; Huntington, E H

    2009-01-01

Quantum parameter estimation has many applications, from gravitational wave detection to quantum key distribution. We present the first experimental demonstration of the time-symmetric technique of quantum smoothing. We consider both adaptive and non-adaptive quantum smoothing, and show that both are better than their well-known time-asymmetric counterparts (quantum filtering). For the problem of estimating a stochastically varying phase shift on a coherent beam, our theory predicts that adaptive quantum smoothing (the best scheme) gives an estimate with a mean-square error up to $2\sqrt{2}$ times smaller than that from non-adaptive quantum filtering (the standard quantum limit). The experimentally measured improvement is $2.24 \pm 0.14$.

  14. Model regularization for seismic traveltime tomography with an edge-preserving smoothing operator

    Science.gov (United States)

    Zhang, Xiong; Zhang, Jie

    2017-03-01

The solutions of seismic first-arrival traveltime tomography are generally non-unique, and Tikhonov model regularization is commonly used to stabilize the inversion. However, Tikhonov regularization for traveltime tomography often produces a low-resolution velocity model. To sharpen the velocity edges in the traveltime tomographic results while still fitting the data, we apply edge-preserving concepts to regularize the inversion. In this study, we develop a new model regularization method by introducing an edge-preserving smoothing operator to detect the model edges in traveltime tomography. This edge-preserving smoothing operator was previously used in processing seismic images to enhance resolution. We design three synthetic velocity models with sharp interfaces, with or without velocity gradients, to study the performance of the regularization method with the edge-preserving smoothing operator. The new edge-preserving regularization not only sharpens the model edges but also maintains the smoothness of the velocity gradient within each layer.

  15. Improving Adaptive Importance Sampling Simulation of Markovian Queueing Models using Non-parametric Smoothing

    NARCIS (Netherlands)

    Woudt, Edwin; de Boer, Pieter-Tjerk; van Ommeren, Jan C.W.

    2007-01-01

Previous work on state-dependent adaptive importance sampling techniques for the simulation of rare events in Markovian queueing models used either no smoothing or a parametric smoothing technique, which was known to be non-optimal. In this paper, we introduce the use of kernel smoothing in this context.

  16. Length adaptation of smooth muscle contractile filaments in response to sustained activation.

    Science.gov (United States)

    Stålhand, Jonas; Holzapfel, Gerhard A

    2016-05-21

Airway and bladder smooth muscles are known to undergo length adaptation under sustained contraction. This adaptation process entails a remodelling of the intracellular actin and myosin filaments which shifts the peak of the active force-length curve towards the current length. Smooth muscles are therefore able to generate the maximum force over a wide range of lengths. In contrast, length adaptation of vascular smooth muscle has attracted very little attention and only a handful of studies have been reported. Although their results are conflicting on the existence of a length adaptation process in vascular smooth muscle, it seems that, at least, peripheral arteries and arterioles undergo such adaptation. This is of interest since peripheral vessels are responsible for pressure regulation, and a length adaptation will affect the function of the cardiovascular system. It has been suggested, for example, that the inward remodelling of resistance vessels associated with hypertension disorders may be related to smooth muscle adaptation. In this study we develop a continuum mechanical model for vascular smooth muscle length adaptation by assuming that the muscle cells remodel the actomyosin network such that the peak of the active stress-stretch curve is shifted towards the operating point. The model is specialised to hamster cheek pouch arterioles and used to simulate the response to stepwise length changes under contraction. The results show that the model is able to recover the salient features of length adaptation reported in the literature.

  17. THE ADAPTIVE SMOOTHING FILTERS OF SENSOR SIGNALS IN THE MICROAVIONIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. A. Malkin

    2012-01-01

Adaptive filters for smoothing sensor signals, with adaptation to the intensity of the measurement noise, are considered. The adaptation is realized through statistical processing of the filtering errors. An algorithm for calculating the adaptive filter coefficients and modeling results are presented.

  18. Adaptive finite difference for seismic wavefield modelling in acoustic media.

    Science.gov (United States)

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-08-05

Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we present a novel scheme for implementing finite differences based on a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is capable of maximising the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang's optimised finite difference scheme.
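The baseline against which such adaptive coefficients are measured is the standard Taylor-series finite difference stencil, which can be computed by solving a small linear system. This sketch derives the classical central coefficients for the second derivative; it illustrates only the conventional scheme the paper improves upon, not the wavelet-based optimisation.

```python
import numpy as np
from math import factorial

def central_fd_coeffs(half_width):
    """Taylor-series coefficients for a central finite difference
    approximation of the second derivative on a unit-spaced grid:
    enforce sum_j c_j * o_j**k / k! = delta_{k,2} for k = 0..2*half_width."""
    offsets = np.arange(-half_width, half_width + 1)
    m = len(offsets)
    A = np.array([[o**k / factorial(k) for o in offsets] for k in range(m)])
    b = np.zeros(m)
    b[2] = 1.0                      # match the second-derivative Taylor term
    return offsets, np.linalg.solve(A, b)

offsets2, coeffs2 = central_fd_coeffs(2)   # 4th-order stencil
```

The paper instead fits the coefficients to the spatial derivatives of a mapped source wavelet, so they vary with velocity and bandwidth rather than being fixed like these.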

  19. An Adaptable Seismic Data Format for Modern Scientific Workflows

    Science.gov (United States)

    Smith, J. A.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Podhorszki, N.; Tromp, J.

    2013-12-01

    Data storage, exchange, and access play a critical role in modern seismology. Current seismic data formats, such as SEED, SAC, and SEG-Y, were designed with specific applications in mind and are frequently a major bottleneck in implementing efficient workflows. We propose a new modern parallel format that can be adapted for a variety of seismic workflows. The Adaptable Seismic Data Format (ASDF) features high-performance parallel read and write support and the ability to store an arbitrary number of traces of varying sizes. Provenance information is stored inside the file so that users know the origin of the data as well as the precise operations that have been applied to the waveforms. The design of the new format is based on several real-world use cases, including earthquake seismology and seismic interferometry. The metadata is based on the proven XML schemas StationXML and QuakeML. Existing time-series analysis tool-kits are easily interfaced with this new format so that seismologists can use robust, previously developed software packages, such as ObsPy and the SAC library. ADIOS, netCDF4, and HDF5 can be used as the underlying container format. At Princeton University, we have chosen to use ADIOS as the container format because it has shown superior scalability for certain applications, such as dealing with big data on HPC systems. In the context of high-performance computing, we have implemented ASDF into the global adjoint tomography workflow on Oak Ridge National Laboratory's supercomputer Titan.

  20. The effects of smooth pursuit adaptation on the gain of visuomotor transmission in monkeys

    Directory of Open Access Journals (Sweden)

Seiji Ono

    2013-12-01

Smooth pursuit eye movements are supported by visual-motor systems, in which visual motion information is transformed into eye movement commands. Adaptation of the visuomotor systems for smooth pursuit is an important factor in maintaining pursuit accuracy and high-acuity vision. Short-term adaptation of initial pursuit gain can be produced experimentally using repeated trials of step-ramp tracking with two different velocities (double-step paradigm) that step up (10–30°/s) or step down (20–5°/s). It is also known that visuomotor gain during smooth pursuit is regulated by a dynamic gain control mechanism, as shown by the fact that eye velocity evoked by a target perturbation during pursuit increases bidirectionally when ongoing pursuit velocity is higher. However, it remains uncertain how smooth pursuit adaptation alters the gain of visuomotor transmission. Therefore, a single cycle of sinusoidal motion (2.5 Hz, ±10°/s) was introduced during step-ramp tracking pre- and post-adaptation to determine whether smooth pursuit adaptation affects the perturbation response. The results showed that pursuit adaptation had a significant effect on the perturbation response that was specific to the adapted direction. These results indicate that there might be different visuomotor mechanisms for adaptation and dynamic gain control. Furthermore, smooth pursuit adaptation altered not only the gain of the perturbation response, but also the gain slope (regression curve) at different target velocities (5, 10 and 15°/s). Therefore, pursuit adaptation could affect the dynamic regulation of visuomotor gain at different pursuit velocities.

  1. On the adaptive daily forecasting of seismic aftershock hazard

    Science.gov (United States)

    Ebrahimian, Hossein; Jalayer, Fatemeh; Asprone, Domenico; Lombardi, Anna Maria; Marzocchi, Warner; Prota, Andrea; Manfredi, Gaetano

    2013-04-01

Post-earthquake ground motion hazard assessment is a fundamental initial step towards time-dependent seismic risk assessment for buildings in a post-main-shock environment. Therefore, operative forecasting of seismic aftershock hazard forms a viable support basis for decision-making regarding search and rescue, inspection, repair, and re-occupation in a post-main-shock environment. Arguably, an adaptive procedure for integrating the aftershock occurrence rate together with suitable ground motion prediction relations is key to Probabilistic Seismic Aftershock Hazard Assessment (PSAHA). In the short term, the seismic hazard may vary significantly (Jordan et al., 2011), particularly after the occurrence of a high-magnitude earthquake. Hence, PSAHA requires a reliable model that is able to track the time evolution of the earthquake occurrence rates together with suitable ground motion prediction relations. This work focuses on providing adaptive daily forecasts of the mean daily rate of exceeding various spectral acceleration values (the aftershock hazard). Two well-established earthquake occurrence models suitable for daily seismicity forecasts associated with the evolution of an aftershock sequence are adopted, namely, the modified Omori aftershock model and the Epidemic Type Aftershock Sequence (ETAS) model. The parameters of the modified Omori model are updated on a daily basis through Bayesian updating, using the data provided by the ongoing aftershock sequence, following the methodology originally proposed by Jalayer et al. (2011). The Bayesian updating is also used to provide sequence-based parameter estimates for a given ground motion prediction model, i.e. the aftershock events in an ongoing sequence are exploited in order to update in an adaptive manner the parameters of an existing ground motion prediction model. As a numerical example, the mean daily rates of exceeding specific spectral acceleration values are estimated adaptively for the L'Aquila 2009
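The daily Bayesian updating of the modified Omori model can be sketched with a grid posterior over a single parameter. This is a simplified stand-in for the methodology cited above: only the productivity K is updated (c and p are held fixed at invented values), the prior is flat, and the daily counts are hypothetical.

```python
import numpy as np

# Modified Omori rate: lambda(t) = K / (t + c)^p, t in days after the main shock.
def omori_expected(K, c, p, t0, t1):
    """Expected aftershock count in [t0, t1], by integrating the Omori rate."""
    if p == 1.0:
        return K * (np.log(t1 + c) - np.log(t0 + c))
    return K * ((t1 + c) ** (1 - p) - (t0 + c) ** (1 - p)) / (1 - p)

def update_K(daily_counts, c, p, K_grid):
    """Grid-based Bayesian update of K: flat prior, Poisson likelihood
    for each observed day's aftershock count."""
    logpost = np.zeros_like(K_grid)
    for day, n in enumerate(daily_counts):
        mu = omori_expected(K_grid, c, p, day, day + 1)
        logpost += n * np.log(mu) - mu        # Poisson log-likelihood (no n! term)
    post = np.exp(logpost - logpost.max())
    return post / post.sum()

K_grid = np.linspace(5, 200, 400)
counts = [80, 45, 30, 24, 20]                 # hypothetical daily aftershock counts
post = update_K(counts, c=0.5, p=1.1, K_grid=K_grid)
K_map = K_grid[np.argmax(post)]               # daily-updated point estimate
```

In the full procedure the posterior over the occurrence-model parameters is then convolved with a ground motion prediction model to obtain the daily rate of exceeding each spectral acceleration value.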

  2. An adaptive segment method for smoothing lidar signal based on noise estimation

    Science.gov (United States)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as the 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead creates changing end points for different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and then average smoothing is applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing method; two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be produced by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise resulting from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the iteration count to two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the iteration count might need to be re-optimized when the ASSM is applied to a different lidar.
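
    A simplified sketch of the procedure as described; the endpoint threshold, half-segment window, and iterated averaging are taken from the abstract, while the exact end-point bookkeeping of the paper may differ:

```python
import numpy as np

def assm_smooth(signal, sigma, N=3, iterations=2):
    """Sketch of the adaptive segment smoothing method (ASSM).

    Segment end points are placed where the difference between two adjacent
    samples exceeds 3*N*sigma; each segment is then smoothed separately with
    a moving average whose window is half the segment length, repeated
    `iterations` times.
    """
    ends = [0]
    ends += [i + 1 for i in range(len(signal) - 1)
             if abs(signal[i + 1] - signal[i]) > 3 * N * sigma]
    ends.append(len(signal))
    out = np.array(signal, dtype=float)
    for a, b in zip(ends[:-1], ends[1:]):
        win = max(1, (b - a) // 2)
        seg = out[a:b]
        for _ in range(iterations):
            kernel = np.ones(win) / win
            seg = np.convolve(seg, kernel, mode="same")
        out[a:b] = seg
    return out
```

    Because segment boundaries sit exactly at the sharp transitions, the step itself is never averaged across, which is the point of the adaptive segmentation.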

  3. Chi-squared smoothed adaptive particle-filtering based prognosis

    Science.gov (United States)

    Ley, Christopher P.; Orchard, Marcos E.

    2017-01-01

    This paper presents a novel way of selecting the likelihood function of the standard sequential importance sampling/re-sampling particle filter (SIR-PF), using a combination of sliding window smoothing and chi-square statistic weighting, so as to: (a) increase the rate of convergence of a flexible state model with artificial evolution for online parameter learning, and (b) improve the performance of a particle-filter based prognosis algorithm. This is applied and tested with real data from oil total base number (TBN) measurements from three haul trucks. The oil data have high measurement uncertainty and an unknown phenomenological state model. Performance of the proposed algorithm is benchmarked against the standard form of SIR-PF estimation, which utilises the Normal (Gaussian) likelihood function. Both implementations utilise the same particle-filter based prognosis algorithm so as to provide a common comparison. A sensitivity analysis is also performed to further explore the effects of the combination of sliding window smoothing and chi-square statistic weighting on the SIR-PF.
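
    For orientation, a minimal SIR-PF with the standard Gaussian likelihood (the baseline the paper benchmarks against) is sketched below; the proposed method would replace the weighting step with sliding-window-smoothed, chi-square-statistic-based weights. The state model here is a hypothetical 1-D random walk, not the TBN degradation model:

```python
import numpy as np

def sir_pf(observations, n_particles=500, q=0.1, r=0.5, seed=0):
    """Minimal SIR particle filter for a random-walk state observed in
    Gaussian noise. Only the standard Gaussian-likelihood baseline is shown."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # propagate: random-walk state model
        particles = particles + rng.normal(0.0, q, n_particles)
        # weight: Gaussian likelihood of the observation (the step the paper modifies)
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # resample to combat weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```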

  4. Seismic Structure in Southern Peru: Evidence for a Smooth Contortion Between Flat and Normal Subduction of the Nazca Plate

    Science.gov (United States)

    Dougherty, S. L.; Clayton, R. W.

    2014-12-01

    Rapid changes in slab geometry are typically associated with fragmentation of the subducted plate; however, continuous curvature of the slab is also possible. The transition from flat to normal subduction in southern Peru is one such geometrical change, where previous studies have suggested both tearing and continuity of the slab. The morphology of the subducted Nazca plate along this transition is further explored here using intraslab earthquakes recorded by temporary regional seismic arrays. Observations of a gradual increase in slab dip coupled with a lack of any gaps or vertical offsets in the intraslab seismicity suggest a smooth contortion of the slab. Concentrations of focal mechanisms at orientations which are indicative of slab bending are also observed along the change in slab geometry. The presence of a thin ultra-slow velocity layer (USL) atop the horizontal Nazca slab is identified and located. The lateral extent of this USL is coincident with the margin of the projected linear continuation of the subducting Nazca Ridge, implying a causal relationship. Waveform modeling of the 2D structure in southern Peru using a finite-difference algorithm provides constraints on the velocity and geometry of the slab's seismic structure and confirms the absence of any tears in the slab. The seismic and structural evidence suggests smooth contortion of the Nazca plate along the transition from flat to normal subduction. The slab is estimated to have experienced 10% strain in the along-strike direction across this transition, compared to 15% strain across flat-to-normal transitions in central Mexico where the Cocos slab is likely torn.

  5. Introducing ADAPTSMOOTH, a new code for the adaptive smoothing of astronomical images

    CERN Document Server

    Zibetti, Stefano

    2009-01-01

    We introduce and publicly release a new code, ADAPTSMOOTH, which serves to smooth astronomical images in an adaptive fashion, in order to enhance the signal-to-noise ratio (S/N). The adaptive smoothing scheme makes it possible to take full advantage of the spatially resolved photometric information contained in an image, in that at any location the minimal smoothing is applied to reach the requested S/N. Support is given for matching multiple images to the same smoothing length, so that proper estimates of local colours can be made, with a large potential impact on multi-wavelength studies of extended sources (galaxies, nebulae). Different modes to estimate the local S/N are provided. In addition to the classical arithmetic-mean averaging mode, the code can operate in median averaging mode, resulting in a significant enhancement of the final image quality and very accurate flux conservation. To this end, other code options are also implemented and discussed in this paper. Finally, we analyze in great detail the effect of the adaptive sm...
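
    The core idea, smoothing each pixel just enough to reach a requested S/N, can be sketched as follows. This is a simplified arithmetic-mean version with square kernels; the actual ADAPTSMOOTH code uses circular kernels and also offers the median mode discussed above:

```python
import numpy as np

def adaptive_smooth(image, noise_sigma, target_snr=5.0, max_radius=8):
    """Sketch of adaptive smoothing: at each pixel, grow a square top-hat
    kernel until the estimated S/N of the averaged flux reaches the
    requested value, then stop (so bright pixels stay unsmoothed)."""
    out = np.empty_like(image, dtype=float)
    ny, nx = image.shape
    for y in range(ny):
        for x in range(nx):
            for r in range(max_radius + 1):
                patch = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                mean = patch.mean()
                # noise on the mean of n pixels drops as sqrt(n)
                snr = mean / (noise_sigma / np.sqrt(patch.size))
                if snr >= target_snr:
                    break
            out[y, x] = mean
    return out
```

    High-S/N regions are returned untouched, while faint regions are averaged over progressively larger areas, exactly the trade-off the abstract describes.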

  6. Reconstruction for distributed video coding: a Markov random field approach with context-adaptive smoothness prior

    Science.gov (United States)

    Zhang, Yongsheng; Xiong, Hongkai; He, Zhihai; Yu, Songyu

    2010-07-01

    An important issue in Wyner-Ziv video coding is the reconstruction of Wyner-Ziv frames from decoded bit-planes. So far, there have been two major approaches: the maximum a posteriori (MAP) reconstruction and the minimum mean square error (MMSE) reconstruction algorithms. However, these approaches do not exploit the smoothness constraints of natural images. In this paper, we model a Wyner-Ziv frame by a Markov random field (MRF), and produce reconstruction results by finding a MAP estimate under the MRF model. In the MRF model, the energy function consists of two terms: a data term (the MSE distortion metric in this paper) measuring the statistical correlation between the side information and the source, and a smoothness term enforcing spatial coherence. In order to better describe the spatial constraints of images, we propose a context-adaptive smoothness term by analyzing the correspondence between the output of Slepian-Wolf decoding and successive frames available at the decoder. The significance of the smoothness term varies in accordance with the spatial variation within different regions. To some extent, the proposed approach is an extension of the MAP and MMSE approaches that exploits the intrinsic smoothness characteristic of natural images. Experimental results demonstrate a considerable performance gain compared with the MAP and MMSE approaches.

  7. Application of Adaptive Extended Kalman Smoothing on INS/WSN Integration System for Mobile Robot Indoors

    Directory of Open Access Journals (Sweden)

    Xiyuan Chen

    2013-01-01

    Full Text Available An inertial navigation system (INS)/wireless sensor network (WSN) integration system is proposed to provide accurate and continuous indoor navigation information for mobile robots. The Kalman filter (KF) is widely used in real-time applications with the aim of achieving optimal data fusion. In order to improve the accuracy of the navigation information, this work proposes an adaptive extended Kalman smoothing (AEKS) method which utilizes inertial measurement units (IMUs) and an ultrasonic positioning system. In this mode, the adaptive extended Kalman filter (AEKF) is used to improve the accuracy of forward Kalman filtering (FKF) and backward Kalman filtering (BKF), and then the AEKS and the average filter are used between two output timings for online smoothing. Several real indoor tests were performed to assess the performance of the proposed method. The results show that the proposed method can reduce the error compared with the INS-only solution, the least squares (LS) solution, and the AEKF.
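
    The forward-filter/backward-smoother structure underlying the AEKS can be illustrated with a 1-D linear Kalman filter followed by a Rauch-Tung-Striebel (RTS) backward pass. The AEKS additionally adapts the noise covariances online and uses an extended (nonlinear) model; fixed covariances and a random-walk state are assumed here purely for illustration:

```python
import numpy as np

def kf_rts_smooth(z, q=0.01, r=1.0):
    """Forward Kalman filter plus backward RTS smoothing pass for the scalar
    model x_k = x_{k-1} + w (w ~ N(0, q)), z_k = x_k + v (v ~ N(0, r))."""
    n = len(z)
    xf = np.zeros(n); pf = np.zeros(n)   # filtered state and variance
    xp = np.zeros(n); pp = np.zeros(n)   # one-step predictions
    x, p = z[0], 1.0
    for k in range(n):
        # predict
        xp[k], pp[k] = x, p + q
        # update with measurement z[k]
        gain = pp[k] / (pp[k] + r)
        x = xp[k] + gain * (z[k] - xp[k])
        p = (1 - gain) * pp[k]
        xf[k], pf[k] = x, p
    # backward RTS pass refines each estimate with future data
    xs = xf.copy()
    for k in range(n - 2, -1, -1):
        c = pf[k] / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs
```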

  8. Stability of bumps in piecewise smooth neural fields with nonlinear adaptation

    KAUST Repository

    Kilpatrick, Zachary P.

    2010-06-01

    We study the linear stability of stationary bumps in piecewise smooth neural fields with local negative feedback in the form of synaptic depression or spike frequency adaptation. The continuum dynamics is described in terms of a nonlocal integrodifferential equation, in which the integral kernel represents the spatial distribution of synaptic weights between populations of neurons whose mean firing rate is taken to be a Heaviside function of local activity. Discontinuities in the adaptation variable associated with a bump solution means that bump stability cannot be analyzed by constructing the Evans function for a network with a sigmoidal gain function and then taking the high-gain limit. In the case of synaptic depression, we show that linear stability can be formulated in terms of solutions to a system of pseudo-linear equations. We thus establish that sufficiently strong synaptic depression can destabilize a bump that is stable in the absence of depression. These instabilities are dominated by shift perturbations that evolve into traveling pulses. In the case of spike frequency adaptation, we show that for a wide class of perturbations the activity and adaptation variables decouple in the linear regime, thus allowing us to explicitly determine stability in terms of the spectrum of a smooth linear operator. We find that bumps are always unstable with respect to this class of perturbations, and destabilization of a bump can result in either a traveling pulse or a spatially localized breather. © 2010 Elsevier B.V. All rights reserved.
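
    For reference, a standard form of such a nonlocal integrodifferential model with synaptic depression (one common parameterization, with Heaviside firing-rate function H, threshold κ, weight kernel w, and depression variable q; the paper's exact notation may differ) is:

```latex
% u: synaptic drive; q: depression variable; kappa: firing threshold
\begin{aligned}
\tau \frac{\partial u}{\partial t} &= -u(x,t)
  + \int_{-\infty}^{\infty} w(x-x')\, q(x',t)\, H\bigl(u(x',t)-\kappa\bigr)\, dx',\\[4pt]
\frac{\partial q}{\partial t} &= \frac{1-q(x,t)}{\alpha}
  - \beta\, q(x,t)\, H\bigl(u(x,t)-\kappa\bigr).
\end{aligned}
```

    The discontinuity of H is what makes the field piecewise smooth and rules out the usual sigmoidal Evans-function construction mentioned above.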

  9. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    Science.gov (United States)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied both to past earthquake locations and to fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity, based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find a statistically significant better performance for

  10. A Fast Variational Method for the Construction of Resolution Adaptive C²-Smooth Molecular Surfaces.

    Science.gov (United States)

    Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin

    2009-05-01

    We present a variational approach to smooth molecular (protein, nucleic acid) surface constructions, starting from atomic coordinates, as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations, traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser-resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general-form second-order geometric partial differential equation in the level-set formulation, by minimizing a first-order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C² interpolation algorithm is also utilized. A narrow-band, tri-cubic B-spline level-set method is then used to provide C²-smooth and resolution-adaptive molecular surfaces.

  11. HAZGRIDX: earthquake forecasting model for ML≥ 5.0 earthquakes in Italy based on spatially smoothed seismicity

    Directory of Open Access Journals (Sweden)

    Aybige Akinci

    2010-11-01

    Full Text Available We present a five-year, time-independent, earthquake-forecast model for earthquake magnitudes of 5.0 and greater in Italy using spatially smoothed seismicity data. The model is called HAZGRIDX, and it was developed based on the assumption that future earthquakes will occur near locations of historical earthquakes; it does not take into account any information from tectonic, geological, or geodetic data. Thus HAZGRIDX is based on observed earthquake occurrence from seismicity data, without considering any physical model. In the present study, we calculate earthquake rates on a spatial grid platform using two declustered catalogs: (1) the parametric catalog of Italian earthquakes (Catalogo Parametrico dei Terremoti Italiani, CPTI04), which contains the larger earthquakes from MW 7.0 since 1100; and (2) the Italian seismicity catalogue (Catalogo della Sismicità Italiana, CSI 1.1), which contains small earthquakes down to ML 1.0, with a maximum of ML 5.9, over the past 22 years (1981-2003). The model assumes that earthquake magnitudes follow the Gutenberg-Richter law, with a uniform b-value. The forecast rates are presented in terms of the expected numbers of ML ≥ 5.0 events per year for each grid cell of about 10 km × 10 km. The final map is derived by averaging the earthquake potentials that come from these two different catalogs: CPTI04 and CSI 1.1. We also describe the earthquake occurrences in terms of probabilities of occurrence of one event within a specified magnitude bin (ΔM = 0.1) in a five-year time period. HAZGRIDX is one of several forecasting models, scaled to five and ten years, that have been submitted to the Collaboratory for the Study of Earthquake Probability (CSEP) forecasting center at ETH Zurich, to be tested for Italy.
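
    The two reporting conventions mentioned, annual rates per cell and five-year occurrence probabilities, are linked by the Gutenberg-Richter law and the Poisson assumption. A sketch with hypothetical a- and b-values for a single grid cell (not values from HAZGRIDX):

```python
import math

def gr_annual_rate(a, b, m):
    """Annual rate of events with magnitude >= m from the Gutenberg-Richter
    law log10 N(m) = a - b*m, with a expressed as an annual rate."""
    return 10.0 ** (a - b * m)

def poisson_prob(rate, years):
    """Probability of at least one event in `years`, assuming a Poisson process."""
    return 1.0 - math.exp(-rate * years)

# Hypothetical a- and b-values for one grid cell (illustration only).
rate = gr_annual_rate(a=3.0, b=1.0, m=5.0)  # 0.01 events/yr with M >= 5
prob5 = poisson_prob(rate, 5.0)             # probability of one such event in 5 years
```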

  12. Propagating adaptive-weighted vector median filter for motion-field smoothing

    Institute of Scientific and Technical Information of China (English)

    林梦冬; 余松煜

    2004-01-01

    In the fields of predictive video coding and format conversion, increasing attention is being paid to estimation of the true inter-frame motion. This paper addresses the restoration of the motion vector field computed by 3-D recursive search (3-D RS), and proposes a propagating adaptive-weighted vector median (PAWVM) post-filter. The approach decomposes blocks to obtain a better estimation on object borders and propagates good vectors in the scanning direction. Furthermore, a hard-thresholding method is introduced into the calculation of vector weights to improve the propagation. By exploiting both the spatial correlation of the vector field and the matching error of candidate vectors, PAWVM strikes a good balance between the smoothness of the vector field and the prediction error, and the output vector field reflects the true motion more faithfully.
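
    The underlying operation is the weighted vector median: the candidate motion vector minimizing the weighted sum of distances to all vectors in the filter window. A minimal sketch (in PAWVM the weights are derived from matching errors and a hard threshold; here they are simply supplied by the caller):

```python
import numpy as np

def weighted_vector_median(vectors, weights):
    """Weighted vector median: return the candidate vector minimizing the
    weighted sum of Euclidean distances to every vector in the window."""
    vectors = np.asarray(vectors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    costs = [np.sum(weights * np.linalg.norm(vectors - v, axis=1)) for v in vectors]
    return vectors[int(np.argmin(costs))]
```

    With uniform weights the filter rejects outlier vectors; raising the weight of a trusted neighbour biases the output toward it, which is how adaptive weighting steers the smoothing.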

  13. Seismic Hazard Analysis Using the Adaptive Kernel Density Estimation Technique for Chennai City

    Science.gov (United States)

    Ramanna, C. K.; Dodagoudar, G. R.

    2012-01-01

    The conventional method of probabilistic seismic hazard analysis (PSHA) using the Cornell-McGuire approach requires identification of homogeneous source zones as the first step. This criterion brings along many issues and, hence, several alternative methods of hazard estimation have come up in the last few years, such as zoneless or zone-free methods and modelling of the Earth's crust using numerical methods such as finite element analysis. Delineating a homogeneous source zone in regions of distributed and/or diffused seismicity is a rather difficult task. In this study, the zone-free method using the adaptive kernel technique for hazard estimation is explored for regions having distributed and diffused seismicity. Chennai city lies in such a region, with low to moderate seismicity, so it has been used as a case study. The adaptive kernel technique is statistically superior to the fixed kernel technique primarily because the bandwidth of the kernel is varied spatially depending on the clustering or sparseness of the epicentres. Although the fixed kernel technique has proven to work well in general density estimation cases, it fails to perform in the case of multimodal and long-tailed distributions. In such situations, the adaptive kernel technique serves the purpose and is more relevant in earthquake engineering, as the activity rate probability density surface is multimodal in nature. The peak ground acceleration (PGA) obtained from all three approaches (the Cornell-McGuire approach, and the fixed and adaptive kernel techniques) for 10% probability of exceedance in 50 years is around 0.087 g. Uniform hazard spectra (UHS) are also provided for different structural periods.
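
    The adaptive kernel idea, widening the kernel where epicentres are sparse and narrowing it in clusters, is commonly implemented with Abramson-style local bandwidths. A 1-D sketch (the study works with epicentre coordinates in 2-D, and its exact bandwidth rule may differ):

```python
import numpy as np

def adaptive_kde(points, grid, h=1.0):
    """Abramson-style adaptive kernel density estimate in 1-D: a fixed-bandwidth
    pilot estimate sets local bandwidths h_i = h * (f_pilot(x_i)/g)^(-1/2),
    so kernels widen in sparse regions and narrow inside clusters."""
    points = np.asarray(points, dtype=float)

    def gauss(x, mu, s):
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    # pilot density at the data points (fixed bandwidth h)
    pilot = np.array([np.mean(gauss(p, points, h)) for p in points])
    g = np.exp(np.mean(np.log(pilot)))       # geometric mean of pilot values
    local_h = h * (pilot / g) ** -0.5        # per-point bandwidths
    # final estimate on the evaluation grid
    return np.array([np.mean(gauss(x, points, local_h)) for x in grid])
```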

  14. Adaptive fuzzy control with smooth inverse for nonlinear systems preceded by non-symmetric dead-zone

    Science.gov (United States)

    Wang, Xingjian; Wang, Shaoping

    2016-07-01

    In this study, the adaptive output feedback control problem for a class of nonlinear systems preceded by a non-symmetric dead-zone is considered. To cope with the control signal chattering that can be caused by a non-smooth dead-zone inverse, a new smooth inverse is proposed for non-symmetric dead-zone compensation. For a systematic design procedure of the adaptive fuzzy control algorithm, we combine the backstepping technique and the small-gain approach. Takagi-Sugeno fuzzy logic systems are used to approximate unknown system nonlinearities. Closed-loop stability is studied using the small-gain theorem, and the closed-loop system is proved to be semi-globally uniformly ultimately bounded. Simulation results indicate that, compared to the algorithm with the non-smooth inverse, the proposed control strategy achieves better tracking performance and effectively avoids the chattering phenomenon.
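
    One common way to build a smooth dead-zone inverse, not necessarily the construction used in the paper, is to blend the two exact inverse branches with a sigmoid so the discontinuity at zero control input (the source of chattering) disappears:

```python
import math

def deadzone(v, m=1.0, br=0.5, bl=-0.5):
    """Dead-zone nonlinearity: output is zero inside [bl, br], linear with
    slope m outside (symmetric here only because br = -bl by default)."""
    if v >= br:
        return m * (v - br)
    if v <= bl:
        return m * (v - bl)
    return 0.0

def smooth_inverse(u, m=1.0, br=0.5, bl=-0.5, eps=0.05):
    """Smooth dead-zone inverse: blend the exact branches v = u/m + br
    (u > 0) and v = u/m + bl (u < 0) with a sigmoid of width eps. This is
    a common construction, shown for illustration only."""
    phi = 1.0 / (1.0 + math.exp(-u / eps))
    return phi * (u / m + br) + (1.0 - phi) * (u / m + bl)
```

    Away from u = 0 the composition deadzone(smooth_inverse(u)) recovers u almost exactly, while near zero the output varies smoothly instead of jumping between branches.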

  15. A Multimode Adaptive Pushover Procedure for Seismic Assessment of Integral Bridges

    Directory of Open Access Journals (Sweden)

    Ehsan Mohtashami

    2013-01-01

    Full Text Available This paper presents a new adaptive pushover procedure that accounts for the effect of higher modes in order to accurately estimate the seismic response of bridges. The effect of higher modes is considered by introducing a minimum value for the total effective modal mass. The proposed method employs a sufficient number of modes to ensure that the defined total effective modal mass participates in all increments of the pushover loading. An adaptive demand curve is also developed for assessment of the seismic demand. The efficiency and robustness of the proposed method are demonstrated through a parametric study. The analysis includes 18 four-span integral bridges with various pier heights. Inelastic response history analysis is employed as the reference solution in this study. Numerical results indicate excellent accuracy of the proposed method in assessment of the seismic response. For most bridges investigated in this study, the difference between the estimated response of the proposed method and the inelastic response history analysis is less than 25% for displacements and 10% for internal forces. This indicates very good accuracy compared to the pushover procedures available in the literature. The proposed method is therefore recommended for the seismic performance evaluation of integral bridges in engineering applications.

  16. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    Energy Technology Data Exchange (ETDEWEB)

    D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen

    2009-07-01

    With spiraling energy demand and a flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction period, but are outside the standard parameters of present-day piping codes. Several approaches are available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components, with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach, available in Section III of the ASME Boiler and Pressure Vessel Code and the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, and those load magnitudes need to be consistent with the loads produced by the linear system analyses in which the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to the loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under those loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite

  17. A very low-cost and adaptable DIY seismic station

    Science.gov (United States)

    Mendez Chazara, Nahum; Castiñeiras, Pedro

    2016-04-01

    different configurations to fit different needs: from horizontal geophones, to the use of accelerometers that substitute for the geophone and further reduce the size of the seismic station. The data can also be gathered by an Arduino board alone, but it then needs a card reader/writer and a real-time clock (RTC) circuit in order to correctly timestamp the data. In the first semester of 2016, we plan to build several units, deploy them in the field over the Bajo Segura Fault (Spain), and test them under different conditions to better assess the quality of the data.

  18. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2014-01-06

    A new hybrid adaptive Monte Carlo (MC) forward Euler algorithm for SDEs with singular coefficients and non-smooth observables is developed. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. When a non-smooth binary payoff is considered, the new adaptive method achieves the same complexity as the uniform discretization does for smooth problems. Moreover, the newly developed algorithm is extended to the multilevel Monte Carlo (MLMC) forward Euler setting, which reduces the complexity from O(TOL⁻³) to O(TOL⁻²(log TOL)²). For the binary option case, it recovers the standard multilevel computational cost O(TOL⁻²(log TOL)²). When considering a higher-order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs; see [2]. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL⁻²(log TOL)²) is reached for the multidimensional case and verified numerically.

  19. Differential Mitochondrial Adaptation in Primary Vascular Smooth Muscle Cells from a Diabetic Rat Model.

    Science.gov (United States)

    Keller, Amy C; Knaub, Leslie A; McClatchey, P Mason; Connon, Chelsea A; Bouchard, Ron; Miller, Matthew W; Geary, Kate E; Walker, Lori A; Klemm, Dwight J; Reusch, Jane E B

    2016-01-01

    Diabetes affects more than 330 million people worldwide and causes elevated cardiovascular disease risk. Mitochondria are critical for vascular function, generate cellular reactive oxygen species (ROS), and are perturbed by diabetes, representing a novel target for therapeutics. We hypothesized that adaptive mitochondrial plasticity in response to nutrient stress would be impaired in diabetic cellular physiology via a nitric oxide synthase (NOS)-mediated decrease in mitochondrial function. Primary smooth muscle cells (SMCs) from the aorta of the nonobese, insulin-resistant rat diabetes model Goto-Kakizaki (GK) and the Wistar control rat were exposed to high glucose (25 mM). At baseline, significantly greater nitric oxide evolution, ROS production, and respiratory control ratio (RCR) were observed in GK SMCs. Upon exposure to high glucose, expression of phosphorylated eNOS, uncoupled respiration, and expression of mitochondrial complexes I, II, III, and V were significantly decreased in GK SMCs (p < 0.05). Mitochondrial superoxide increased with high glucose in Wistar SMCs (p < 0.05), with no change in the GK cells beyond elevated baseline concentrations. Baseline comparisons show persistent metabolic perturbations in the diabetes phenotype. Overall, nutrient stress in GK SMCs caused a persistent decline in eNOS and mitochondrial function and disrupted mitochondrial plasticity, illustrating eNOS and mitochondria as potential therapeutic targets.

  20. Forecasting foreign exchange rates with an improved back-propagation learning algorithm with adaptive smoothing momentum terms

    Institute of Scientific and Technical Information of China (English)

    Lean YU; Shouyang WANG; Kin Keung LAI

    2009-01-01

    The slow convergence of the back-propagation neural network (BPNN) has become a challenge in data-mining and knowledge discovery applications due to the drawbacks of the gradient descent (GD) optimization method, which is widely adopted in BPNN learning. To solve this problem, standard optimization techniques such as the conjugate-gradient and Newton methods have been proposed to improve the convergence rate of the BP learning algorithm. This paper presents a heuristic method that adds an adaptive smoothing momentum term to the original BP learning algorithm to speed up the convergence. In this improved BP learning algorithm, an adaptive smoothing technique is used to adjust the momentums of the weight updating formula automatically in terms of "3σ limits theory." Using the adaptive smoothing momentum terms, the improved BP learning algorithm can make network training and convergence faster, and the network's generalization performance stronger, than the standard BP learning algorithm can. In order to verify the effectiveness of the proposed BP learning algorithm, three typical foreign exchange rates, the British pound (GBP), euro (EUR), and Japanese yen (JPY), are chosen as the forecasting targets for illustration purposes. Experimental results from homogeneous algorithm comparisons reveal that the proposed BP learning algorithm outperforms the other comparable BP algorithms in performance and convergence rate. Furthermore, empirical results from heterogeneous model comparisons also show the effectiveness of the proposed BP learning algorithm.
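
    For orientation, the classical momentum term that the paper's method adapts can be seen in a plain gradient-descent update. A fixed momentum coefficient is shown on a simple least-squares problem; the proposed algorithm instead adjusts this coefficient automatically per step using the "3σ limits" rule:

```python
import numpy as np

def train_momentum(X, y, lr=0.1, mom=0.9, epochs=200):
    """Gradient descent with a classical momentum term on a linear
    least-squares problem. The momentum coefficient `mom` is fixed here;
    the paper's adaptive smoothing would adjust it during training."""
    w = np.zeros(X.shape[1])
    velocity = np.zeros_like(w)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        velocity = mom * velocity - lr * grad   # momentum-smoothed update
        w = w + velocity
    return w
```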

  1. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    Science.gov (United States)

    Brown, Charles G., Jr.; Adcock, Aaron B.; Azevedo, Stephen G.; Liebman, Judith A.; Bond, Essex J.

    2011-03-01

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
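
    The GCV-based smoothing-level selection can be illustrated with a discrete analogue of the cubic smoothing spline (a Whittaker-style second-difference penalty). This is a sketch of the GCV idea only, not the Hutchinson-deHoog algorithm, which handles multiple channels and non-uniform samples efficiently:

```python
import numpy as np

def smooth_gcv(y, lams=np.logspace(-2, 6, 40)):
    """Discrete smoothing spline: minimize ||y - z||^2 + lam*||D2 z||^2
    over z, choosing lam by Generalized Cross Validation,
    GCV(lam) = n*||y - z||^2 / (n - tr(H))^2, where H is the hat matrix."""
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)   # second-difference operator
    best = None
    for lam in lams:
        H = np.linalg.inv(np.eye(n) + lam * D2.T @ D2)
        z = H @ y
        gcv = n * np.sum((y - z) ** 2) / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, z)
    return best[1]
```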

  2. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C; Adcock, A; Azevedo, S; Liebman, J; Bond, E

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.

  3. A multilevel adaptive sparse grid stochastic collocation approach to the non-smooth forward propagation of uncertainty in discretized problems

    CERN Document Server

    Gates, Robert L

    2015-01-01

    This work proposes a scheme for significantly reducing the computational complexity of discretized problems involving the non-smooth forward propagation of uncertainty by combining the adaptive hierarchical sparse grid stochastic collocation method (ALSGC) with a hierarchy of successively finer spatial discretizations (e.g. finite elements) of the underlying deterministic problem. To achieve this, we build strongly upon ideas from the Multilevel Monte Carlo method (MLMC), which represents a well-established technique for the reduction of computational complexity in problems affected by both deterministic and stochastic error contributions. The resulting approach is termed the Multilevel Adaptive Sparse Grid Collocation (MLASGC) method. Preliminary results for a low-dimensional, non-smooth parametric ODE problem are promising: the proposed MLASGC method exhibits an error/cost relation of $\varepsilon \sim t^{-0.95}$ and therefore significantly outperforms the single-level ALSGC ($\varepsilon \sim t^{-0.65}$) a...

  4. Smooth pursuit adaptation (SPA) exhibits features useful to compensate changes in the properties of the smooth pursuit eye movement system due to usage.

    Science.gov (United States)

    Dash, Suryadeep; Thier, Peter

    2013-01-01

    Smooth-pursuit adaptation (SPA) refers to the fact that pursuit gain in the early, still open-loop response phase of the pursuit eye movement can be adjusted based on experience. For instance, if the target moves initially at a constant velocity for ~100-200 ms and then steps to a higher velocity, subjects learn to up-regulate the pursuit gain associated with the initial target velocity (gain-increase SPA) in order to reduce the retinal error resulting from the velocity step. Correspondingly, a step to a lower target velocity leads to a decrease in gain (gain-decrease SPA). In this study we demonstrate that the increase in peak eye velocity during gain-increase SPA is a consequence of expanding the duration of the eye acceleration profile while the decrease in peak velocity during gain-decrease SPA results from reduced peak eye acceleration but unaltered duration. Furthermore, we show that carrying out stereotypical smooth pursuit eye movements elicited by constant velocity target ramps for several hundred trials (=test of pursuit resilience) leads to a clear drop in initial peak acceleration, a reflection of oculomotor and/or cognitive fatigue. However, this drop in acceleration gets compensated by an increase in the duration of the acceleration profile, thereby keeping initial pursuit gain constant. The compensatory expansion of the acceleration profile in the pursuit resilience experiment is reminiscent of the one leading to gain-increase SPA, suggesting that both processes tap one and the same neuronal mechanism warranting a precise acceleration-duration trade-off. Finally, we show that the ability to adjust acceleration duration during pursuit resilience depends on the integrity of the oculomotor vermis (OMV) as indicated by the complete loss of the duration adjustment following a surgical lesion of the OMV in one rhesus monkey we could study.

  5. Smooth pursuit adaptation (SPA) exhibits features useful to compensate changes in the properties of the smooth pursuit eye movement system due to usage.

    Directory of Open Access Journals (Sweden)

    Suryadeep eDash

    2013-10-01

    Smooth-pursuit adaptation (SPA) refers to the fact that pursuit gain in the early, still open-loop response phase of the pursuit eye movement can be adjusted based on experience. For instance, if the target moves initially at a constant velocity for approximately 100-200 ms and then steps to a higher velocity, subjects learn to up-regulate the pursuit gain associated with the initial target velocity (gain-increase SPA) in order to reduce the retinal error resulting from the velocity step. Correspondingly, a step to a lower target velocity leads to a decrease in gain (gain-decrease SPA). In this study we demonstrate that the increase in peak eye velocity during gain-increase SPA is a consequence of expanding the duration of the eye acceleration profile, while the decrease in peak velocity during gain-decrease SPA results from reduced peak eye acceleration but unaltered duration. Furthermore, we show that carrying out stereotypical smooth pursuit eye movements elicited by constant velocity target ramps for several hundred trials (= test of pursuit resilience) leads to a clear drop in initial peak acceleration, a reflection of oculomotor and/or cognitive fatigue. However, this drop in acceleration gets compensated by an increase in the duration of the acceleration profile, thereby keeping initial pursuit gain constant. The compensatory expansion of the acceleration profile in the pursuit resilience experiment is reminiscent of the one leading to gain-increase SPA, suggesting that both processes tap one and the same neuronal mechanism warranting a precise acceleration/duration trade-off. Finally, we show that the ability to adjust acceleration duration during pursuit resilience depends on the integrity of the oculomotor vermis (OMV), as indicated by the complete loss of the duration adjustment following a surgical lesion of the OMV in one rhesus monkey we could study.

  6. Hybrid Adaptive Multilevel Monte Carlo Algorithm for Non-Smooth Observables of Itô Stochastic Differential Equations

    KAUST Repository

    Rached, Nadhir B.

    2013-12-01

    The Monte Carlo forward Euler method with uniform time stepping is the standard technique to compute an approximation of the expected payoff of a solution of an Itô SDE. For a given accuracy requirement TOL, the complexity of this technique for well-behaved problems, that is, the amount of computational work to solve the problem, is O(TOL^-3). A new hybrid adaptive Monte Carlo forward Euler algorithm for SDEs with non-smooth coefficients and observables of low regularity is developed in this thesis. This adaptive method is based on the derivation of a new error expansion with computable leading-order terms. The basic idea of the new expansion is the use of a mixture of prior information to determine the weight functions and posterior information to compute the local error. In a number of numerical examples the superior efficiency of the hybrid adaptive algorithm over the standard uniform time stepping technique is verified. When a non-smooth binary payoff with either GBM or drift-singularity type SDEs is considered, the new adaptive method achieves the same complexity as the uniform discretization does with smooth problems. Moreover, the newly developed algorithm is extended to the MLMC forward Euler setting, which reduces the complexity from O(TOL^-3) to O(TOL^-2(log(TOL))^2). For the binary option case with the same type of Itô SDEs, the hybrid adaptive MLMC forward Euler recovers the standard multilevel computational cost O(TOL^-2(log(TOL))^2). When considering a higher order Milstein scheme, a similar complexity result was obtained by Giles using uniform time stepping for one-dimensional SDEs. The difficulty of extending Giles' Milstein MLMC method to the multidimensional case is an argument for the flexibility of our newly constructed adaptive MLMC forward Euler method, which can be easily adapted to this setting. Similarly, the expected complexity O(TOL^-2(log(TOL))^2) is reached for the multidimensional case and verified numerically.
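
    The multilevel idea, coupling coarse and fine forward Euler paths that share the same Brownian increments so the level corrections have small variance, can be sketched for a geometric Brownian motion with a call-type payoff. This is plain uniform-step MLMC, not the thesis's hybrid adaptive algorithm, and every parameter value below is illustrative.

```python
import numpy as np

def mlmc_level(l, n, T=1.0, s0=1.0, mu=0.05, sigma=0.2, m0=4, seed=0):
    """Monte Carlo estimate of E[P_l - P_{l-1}] for a GBM payoff, where
    level l uses m0 * 2**l forward Euler steps; the coarse path reuses
    the fine path's Brownian increments in pairs."""
    rng = np.random.default_rng(seed + l)
    nf = m0 * 2**l
    hf = T / nf
    dw = rng.normal(0.0, np.sqrt(hf), (n, nf))  # fine Brownian increments
    sf = np.full(n, s0)
    for k in range(nf):                         # fine forward Euler path
        sf = sf + mu * sf * hf + sigma * sf * dw[:, k]
    pf = np.maximum(sf - s0, 0.0)               # payoff: call struck at s0
    if l == 0:
        return pf.mean()
    sc = np.full(n, s0)
    hc = 2.0 * hf
    for k in range(nf // 2):                    # coupled coarse path
        sc = sc + mu * sc * hc + sigma * sc * (dw[:, 2 * k] + dw[:, 2 * k + 1])
    pc = np.maximum(sc - s0, 0.0)
    return (pf - pc).mean()

# The MLMC estimator is the telescoping sum of the level corrections
estimate = sum(mlmc_level(l, n=20000) for l in range(5))
```

    For the smooth payoff used here the true value is near 0.11; the non-smooth binary payoffs treated in the thesis are exactly the case where this plain scheme degrades and the hybrid adaptive version is needed.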

  7. Adaptive Beam Smoothing with Plasma-Pinholes for Laser-Entrance-Hole Transmission Studies

    Science.gov (United States)

    Geissel, Matthias; Ruggles, Lawrence E.; Smith, Ian C.; Shores, Jonathn E.; Speas, C. Shane; Porter, John L.

    2014-10-01

    The concept of Magnetized Liner Inertial Fusion (MagLIF) requires the deposition of laser energy into a fuel-filled cylinder that is exposed to a magnetic field. To improve process, it is essential to optimize transmission through the foil covered laser entrance hole (LEH), which involves minimizing laser-plasma-instabilities (LPI). Laser beam smoothing is the most common approach to minimize LPI. It typically involves a Random-Phase-Plate (RPP) and smoothing by spectral dispersion (SSD). This approach can still cause LPI issues due to intensity ``hot-spots'' on a ps-time scale, and it inconveniently fixes the usable spot size. Changing laser spot sizes requires multiple dedicated RPPs. To study ideal spot sizes on a MagLIF LEH, the RPP/SSD approach gets cost prohibitive. As alternative, we use sacrificial thin foils (500 nm or less) at the laser focus, which instantly turn into a plasma-pinhole, acting as spatial filter. The smoothed laser spot size grows linearly with distance from best focus. We present experimental data for smoothing performance and resulting LEH transmission. Sandia is a multiprogram laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. DOE's Nat'l Nucl. Sec. Admin. under Contract DE-AC04-94AL85000.

  8. Flexible Joints Robotic Manipulator Control By Adaptive Gain Smooth Sliding Observer-Controller

    Directory of Open Access Journals (Sweden)

    A. FILIPESCU

    2003-12-01

    An adaptive gain sliding observer for nonlinear systems with uncertain parameters, together with an adaptive gain sliding controller, is proposed in this paper. Nonlinear SISO affine systems with uncertainties in the steady-state functions and parameters are considered. A further parameter term, adaptively updated, has been introduced into the state-space model of the controlled system in order to obtain information useful for fault detection and isolation. By using the sliding observer with adaptive gain, the robustness to uncertainties is increased, and the adaptively updated parameters can provide useful information for fault detection. Moreover, the state estimation error is bounded according to the bound limits of the uncertainties. Both the sliding adaptive observer and the sliding controller are designed to fulfill the attractiveness condition of their corresponding switching surfaces. An application to a single-arm robot with a flexible joint is presented. In order to alleviate chattering, a parameterized hyperbolic tangent has been used as the switching function of the observer and the controller, instead of a pure relay function. Additionally, the gains of the switching functions of the sliding observer and sliding controller are adaptively updated depending on the estimation error and tracking error, respectively. By using adaptive gains, the transient and tracking response can be improved.
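
    The chattering-mitigation idea, replacing the relay term sign(s) with a parameterized hyperbolic tangent whose gain is adapted from the sliding error, can be sketched on a scalar plant. The plant, disturbance, and adaptation law below are illustrative assumptions, not the paper's flexible-joint robot model.

```python
import numpy as np

def simulate(eps=0.05, gamma=5.0, dt=1e-3, T=5.0):
    """First-order plant dx/dt = a*x + d(t) + u with unknown parameter a
    and bounded disturbance d. Sliding surface s = x; the control uses a
    tanh(s/eps) switching function (instead of a relay) with switching
    gain k adapted as dk/dt = gamma * |s|."""
    a, a_nom = 0.8, 0.0            # true vs nominal plant parameter
    x, k = 1.0, 0.0                # initial state and switching gain
    for i in range(int(T / dt)):
        t = i * dt
        d = 0.3 * np.sin(2.0 * t)  # bounded disturbance
        s = x                      # sliding surface: drive x -> 0
        u = -a_nom * x - k * np.tanh(s / eps)
        k += gamma * abs(s) * dt   # adaptive switching gain
        x += (a * x + d + u) * dt  # forward Euler integration
    return x, k

x_final, k_final = simulate()
```

    The boundary-layer width `eps` trades chattering against residual error: inside |s| < eps the tanh acts as a high-gain linear feedback, so the state settles into a small neighborhood of the surface instead of switching at every step.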

  9. Experiments on Adaptive Self-Tuning of Seismic Signal Detector Parameters

    Science.gov (United States)

    Knox, H. A.; Draelos, T.; Young, C. J.; Chael, E. P.; Peterson, M. G.; Lawry, B.; Phillips-Alonge, K. E.; Balch, R. S.; Ziegler, A.

    2016-12-01

    Scientific applications, including underground nuclear test monitoring and microseismic monitoring, can benefit enormously from data-driven dynamic algorithms for tuning seismic and infrasound signal detection parameters, since continuous streams are producing waveform archives on the order of 1 TB per month. Tuning is a challenge because there are a large number of data processing parameters that interact in complex ways, and because the underlying population of true signal detections is generally unknown. The largely manual process of identifying effective parameters, often performed only over a subset of stations over a short time period, is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. We present improvements to an Adaptive Self-Tuning algorithm for continuously adjusting detection parameters based on consistency with neighboring sensors. Results are shown for 1) data from a very dense network (120 stations, 10 km radius) deployed during 2008 on Erebus Volcano, Antarctica, and 2) data from a continuous downhole seismic array in the Farnsworth Field, an oil field in northern Texas that hosts an ongoing carbon capture, utilization, and storage project. Performance is assessed in terms of missed detections and false detections relative to human analyst detections, simulated waveforms where ground-truth detections exist, and visual inspection.
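
    The detection parameters tuned in such systems are typically those of an STA/LTA-style trigger: the short and long window lengths and the trigger threshold. A minimal detector sketch on synthetic data follows; the window lengths and threshold are the kind of knobs a self-tuning algorithm would adjust, and all values here are illustrative (the abstract's algorithm itself is not reproduced).

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Classic STA/LTA ratio on squared amplitudes (the characteristic
    function); nsta and nlta are short/long window lengths in samples."""
    cf = x.astype(float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(cf)])
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # long-term average
    m = min(sta.size, lta.size)
    # align both window sets at the end of the series
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 2000)                   # background noise
signal[1200:1300] += 6.0 * np.sin(np.linspace(0.0, 20.0 * np.pi, 100))

ratio = sta_lta(signal, nsta=20, nlta=400)
detected = np.max(ratio) > 4.0    # trigger threshold: a tunable parameter
```

    A neighbor-consistency tuner would, roughly, raise or lower the threshold and window lengths per station until detections agree with nearby sensors.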

  10. Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation

    KAUST Repository

    Tan, Xiaosi

    2014-08-05

    Formulating an inverse problem in a Bayesian framework has several major advantages (Sen and Stoffa, 1996). It allows finding multiple solutions subject to flexible a priori information and performing uncertainty quantification in the inverse problem. In this paper, we consider Bayesian inversion for the parameter estimation in seismic wave propagation. Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution, where the latter represents our prior knowledge about physical properties. One of the popular algorithms for sampling this posterior distribution is Markov chain Monte Carlo (MCMC), which involves making proposals and calculating their acceptance probabilities. However, for large-scale problems, MCMC is prohibitively expensive as it requires many forward runs. In this paper, we propose a multilevel MCMC algorithm that employs multilevel forward simulations. Multilevel forward simulations are derived using the Generalized Multiscale Finite Element Methods that we have proposed earlier (Efendiev et al., 2013a; Chung et al., 2013). Our overall Bayesian inversion approach provides a substantial speed-up both in the sampling process, via preconditioning using approximate posteriors, and in the computation of the forward problems for different proposals, by using the adaptive nature of multiscale methods. These aspects of the method are discussed in the paper. This paper is motivated by earlier work of M. Sen and his collaborators (Hong and Sen, 2007; Hong, 2008), who proposed the development of efficient MCMC techniques for seismic applications. In the paper, we present some preliminary numerical results.
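
    The proposal/acceptance mechanics of MCMC can be illustrated with a random-walk Metropolis sampler on a toy travel-time inversion for a single wave speed. The forward model, prior, and noise level are invented for illustration and contain none of the multilevel/multiscale machinery described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: travel time t = L / c for known offsets L, unknown c
L = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
c_true, sigma = 3.0, 0.02
data = L / c_true + rng.normal(0.0, sigma, L.size)

def log_post(c):
    """Log posterior: uniform prior on (1, 10) times a Gaussian likelihood."""
    if not (1.0 < c < 10.0):
        return -np.inf
    r = data - L / c
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis: propose, then accept with prob. min(1, ratio)
c, lp = 2.0, log_post(2.0)
chain = []
for _ in range(20000):
    cp = c + 0.05 * rng.normal()   # forward run happens here for each proposal
    lpp = log_post(cp)
    if np.log(rng.random()) < lpp - lp:
        c, lp = cp, lpp
    chain.append(c)
post = np.array(chain[5000:])      # discard burn-in
```

    Each proposal costs one forward solve, which is exactly why the paper's cheap multilevel forward simulations and approximate-posterior preconditioning pay off for realistic wave-propagation models.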

  11. A Short Term Seismic Hazard Assessment in Christchurch, New Zealand, After the M 7.1, 4 September 2010 Darfield Earthquake: An Application of a Smoothing Kernel and Rate-and-State Friction Model

    Directory of Open Access Journals (Sweden)

    Chung-Han Chan

    2012-01-01

    The Mw 6.3, 21 February 2011 Christchurch, New Zealand, earthquake is regarded as an aftershock of the M 7.1, 4 September 2010 Darfield earthquake. However, it caused severe damage in downtown Christchurch. Such a circumstance points out the importance of an aftershock sequence in seismic hazard evaluation and suggests re-evaluating seismic hazard immediately after a large earthquake occurrence. For this purpose, we propose a probabilistic seismic hazard assessment (PSHA) that takes the disturbance of the short-term seismicity rate into account and can be easily applied in comparison with the classical PSHA. In our approach, the treatment of the background seismicity rate is the same as in the zoneless approach, which considers a bandwidth function as a smoothing kernel in the neighboring region of earthquakes. The rate-and-state friction model driven by the Coulomb stress change of large earthquakes is used to calculate the fault-interaction-based disturbance in the seismicity rate for the PSHA. We apply this approach to evaluate the seismic hazard in Christchurch after the occurrence of the M 7.1, 4 September 2010 Darfield earthquake. Results show an increase of seismic hazard due to the stress increase in the region around the rupture plane, which extended to Christchurch. This provides a suitable basis for the application of a time-dependent PSHA using updated earthquake information.
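
    The zoneless background rate in such approaches is essentially a kernel sum over past epicenters. A fixed-bandwidth Gaussian sketch follows; the paper's bandwidth function and rate-and-state disturbance are omitted, and the coordinates and bandwidth are illustrative.

```python
import numpy as np

def smoothed_rate(epicenters, grid, bandwidth):
    """Kernel estimate of the spatial seismicity density: each past
    epicenter contributes an isotropic 2-D Gaussian of width `bandwidth`
    (same length unit as the coordinates)."""
    d2 = np.sum((grid[:, None, :] - epicenters[None, :, :]) ** 2, axis=-1)
    k = np.exp(-0.5 * d2 / bandwidth**2) / (2.0 * np.pi * bandwidth**2)
    return k.sum(axis=1)   # events per unit area, before rate normalisation

# Toy catalogue: a cluster near (0, 0) and one distant stray event
eq = np.array([[0.0, 0.0], [1.0, 0.5], [-0.5, 1.0], [30.0, 30.0]])
xs = np.linspace(-5.0, 35.0, 41)
grid = np.array([[x, y] for x in xs for y in xs])
rate = smoothed_rate(eq, grid, bandwidth=2.0)
```

    An adaptive version would let `bandwidth` shrink where epicenters are dense and grow where they are sparse, which is the idea behind the adaptively smoothed seismicity models in this collection.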

  12. Compression of seismic data: filter banks and extended transforms, synthesis and adaptation; Compression de donnees sismiques: bancs de filtres et transformees etendues, synthese et adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Duval, L.

    2000-11-01

    Wavelet and wavelet packet transforms are the most commonly used algorithms for seismic data compression. Wavelet coefficients are generally quantized and encoded by classical entropy coding techniques. We first propose in this work a compression algorithm based on the wavelet transform. The wavelet transform is used together with a zero-tree type coding, in its first use for seismic applications. Classical wavelet transforms nevertheless yield a quite rigid approach, since it is often desirable to adapt the transform stage to the properties of each type of signal. We thus propose a second algorithm using, instead of wavelets, a set of so-called 'extended transforms'. These transforms, originating from filter bank theory, are parameterized. Classical examples are Malvar's Lapped Orthogonal Transforms (LOT) or de Queiroz et al.'s Generalized Lapped Orthogonal Transforms (GenLOT). We propose several optimization criteria to build 'extended transforms' that are adapted to the properties of seismic signals. We further show that these transforms can be used with the same zero-tree type coding technique as used with wavelets. Both proposed algorithms provide exact compression rate choice, block-wise compression (in the case of extended transforms), and partial decompression for quality control or visualization. Performance is tested on a set of actual seismic data and evaluated for several quality measures. We also compare the algorithms to other seismic compression algorithms. (author)

  13. Local Adaptive Calibration of the GLASS Surface Incident Shortwave Radiation Product Using Smoothing Spline

    Science.gov (United States)

    Zhang, X.; Liang, S.; Wang, G.

    2015-12-01

    Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: the surface direct measurements provide accurate but spatially sparse coverage, whereas other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using ISR data reconstructed from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based reconstructed surface ISR was used as the response variable, with the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables, to train the thin-plate spline model. We evaluated the performance of the approach using the cross-validation method at both daily and monthly time scales over China. We also evaluated the estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicated that the thin-plate smoothing spline method can be effectively used to calibrate satellite-derived ISR products against ground measurements to achieve better accuracy.
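
    A thin-plate smoothing spline regression of station values on (satellite product, elevation) can be sketched with SciPy's `RBFInterpolator`, whose default kernel is the thin-plate spline and whose `smoothing` argument penalizes roughness. The synthetic elevation-dependent bias and all numbers below are purely illustrative, not the GLASS calibration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Synthetic "stations": the satellite ISR is biased, and the bias
# grows with elevation (a hypothetical error model for illustration)
n = 200
sat = rng.uniform(100.0, 300.0, n)        # satellite ISR (W m^-2)
elev = rng.uniform(0.0, 4.0, n)           # elevation (km)
truth = sat - 10.0 * elev + 5.0           # assumed ground-truth ISR
ground = truth + rng.normal(0.0, 3.0, n)  # station measurements

# Thin-plate smoothing spline: response = station ISR,
# explanatory variables = (satellite ISR, elevation)
X = np.column_stack([sat, elev])
model = RBFInterpolator(X, ground, kernel="thin_plate_spline",
                        smoothing=50.0)

# "Calibrate" a satellite pixel of 200 W m^-2 at 2 km elevation
calibrated = float(model(np.array([[200.0, 2.0]]))[0])
```

    In practice the predictors should be scaled comparably before fitting; here the linear polynomial tail of the thin-plate spline absorbs the (linear) bias regardless of scaling.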

  14. Gravitational self-organizing map-based seismic image classification with an adaptive spectral-textural descriptor

    Science.gov (United States)

    Hao, Yanling; Sun, Genyun

    2016-10-01

    Seismic image classification is of vital importance for extracting damage information and evaluating disaster losses. With the increasing availability of high resolution remote sensing images, automatic image classification offers a unique opportunity to accommodate rapid damage mapping requirements. However, the diversity of disaster types and the lack of uniform statistical characteristics in seismic images increase the complexity of automated image classification. This paper presents a novel automatic seismic image classification approach that integrates an adaptive spectral-textural descriptor into a gravitational self-organizing map (gSOM). In this approach, a seismic image is first segmented into several objects based on the mean shift (MS) method. These objects are then characterized explicitly by spectral and textural feature quantization histograms. To make the image object delineation adaptable to various disaster types, an adaptive spectral-textural descriptor is developed by integrating the histograms automatically. Subsequently, these objects, as classification units, are represented by neurons in a self-organizing map and clustered by adjacency gravitation. By moving the neurons around the gravitational space and merging them according to the gravitation, the object-based gSOM is able to find arbitrary shapes and determine the class number automatically. Taking advantage of the diversity of gSOM results, a consensus function is then applied to discover the most suitable classification result. To confirm the validity of the presented approach, three aerial seismic images of Wenchuan covering several disaster types are utilized. The obtained quantitative and qualitative experimental results demonstrate the feasibility and accuracy of the proposed seismic image classification method.

  15. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  16. Adaptive Parametric Spectral Estimation with Kalman Smoothing for Online Early Seizure Detection

    Science.gov (United States)

    Park, Yun S.; Hochberg, Leigh R.; Eskandar, Emad N.; Cash, Sydney S.; Truccolo, Wilson

    2014-01-01

    Tracking spectral changes in neural signals, such as local field potentials (LFPs) and scalp or intracranial electroencephalograms (EEG, iEEG), is an important problem in early detection and prediction of seizures. Most approaches have focused on either parametric or nonparametric spectral estimation methods based on moving time windows. Here, we explore an adaptive (time-varying) parametric ARMA approach for tracking spectral changes in neural signals based on the fixed-interval Kalman smoother. We apply the method to seizure detection based on spectral features of intracortical LFPs recorded from a person with pharmacologically intractable focal epilepsy. We also devise and test an approach for real-time tracking of spectra based on the adaptive parametric method with the fixed-interval Kalman smoother. The order of ARMA models is determined via the AIC computed in moving time windows. We quantitatively demonstrate the advantages of using the adaptive parametric estimation method in seizure detection over nonparametric alternatives based exclusively on moving time windows. Overall, the adaptive parametric approach significantly improves the statistical separability of interictal and ictal epochs. PMID:24663686
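
    Tracking time-varying AR coefficients with a Kalman filter (the forward pass underlying a fixed-interval smoother) can be sketched as follows. The AR(2) signal, random-walk state model, and noise variances are illustrative assumptions; the backward smoothing pass and the AIC-based order selection described in the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(2) signal whose coefficients switch halfway through the record
n = 4000
a_true = np.where(np.arange(n)[:, None] < n // 2, [1.7, -0.8], [1.2, -0.8])
y = np.zeros(n)
for t in range(2, n):
    y[t] = a_true[t, 0] * y[t - 1] + a_true[t, 1] * y[t - 2] \
        + rng.normal(0.0, 0.1)

# Kalman filter with the AR coefficients as a random-walk state vector:
#   a_t = a_{t-1} + w_t,   y_t = [y_{t-1}, y_{t-2}] a_t + v_t
a_est = np.zeros(2)      # current coefficient estimate
P = np.eye(2)            # its covariance
q, r = 1e-5, 0.01        # random-walk and observation noise variances
est = np.zeros((n, 2))
for t in range(2, n):
    P = P + q * np.eye(2)                    # predict: random-walk state
    h = np.array([y[t - 1], y[t - 2]])       # regression "observation row"
    s = h @ P @ h + r                        # innovation variance
    k = P @ h / s                            # Kalman gain
    a_est = a_est + k * (y[t] - h @ a_est)   # innovation update
    P = P - np.outer(k, h) @ P
    est[t] = a_est
```

    The tracked coefficients (and hence the implied AR spectrum, whose poles they determine) follow the mid-record change after a short lag set by the ratio q/r, which plays the role that window length plays in moving-window estimators.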

  17. The smart cluster method. Adaptive earthquake cluster identification and analysis in strong seismic regions

    Science.gov (United States)

    Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann

    2017-07-01

    Earthquake clustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods. Its performance is comparable to established methodologies. The analysis of earthquake clustering statistics leads to various new and updated correlation functions, e.g. for ratios between the mainshock and the strongest aftershock and general aftershock activity metrics.
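
    The core of any such cluster identification is a spatio-temporal linking rule plus a union of linked event pairs. A much-simplified fixed-window sketch follows; the SCM's adaptive, magnitude-dependent and directional search is not reproduced, and the windows and toy catalogue are illustrative.

```python
import numpy as np

def link_clusters(times, lats, lons, t_win=10.0, d_win=50.0):
    """Greedy spatio-temporal linking: two events join the same cluster
    when separated by less than t_win days and d_win km (flat-earth
    distance); connected components are found with union-find."""
    n = len(times)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    km_per_deg = 111.19
    for i in range(n):
        for j in range(i + 1, n):
            dt = abs(times[j] - times[i])
            dx = km_per_deg * (lons[j] - lons[i]) * np.cos(np.radians(lats[i]))
            dy = km_per_deg * (lats[j] - lats[i])
            if dt < t_win and np.hypot(dx, dy) < d_win:
                parent[find(j)] = find(i)
    return [find(i) for i in range(n)]

# Toy catalogue (days, deg): a mainshock-aftershock cluster + a stray event
times = np.array([0.0, 1.0, 2.5, 3.0, 100.0])
lats = np.array([40.0, 40.1, 40.05, 39.95, 45.0])
lons = np.array([20.0, 20.1, 19.95, 20.05, 25.0])
labels = link_clusters(times, lats, lons)
```

    Adaptive methods like the SCM replace the fixed `t_win`/`d_win` with windows derived from the local magnitude-dependent event density, which is what lets them handle both dense aftershock sequences and sparse background seismicity.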

  18. The smart cluster method - Adaptive earthquake cluster identification and analysis in strong seismic regions

    Science.gov (United States)

    Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann

    2017-03-01

    Earthquake clustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods. Its performance is comparable to established methodologies. The analysis of earthquake clustering statistics leads to various new and updated correlation functions, e.g. for ratios between the mainshock and the strongest aftershock and general aftershock activity metrics.

  19. Adaptive Multilevel Methods with Local Smoothing for $H^1$- and $H^{\\mathrm{curl}}$-Conforming High Order Finite Element Methods

    KAUST Repository

    Janssen, Bärbel

    2011-01-01

    A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.

  20. Equalizing resolution in smoothed particle hydrodynamics calculations using self-adaptive sinc kernels

    CERN Document Server

    García-Senz, Domingo; Escartín, José A; Ebinger, Kevin

    2014-01-01

    The smoothed particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems that has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. In this work, we propose and check a scheme to balance and equalize the resolution of SPH between high and low density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows the quality of the interpolations to be increased simply by varying a single parameter (the exponent of the sinc function). The scheme is checked and validated through a number of numerical tests, going from standard one-dimensional Riemann problems in shock tubes to multidimensional simulations of explosions, hydrodynamic instabilities and the collapse of a sun-like polytrope. The analysis of the hydrodynamical simulations suggests that the scheme d...
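
    The sinc-kernel family can be written down in a few lines; this sketch (numerical normalisation by simple quadrature, exponent values chosen arbitrarily) only illustrates how a single exponent controls the kernel shape.

```python
import numpy as np

def sinc_kernel(q, n):
    """Un-normalised sinc kernel [sinc(pi*q/2)]**n with compact support
    on q in [0, 2]; larger exponents n give a more centrally peaked
    interpolator (np.sinc(x) = sin(pi*x)/(pi*x))."""
    return np.where(q < 2.0, np.sinc(q / 2.0) ** n, 0.0)

def norm_3d(n, nq=4000):
    """Numerical 3-D normalisation constant B_n so that
    4*pi * integral of W(q) q^2 dq over [0, 2] equals 1."""
    q = np.linspace(0.0, 2.0, nq)
    dq = q[1] - q[0]
    return 1.0 / (4.0 * np.pi * np.sum(sinc_kernel(q, n) * q**2) * dq)
```

    Because the un-normalised kernel shrinks pointwise as the exponent grows, the normalisation constant must increase with it, which is the knob an adaptive scheme can turn per particle to equalize effective resolution.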

  1. Adaptive smoothing of high angular resolution diffusion-weighted imaging data by generalized cross-validation improves Q-ball orientation distribution function reconstruction.

    Science.gov (United States)

    Metwalli, Nader S; Hu, Xiaoping P; Carew, John D

    2010-09-01

    Q-ball imaging (QBI) is a high angular resolution diffusion-weighted imaging (HARDI) technique for reconstructing the orientation distribution function (ODF). Some form of smoothing or regularization is typically required in the ODF reconstruction from low signal-to-noise ratio HARDI data. The amount of smoothing or regularization is usually set a priori at the discretion of the investigator. In this article, we apply an adaptive and objective means of smoothing the raw HARDI data using the smoothing splines on the sphere method with generalized cross-validation (GCV) to estimate the diffusivity profile in each voxel. Subsequently, we reconstruct the ODF, from the smoothed data, based on the Funk-Radon transform (FRT) used in QBI. The spline method was applied to both simulated data and in vivo human brain data. Simulated data show that the smoothing splines on the sphere method with GCV smoothing reduces the mean squared error in estimates of the ODF as compared with the standard analytical QBI approach. The human data demonstrate the utility of the method for estimating smooth ODFs.

  2. Smoothed Biasing Forces Yield Unbiased Free Energies with the Extended-System Adaptive Biasing Force Method.

    Science.gov (United States)

    Lesage, Adrien; Lelièvre, Tony; Stoltz, Gabriel; Hénin, Jérôme

    2016-12-27

    We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; it therefore does not require second-derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters.

  3. A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal

    CERN Document Server

    Pham, Mai Quyen; Chaux, Caroline; Pesquet, Jean-Christophe

    2014-01-01

    Unveiling meaningful geophysical information from seismic data requires dealing with both random and structured "noises". As their amplitude may be greater than that of the signals of interest (primaries), additional prior information is especially important for performing efficient signal separation. We address here the problem of multiple reflections, caused by wave-field bouncing between layers. Since only approximate models of these phenomena are available, we propose a flexible framework for time-varying adaptive filtering of seismic signals, using sparse representations, based on inaccurate templates. We recast the joint estimation of adaptive filters and primaries in a new convex variational formulation. This approach allows us to incorporate plausible knowledge about noise statistics, data sparsity and slow filter variation in parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a constrained minimization problem that alleviates standard regularization issues in finding hyperparameters. Th...

  4. Improvement of Detection of Hypoattenuation in Acute Ischemic Stroke in Unenhanced Computed Tomography Using an Adaptive Smoothing Filter

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, N.; Lee, Y.; Tsai, D. Y.; Ishii, K.; Kinoshita, T.; Tamura, H.; Kimura, M. (Dept. of Radiological Technology, School of Health Sciences, Niigata Univ., Niigata (Japan))

    2008-09-15

    Background: Much attention has been directed toward identifying early signs of cerebral ischemia on computed tomography (CT) images. Hypoattenuation of ischemic brain parenchyma has been found to be the most frequent early sign. Purpose: To evaluate the effect of a previously proposed adaptive smoothing filter for improving detection of parenchymal hypoattenuation of acute ischemic stroke on unenhanced CT images. Material and Methods: Twenty-six patients with parenchymal hypoattenuation and 49 control subjects without hypoattenuation were retrospectively selected for this study. The adaptive partial median filter (APMF), designed to improve the detectability of hypoattenuation areas on unenhanced CT images, was applied. Seven radiologists, including four certified radiologists and three radiology residents, indicated their confidence level regarding the presence (or absence) of hypoattenuation on CT images, first without and then with the APMF-processed images. Their performances without and with the APMF-processed images were evaluated by receiver operating characteristic (ROC) analysis. Results: The mean area under the ROC curve (AUC) for all observers increased from 0.875 to 0.929 (P=0.002) when the radiologists observed the APMF-processed images. The mean sensitivity in the detection of hypoattenuation improved significantly, from 69% (126 of 182 observations) to 89% (151 of 182 observations), when employing the APMF (P=0.012). The specificity, however, was unaffected by the APMF (P=0.41). Conclusion: The APMF has the potential to improve the detection of parenchymal hypoattenuation of acute ischemic stroke on unenhanced CT images.

  5. A vermal Purkinje cell simple spike population response encodes the changes in eye movement kinematics due to smooth pursuit adaptation.

    Directory of Open Access Journals (Sweden)

    Suryadeep Dash

    2013-03-01

    Smooth pursuit adaptation (SPA) is an example of cerebellum-dependent motor learning that depends on the integrity of the oculomotor vermis (OMV). In an attempt to unveil the neuronal basis of the role of the OMV in SPA, we recorded Purkinje cell simple spikes (PC SS) of trained monkeys. Individual PC SS exhibited specific changes in their discharge patterns during the course of SPA. However, these individual changes did not provide a reliable explanation of the behavioural changes. On the other hand, the population response of the PC SS perfectly reflected the changes resulting from adaptation. The population vector was calculated using all recorded cells, independent of their location. A population code conveying the behavioural changes is in full accordance with the anatomical convergence of PC axons on target neurons in the cerebellar nuclei. Its computational advantage is the ease with which it can be adjusted to the needs of the behaviour by changing the contribution of individual PC SS based on error feedback.
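The population-vector readout mentioned above can be sketched as a rate-weighted sum of preferred directions. This is a minimal numpy illustration, not the authors' analysis; weighting by error feedback would add a learned gain per cell.

```python
import numpy as np

def population_vector(rates, preferred):
    """Population-vector readout sketch: each cell contributes its preferred
    direction weighted by its firing rate; the vector sum encodes the
    behavioural variable."""
    rates = np.asarray(rates, dtype=float)
    preferred = np.asarray(preferred, dtype=float)
    return (rates[:, None] * preferred).sum(axis=0)
```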

  6. A Fast Block-Matching Algorithm Using Smooth Motion Vector Field Adaptive Search Technique

    Institute of Scientific and Technical Information of China (English)

    LI Bo(李波); LI Wei(李炜); TU YaMing(涂亚明)

    2003-01-01

    In many video standards based on inter-frame compression, such as H.26x and MPEG, the block-matching algorithm has been widely adopted as the method for motion estimation because of its simplicity and effectiveness. Nevertheless, since motion estimation is computationally complex, fast algorithms for motion estimation have always been an important and attractive topic in video compression. From the viewpoint of making the motion vector field smoother, this paper proposes a new algorithm, SMVFAST. On the basis of motion correlation, it predicts the starting point from neighboring motion vectors according to their SADs. Adaptive search modes are used in its search process by simply classifying motion activity. After discovering the ubiquitous ratio between the SADs of collocated blocks in consecutive frames, the paper proposes an effective half-stop criterion that can quickly stop the search process with good-enough results. Experiments show that SMVFAST obtains almost the same results as the full search at very low computational cost, and outperforms MVFAST and PMVFAST, which were adopted by MPEG-4, in both speed and quality.
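For reference, the exhaustive full-search SAD baseline that SMVFAST and related fast algorithms approximate (by predicting a start point and stopping early) can be sketched as follows; the window size and signature are illustrative.

```python
import numpy as np

def best_match(block, frame, top, left, search=4):
    """Exhaustive SAD block matching in a +/-search window around the
    block's previous position (top, left); returns the motion vector
    (dy, dx) minimizing the sum of absolute differences."""
    h, w = block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            sad = np.abs(frame[y:y + h, x:x + w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

Fast methods cut the O(search^2) candidate count; the half-stop criterion in the paper abandons the loop once the SAD falls below a frame-derived threshold.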

  7. Detectability improvement of early sign of acute stroke on brain CT images using an adaptive partial smoothing filter

    Science.gov (United States)

    Lee, Yongbum; Takahashi, Noriyuki; Tsai, Du-Yih; Fujita, Hiroshi

    2006-03-01

    Detection of early infarct signs on non-enhanced CT is mandatory in patients with acute ischemic stroke. We present a method for improving the detectability of early infarct signs of acute ischemic stroke, considered a first step toward computer-aided diagnosis of acute ischemic stroke. Obscuration of the gray-white matter interface at the lentiform nucleus or the insular ribbon is an important early infarct sign, which affects decisions on thrombolytic therapy. However, its detection is difficult, since the early infarct sign is subtle hypoattenuation. In order to improve its detectability, an image-processing method that reduces local noise while preserving edges is desirable. To cope with this issue, we devised an adaptive partial smoothing filter (APSF). Because the APSF can markedly improve the visibility of the normal gray-white matter interface, the conspicuity of obscuration of the gray-white matter interface due to hypoattenuation can be increased. The APSF is a specifically designed filter that performs local smoothing using a variable filter size determined by the distribution of pixel values of edges in the region of interest. By adjusting four parameters of the APSF, an optimal condition for image enhancement can be obtained. In order to determine the major parameter, a preliminary simulation was performed using composite images simulating the gray-white matter. The APSF configured from this preliminary simulation was applied to several clinical CT scans of hyperacute stroke patients. The results showed that the detectability of early infarct signs was much improved.
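A toy version of the variable-window idea behind the APSF: smooth aggressively where the patch is flat, back off near edges. This is not the authors' exact filter; the local-range edge test and the three parameters are illustrative assumptions.

```python
import numpy as np

def adaptive_partial_smooth(img, r_small=1, r_large=3, edge_range=20.0):
    """Variable-size median smoothing sketch: the window shrinks to r_small
    wherever the local intensity range exceeds edge_range (a crude edge
    detector), and stays at r_large in flat regions."""
    img = img.astype(float)
    out = img.copy()
    H, W = img.shape
    for i in range(r_large, H - r_large):
        for j in range(r_large, W - r_large):
            patch = img[i - r_large:i + r_large + 1, j - r_large:j + r_large + 1]
            r = r_small if patch.max() - patch.min() > edge_range else r_large
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out
```

On a flat noisy region the large window suppresses noise; across a sharp step the small window keeps the edge in place, which is the property that makes subtle hypoattenuation easier to see.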

  8. Climate services for adapting landslide hazard prevention measures in the Vrancea Seismic Region

    Science.gov (United States)

    Micu, Dana; Balteanu, Dan; Jurchescu, Marta; Sima, Mihaela; Micu, Mihai

    2014-05-01

    The Vrancea Seismic Region covers an area of about 8 000 km2 in the Romanian Curvature Carpathians and Subcarpathians and is considered one of Europe's most intensely multi-hazard-affected areas. Due to its geomorphic traits (heterogeneous morphostructural units of flysch mountains and molasse hills and depressions), the area is strongly impacted by extreme hydro-meteorological events, which potentially enhance the numerous damages inflicted on a dense network of human settlements. A priori knowledge of future climate change is a useful climate service for local authorities developing regional adaptation strategies and adequate prevention/preparedness frameworks. This paper aims at integrating the results of high-resolution climate projections over the 21st century (within the FP7 ECLISE project) into the regional landslide hazard assessment. The requirements of users (civil protection, land management, local authorities) for this area concern reliable, high-resolution spatial data on landslide and flood hazard for short- and medium-term risk management strategies. An insight into the future behavior of climate variability in the Vrancea Seismic Region, based on the climate projections of three regional models under three RCPs (2.6, 4.5, 8.5), suggests a clear warming, both annually and seasonally, and a rather limited annual precipitation decrease, but with a strong change in seasonality. A landslide inventory of 2485 cases (shallow and medium-seated earth, debris and rock slides, and earth and debris flows) was compiled based on large-scale geomorphological mapping supported by aerial photos (GeoEye, DigitalGlobe; provided by GoogleEarth and BingMaps). The landslides are uniformly distributed across the area and are considered representative of the entire morphostructural environment. A landslide susceptibility map was obtained using multivariate statistical analysis (logistic regression), while a relative landslide hazard index was computed

  9. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    Science.gov (United States)

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme without the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method outperforms the others even with fewer samples.
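The Landweber-plus-thresholding loop at the heart of such recovery can be sketched as follows. For brevity this uses the identity sparsifying basis and a fixed threshold, where the paper uses wavelet bases and adaptive thresholding; names and parameters are illustrative.

```python
import numpy as np

def landweber_sparse(y, Phi, n_iter=500, tau=0.02):
    """Projected-Landweber-style sparse recovery sketch: a Landweber
    gradient step on ||y - Phi x||^2 followed by a soft threshold, which
    plays the role of the sparsifying projection."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / spectral-norm^2
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)               # Landweber update
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # soft threshold
    return x
```

With an orthonormal wavelet transform W, the threshold step would be applied to W x and inverted, which is how the smoothed projected Landweber scheme enforces wavelet-domain sparsity.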

  10. Relative seismic shaking vulnerability microzonation using an adaptation of the Nakamura Horizontal to Vertical Spectral Ratio Method

    Indian Academy of Sciences (India)

    Michael L Turnbull

    2008-11-01

    An alternative to computationally intensive theoretical modelling of site response to earthquakes, and to time-consuming test-versus-reference-site horizontal ratio methods, for seismic shaking vulnerability surveys is described. The methodology is suitable for small- to large-scale engineering investigations. Relative seismic shaking vulnerability microzonation using an adaptation of the Nakamura horizontal-to-vertical spectral ratio method provides many advantages over alternative methods, including: low cost; a rapid field phase (100 km2 can easily be covered by a single operator in 5 days); low and flexible instrumentation requirements (a single seismometer and data logger of almost any type is required); field data can be collected at any time of day or night (the results are insensitive to ambient social noise); no basement rock reference site is required (thus eliminating trigger synchronisation between reference and multiple test-site seismographs); rapid software-aided analysis; insensitivity to ground-shaking resonance peaks; and the ability to compare results obtained from non-contiguous survey fields. The methodology is described in detail, and a practical case study is provided, including mapped results. The resulting microzonation maps indicate the relative seismic shaking vulnerability for built structures of different height categories within adjacent zones, with a resolution of approximately 1 km.
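The H/V computation underlying the Nakamura method can be sketched as below: the merged horizontal amplitude spectrum divided by the vertical one. This uses a single untapered window and no spectral smoothing for brevity; real surveys average many tapered windows.

```python
import numpy as np

def hvsr(vert, north, east, fs):
    """Nakamura horizontal-to-vertical spectral ratio sketch: peaks of
    H/Z flag site resonance frequencies."""
    freqs = np.fft.rfftfreq(len(vert), 1.0 / fs)
    Z = np.abs(np.fft.rfft(vert))
    H = np.hypot(np.abs(np.fft.rfft(north)), np.abs(np.fft.rfft(east)))
    return freqs, H / np.maximum(Z, 1e-12)   # guard against division by zero
```

Because the ratio is taken at the same site, no basement-rock reference station is needed, which is the key operational advantage the record lists.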

  11. Highly-optimized TWSM software package for seismic diffraction modeling adapted for GPU-cluster

    Science.gov (United States)

    Zyatkov, Nikolay; Ayzenberg, Alena; Aizenberg, Arkady

    2015-04-01

    Oil-producing companies seek to increase the resolution of seismic data for complex oil-and-gas-bearing deposits connected with salt domes, basalt traps, reefs, lenses, etc. Known methods of seismic wave theory define the shape of hydrocarbon accumulations with insufficient resolution, since they do not account for multiple diffractions explicitly. We elaborate an alternative seismic wave theory in terms of operators of propagation in layers and reflection-transmission at curved interfaces. An approximation of this theory is realized in the seismic frequency range as the Tip-Wave Superposition Method (TWSM). TWSM, based on the operator theory, allows evaluation of the wavefield in bounded domains/layers with geometrical shadow zones (in nature: salt domes, basalt traps, reefs, lenses, etc.), accounting for so-called cascade diffraction. Cascade diffraction includes edge waves from sharp edges, creeping waves near concave parts of interfaces, waves of the whispering galleries near convex parts of interfaces, etc. The basic algorithm of the TWSM package is based on multiplication of large matrices (up to hundreds of terabytes in size). We use advanced information technologies for effective realization of the numerical procedures of the TWSM. In particular, we actively use NVIDIA CUDA technology and GPU accelerators, which significantly improve the performance of the TWSM software package; this is important when using it for direct and inverse problems. The accuracy, stability and efficiency of the algorithm are justified by numerical examples with curved interfaces. The TWSM package and its separate components can be used in different modeling tasks, such as planning of acquisition systems, physical interpretation of laboratory modeling, and modeling of individual waves of different types, and in some inverse tasks, such as imaging in the case of a laterally inhomogeneous overburden and AVO inversion.

  12. Adaptive Polarization Filtering for Improving the S/N of Station Seismic Data

    Institute of Scientific and Technical Information of China (English)

    马见青; 李庆春

    2014-01-01

    length window do not have time-varying characteristics. The filtering results of the SCM are relatively stable, insensitive to disturbance, and unable to determine the polarization parameters at the beginning and end of the seismic record. Thus, the filtering effect is not ideal and will inevitably appear glossy in interpretation. For this reason, the present study introduces a new polarization method based on the adaptive covariance matrix (ACM). We use an approximate formula to compute the adaptive window function, in which the length is adapted to the instantaneous frequency of three-component seismic data. In particular, the window length of the covariance matrix adjusts to the minimum cycle of the desired signal, which reduces the factitious impact of the window length selection. In addition, there is no need for interpolation processing, because the polarization parameters are computed at every time sampling point of the three-component seismic records, except for one-half of the time window at the start and end points. Due to the above advantages, ACM greatly improves the filtering accuracy. The processing results of the model and actual three-component station seismic data show that the SCM represents a smoothed version of the instantaneous attributes from the ACM. Furthermore, because the time window is fixed for the standard method, it is not possible to characterize the polarization attributes of a seismic event with a period lower than that of the time window used for the analysis. The polarization curves computed by SCM and ACM agree quite well in the region in which the period of the dominant signal is close to the time window selected for the covariance analysis, which greatly reduces the effective signal waveform differences before and after filtering. With ACM, a comparison of the original signal and that after filtering reveals that almost no high-frequency interference exists near the effective signal. In general, the significantly different regions in which the two curves from
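The covariance-matrix polarization analysis that both SCM and ACM perform (they differ only in how the time window is chosen) can be sketched via the standard rectilinearity attribute; this is a generic illustration, not the authors' ACM implementation.

```python
import numpy as np

def rectilinearity(x, y, z):
    """Polarization attribute from the 3-component covariance matrix:
    with eigenvalues l1 >= l2 >= l3, rectilinearity = 1 - (l2 + l3)/(2*l1),
    near 1 for linearly polarized body waves and near 0 for isotropic noise."""
    C = np.cov(np.vstack([x, y, z]))
    l3, l2, l1 = np.linalg.eigvalsh(C)   # eigvalsh returns ascending order
    return 1.0 - (l2 + l3) / (2.0 * l1)
```

An adaptive filter weights each sample by an attribute like this, computed in a window matched to the local dominant period.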

  13. Smooth magnetogenesis

    CERN Document Server

    Campanelli, L

    2016-01-01

    In the Ratra scenario of inflationary magnetogenesis, the kinematic coupling between the photon and the inflaton undergoes a nonanalytical jump at the end of inflation. Using smooth interpolating analytical forms of the coupling function, we show that such an unphysical jump does not invalidate the main prediction of the model, which still represents a viable mechanism for explaining cosmic magnetization. Nevertheless, there is a spurious result associated with the nonanalyticity of the coupling, to wit, the prediction that the spectrum of created photons has a power-law decay in the ultraviolet regime. This issue is discussed using both the semiclassical approximation and smooth coupling functions.

  14. The spatial data-adaptive minimum-variance distortionless-response beamformer on seismic single-sensor data

    NARCIS (Netherlands)

    Panea, I.; Drijkoningen, G.G.

    2008-01-01

    Coherent noise generated by surface waves or ground roll within a heterogeneous near surface is a major problem in land seismic data. Array forming based on single-sensor recordings might reduce such noise more robustly than conventional hardwired arrays. We use the minimum-variance

  15. Smoothed Invariants

    CERN Document Server

    Dye, H A

    2011-01-01

    We construct two knot invariants. The first knot invariant is a sum constructed using linking numbers. The second is an invariant of flat knots and is a formal sum of flat knots obtained by smoothing pairs of crossings. This invariant can be used in conjunction with other flat invariants, forming a family of invariants. Both invariants are constructed using the parity of a crossing.

  16. Seismic Symphonies

    Science.gov (United States)

    Strinna, Elisa; Ferrari, Graziano

    2015-04-01

    The project started in 2008 as a sound installation, a collaboration between an artist, a barrel organ builder, and a seismologist. The work differs from other attempts at sound transposition of seismic records: seismic frequencies are not converted automatically into the "sound of the earthquake." Instead, a musical translation system was devised that, based on the organ's tonal scale, generates a totally unexpected sequence of sounds intended to evoke the emotions aroused by the earthquake. The symphonies proposed in the project have somewhat peculiar origins: they come to life from the translation of graphic tracks into a sound track. The graphic tracks in question are copies of seismograms recorded during earthquakes that have taken place around the world. Seismograms are translated into music by a sculpture-instrument, half seismograph and half barrel organ. The organ plays through holes punched in paper. Adapting the documents to the instrument's score, holes have been drilled at the waves' peaks. The organ covers about three tonal scales; starting from heavy and deep sounds, it reaches up to high and jarring notes. The translation of the seismic records is based on a criterion that matches the highest sounds to the largest amplitudes and the lowest sounds to the smallest. Translating the seismogram into the organ score, the larger the amplitude of the recorded waves, the more the seismogram covers the full tonal scale played by the barrel organ, and the notes arouse an intense emotional response in the listener. Elisa Strinna's Seismic Symphonies installation becomes an unprecedented tool for emotional involvement, through which the memory of the greatest disasters of over a century of the Earth's seismic history can be revived. A bridge between art and science. Seismic Symphonies is also a symbolic inversion: the organ is most commonly used in churches, and its sounds are derived from the heavens and

  17. Seismic Technology Adapted to Analyzing and Developing Geothermal Systems Below Surface-Exposed High-Velocity Rocks Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Hardage, Bob A. [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology; DeAngelo, Michael V. [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology; Ermolaeva, Elena [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology; Hardage, Bob A. [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology; Remington, Randy [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology; Sava, Diana [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology; Wagner, Donald [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology; Wei, Shuijion [Univ. of Texas, Austin, TX (United States). Bureau of Economic Geology

    2013-02-01

    The objective of our research was to develop and demonstrate seismic data-acquisition and data-processing technologies that allow geothermal prospects below high-velocity rock outcrops to be evaluated. To do this, we acquired a 3-component seismic test line across an area of exposed high-velocity rocks in Brewster County, Texas, where there is high heat flow and surface conditions mimic those found at numerous geothermal prospects. Seismic contractors have not succeeded in creating good-quality seismic data in this area for companies who have acquired data for oil and gas exploitation purposes. Our test profile traversed an area where high-velocity rocks and low-velocity sediment were exposed at the surface in alternating patterns that repeated along the test line. We verified that these surface conditions cause unending reverberations of Love waves, Rayleigh waves, and shallow critical refractions, which travel across the Earth's surface between the boundaries of the fast-velocity and slow-velocity material exposed at the surface. These reverberating surface waves form the high level of noise in this area that does not allow reflections from deep interfaces to be seen and utilized. Our data-acquisition method of deploying a box array of closely spaced geophones allowed us to recognize and evaluate these surface-wave noise modes regardless of the azimuth direction to the surface anomaly that backscattered the waves and caused them to return to the test-line profile. With this knowledge of the surface-wave noise, we were able to process these test-line data to create P-P and SH-SH images that were superior to those produced by a skilled seismic data-processing contractor. Compared to the P-P data acquired along the test line, the SH-SH data provided better detection of faults and could be used to trace these faults upward to the boundaries of exposed surface rocks. We expanded our comparison of the relative value of S-wave and P-wave seismic data for geothermal

  18. Methods for Estimating Mean Annual Rate of Earthquakes in Moderate and Low Seismicity Regions~

    Institute of Scientific and Technical Information of China (English)

    Peng Yanju; Zhang Lifang; Lv Yuejun; Xie Zhuojuan

    2012-01-01

    Two kinds of methods for determining seismic parameters are presented: the potential seismic source zoning method and the grid-based spatial smoothing method. The Gaussian smoothing method and the modified Gaussian smoothing method are described in detail, and a comprehensive analysis of the advantages and disadvantages of these methods is made. We then take central China as the study region and use the Gaussian smoothing method and the potential seismic source zoning method to build seismicity models and calculate the mean annual seismic rate. Seismic hazard is calculated using the probabilistic seismic hazard analysis method to construct ground motion acceleration zoning maps. The differences between the maps produced by these models are discussed and their causes investigated. The results show that the spatial smoothing method is suitable for estimating seismic hazard over moderate- and low-seismicity regions, or the hazard caused by background seismicity, while the potential seismic source zoning method is suitable for estimating seismic hazard where the seismotectonics are well defined. Combining the spatial smoothing method and the potential seismic source zoning method, with an integrated account of the seismicity and known seismotectonics, is a feasible approach for estimating seismic hazard in moderate and low seismicity regions.
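The fixed-bandwidth Gaussian smoothing described above can be sketched as follows; an adaptive variant (as in the forecasts this collection is about) would let the bandwidth vary with local epicenter density. Function and parameter names are illustrative.

```python
import numpy as np

def smoothed_annual_rate(epi_x, epi_y, grid_x, grid_y, sigma, years):
    """Gaussian-kernel smoothed seismicity: each epicenter spreads one
    unit of probability over the grid as a 2-D Gaussian of bandwidth
    sigma; dividing by the catalog duration gives a mean annual rate
    per cell."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx, dtype=float)
    for x0, y0 in zip(epi_x, epi_y):
        k = np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2.0 * sigma ** 2))
        rate += k / k.sum()   # each event integrates to exactly one count
    return rate / years
```

Because every kernel is normalized on the grid, the map's total equals the catalog's mean annual event count, which keeps the forecast's overall rate consistent with the data.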

  19. Reproducibility in Seismic Imaging

    Directory of Open Access Journals (Sweden)

    González-Verdejo O.

    2012-04-01

    Within the field of exploration seismology, there is interest at the national level in integrating reproducibility into applied, educational, and research activities related to seismic processing and imaging. This reproducibility implies the description and organization of the elements involved in numerical experiments, so that a researcher, teacher, or student can study, verify, repeat, and modify them independently. In this work, we document and adapt reproducibility in seismic processing and imaging to spread this concept and its benefits, and to encourage the use of open-source software in this area within our academic and professional environment. We present an enhanced seismic imaging example, of interest in both academic and professional environments, using Mexican seismic data. As a result of this research, we show that it is possible to assimilate, adapt, and transfer technology at low cost, using open-source software and following a reproducible research scheme.

  20. Robust Detection and Classification of Regional Seismic Signals Using a Two Mode/Two Stage Cascaded Adaptive Arma (CAARMA) Model

    Science.gov (United States)

    1985-03-01

    adaptive algorithms presented earlier, we will employ the SHARF algorithm. The analysis by Ljung [30-31] provides a convergence proof for the RLMS algorithm. Since SHARF is not a gradient search algorithm, the convergence proof relies upon the concept of hyperstability [32-36]. A direct-form realization of the transfer function H(z) = B(z)/A(z) = (b_0 + b_1 z^-1 + ... + b_N z^-N) / (1 - a_1 z^-1 - ... - a_N z^-N) (3.45) is utilized by both the RLMS and SHARF

  1. An adaptive noise attenuation method for edge and amplitude preservation

    Institute of Scientific and Technical Information of China (English)

    Cai Han-Peng; He Zhen-Hua; Li Ya-Lin; He Guang-Ming; Zou Wen; Zhang Dong-Jun; Liu Pu

    2014-01-01

    Noise intensity distributed in seismic data varies with different frequencies or frequency bands; thus, noise attenuation on the full-frequency band affects the dynamic properties of the seismic reflection signal and the subsequent seismic data interpretation, reservoir description, hydrocarbon detection, etc. Hence, we propose an adaptive noise attenuation method for edge and amplitude preservation, wherein the wavelet packet transform is used to decompose the full-band seismic signal into multiband data, which are then processed using nonlinear anisotropic dip-oriented edge-preserving filtering. In the filtering, the diffusion tensor calculated from the structure tensor can be exploited to establish the direction of smoothing. In addition, the fault confidence measure and discontinuity operator can be used to preserve the structural and stratigraphic discontinuities and edges, and the decorrelation criteria can be used to establish the number of iterations. These parameters can minimize the intervention and subjectivity of the interpreter, and simplify the application of the proposed method. We applied the proposed method to synthetic and real 3D marine seismic data. We found that the proposed method could be used to attenuate noise in seismic data while preserving the effective discontinuity information and amplitude characteristics in seismic reflection waves, providing high-quality data for interpretation and analysis such as high-resolution processing, attribute analysis, and inversion.
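The structure-tensor step that supplies the smoothing direction can be sketched as below. A single global averaging window is used here for brevity, whereas the real filter computes per-sample tensors; the function name is an illustrative assumption.

```python
import numpy as np

def smoothing_direction(img):
    """Structure-tensor sketch: average the gradient outer product into
    J = [[<Ix*Ix>, <Ix*Iy>], [<Ix*Iy>, <Iy*Iy>]]; the eigenvector of the
    smallest eigenvalue points along the local structure, i.e. the
    direction in which edge-preserving smoothing should act."""
    gy, gx = np.gradient(img.astype(float))   # d/drow, d/dcol
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    vals, vecs = np.linalg.eigh(J)            # eigenvalues in ascending order
    return vecs[:, 0]                         # direction of least change
```

Diffusing only along this direction smooths parallel to reflectors while leaving faults and edges (the across-structure direction) untouched.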

  2. Relative Smooth Topological Spaces

    Directory of Open Access Journals (Sweden)

    B. Ghazanfari

    2009-01-01

    In 1992, Ramadan introduced the concept of a smooth topological space and its relation to fuzzy topological spaces in the sense of Chang (1968). In this paper we give a new definition of a smooth topological space. This definition can be considered as a generalization of the smooth topological space given by Ramadan. Some general properties, such as relative smooth continuity and relative smooth compactness, are studied.

  3. Smooth Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    M. K. EL Gayyar

    2016-08-01

    Full Text Available As a new branch of philosophy, the neutrosophy was presented by Smarandache in 1980. It was presented as the study of origin, nature, and scope of neutralities; as well as their interactions with different ideational spectra. The aim in this paper is to introduce the concepts of smooth neutrosophic topological space, smooth neutrosophic cotopological space, smooth neutrosophic closure, and smooth neutrosophic interior. Furthermore, some properties of these concepts will be investigated.

  4. Smooth Neutrosophic Topological Spaces

    OpenAIRE

    M. K. EL GAYYAR

    2016-01-01

    As a new branch of philosophy, the neutrosophy was presented by Smarandache in 1980. It was presented as the study of origin, nature, and scope of neutralities; as well as their interactions with different ideational spectra. The aim in this paper is to introduce the concepts of smooth neutrosophic topological space, smooth neutrosophic cotopological space, smooth neutrosophic closure, and smooth neutrosophic interior. Furthermore, some properties of these concepts will be investigated.

  5. Algorithm of GPS phase smoothing pseudo-range based on adaptive attenuation factor Kalman filtering

    Institute of Scientific and Technical Information of China (English)

    崔法毅; 解文肖

    2015-01-01

    The main purpose of carrier-phase smoothing of pseudo-range is to reduce large random errors in pseudo-range measurements by using high-precision carrier-phase measurements as supplementary information. To handle the unknown time-varying noise in GPS pseudo-range measurement, an adaptive attenuation factor Kalman filter (AFKF) based on a maximum a posteriori (MAP) time-varying noise statistical estimator is proposed. To avoid divergence of the filtering process, attenuation weighting factors make the estimator gradually forget old data while increasing the weight of new data. Combining the AFKF algorithm with the carrier-phase smoothing principle, simulation analysis was carried out on measured data from a tracking station of the International GNSS Service (IGS), and double-differenced and triple-differenced pseudo-ranges were used to compare the effects of the different algorithms intuitively. Experimental results show that, compared with the standard Kalman filter, the AFKF algorithm achieves good results in pseudo-range smoothing.
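The attenuation (forgetting) factor idea can be illustrated with a one-dimensional fading-memory Kalman filter. This is a generic textbook sketch, not the paper's AFKF: `lam`, `q`, and `r` are hypothetical defaults, and inflating the predicted covariance by `lam` is what down-weights old data relative to new measurements.

```python
import numpy as np

def fading_kalman(z, q=1e-4, r=1.0, lam=1.02):
    """1-D fading-memory Kalman filter on measurements z (sketch).

    lam > 1 inflates the predicted covariance at every step, so the
    filter gradually forgets stale data -- the role of the attenuation
    factor described above. State model is a random walk.
    """
    x, p = float(z[0]), 1.0
    out = []
    for zk in z:
        p = lam * p + q            # predict: covariance inflated by lam
        k = p / (p + r)            # Kalman gain
        x = x + k * (zk - x)       # update with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```

On a noisy constant signal the filtered estimate has a much smaller deviation from the true value than the raw measurements, while the forgetting factor keeps the gain from shrinking to zero.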

  6. Validating induced seismicity forecast models - Induced Seismicity Test Bench

    CERN Document Server

    Kiraly-Proag, Eszter; Gischig, Valentin; Wiemer, Stefan; Karvounis, Dimitrios; Doetsch, Joseph

    2016-01-01

    Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. In this study, we propose an Induced Seismicity Test Bench to test and rank such models; this test bench can be used for model development, model selection, and ensemble model building. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models: Shapiro and Smoothed Seismicity (SaSS) and Hydraulics and Seismics (HySei). These models incorporate a different mix of physics-based elements and stochastic representation of the induced sequences. Our results show that neither model is fully superior to the other. Generally, HySei forecasts the seismicity rate better after shut-in, but is only mediocre at forecasting the spatial distribution.

  7. Smooth K-Theory

    CERN Document Server

    Bunke, Ulrich

    2007-01-01

    We construct an analytic multiplicative model of smooth K-theory. We further introduce the notion of a smooth K-orientation of a proper submersion and define the associated push-forward which satisfies functoriality, compatibility with pull-back diagrams, and projection and bordism formulas. We construct a multiplicative lift of the Chern character from smooth K-theory to smooth rational cohomology and verify that the cohomological version of the Atiyah-Singer index theorem for families lifts to smooth cohomology.

  8. Seismic Creep

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Seismic creep is the constant or periodic movement on a fault as contrasted with the sudden erupture associated with an earthquake. It is a usually slow deformation...

  9. Seismic seiches

    Science.gov (United States)

    McGarr, Arthur; Gupta, Harsh K.

    2011-01-01

    Seismic seiche is a term first used by Kvale (1955) to discuss oscillations of lake levels in Norway and England caused by the Assam earthquake of August 15, 1950. This definition has since been generalized to apply to standing waves set up in closed, or partially closed, bodies of water including rivers, shipping channels, lakes, swimming pools and tanks due to the passage of seismic waves from an earthquake.

  10. Seismic Studies

    Energy Technology Data Exchange (ETDEWEB)

    R. Quittmeyer

    2006-09-25

    This technical work plan (TWP) describes the efforts to develop and confirm seismic ground motion inputs used for preclosure design and probabilistic safety analyses and to assess the postclosure performance of a repository at Yucca Mountain, Nevada. As part of the effort to develop seismic inputs, the TWP covers testing and analyses that provide the technical basis for inputs to the seismic ground-motion site-response model. The TWP also addresses preparation of a seismic methodology report for submission to the U.S. Nuclear Regulatory Commission (NRC). The activities discussed in this TWP are planned for fiscal years (FY) 2006 through 2008. Some of the work enhances the technical basis for previously developed seismic inputs and reduces uncertainties and conservatism used in previous analyses and modeling. These activities support the defense of a license application. Other activities provide new results that will support development of the preclosure safety case; these results directly support and will be included in the license application. Table 1 indicates which activities support the license application and which support licensing defense. The activities are listed in Section 1.2; the methods and approaches used to implement them are discussed in more detail in Section 2.2. Technical and performance objectives of this work scope are: (1) For annual ground motion exceedance probabilities appropriate for preclosure design analyses, provide site-specific seismic design acceleration response spectra for a range of damping values; strain-compatible soil properties; peak motions, strains, and curvatures as a function of depth; and time histories (acceleration, velocity, and displacement). Provide seismic design inputs for the waste emplacement level and for surface sites. Results should be consistent with the probabilistic seismic hazard analysis (PSHA) for Yucca Mountain and reflect, as appropriate, available knowledge on the limits to extreme ground motion.

  11. An Improved Adaptive Exponential Smoothing Model for Short-term Travel Time Forecasting of Urban Arterial Street

    Institute of Scientific and Technical Information of China (English)

    李志鹏; 虞鸿; 刘允才; 刘富强

    2008-01-01

    Short-term forecasting of travel time is essential for the success of intelligent transportation systems. In this paper, we review the state of the art in short-term traffic forecasting models and outline the basic ideas, related work, advantages, and disadvantages of each model. An improved adaptive exponential smoothing (IAES) model is also proposed to overcome the drawbacks of the previous adaptive exponential smoothing model. Comparative experiments are then carried out under normal and abnormal traffic conditions to evaluate the performance of four main branches of forecasting models on direct travel time data obtained by license plate matching (LPM). The experimental results show that each model has its own strengths and weaknesses. The forecasting performance of IAES is superior to the other models at shorter forecasting horizons (one- and two-step forecasting), and IAES is capable of dealing with all kinds of traffic conditions.
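The adaptive idea can be sketched with a classic scheme in the spirit of Trigg and Leach: the smoothing constant is recomputed each step from a tracking signal, so it rises when the forecast drifts. This is a generic textbook sketch, not the paper's IAES model; the parameter `phi` and the initialization are assumptions.

```python
def adaptive_exp_smoothing(series, phi=0.2):
    """Adaptive exponential smoothing (Trigg-Leach-style sketch).

    alpha is set each step to |smoothed error| / smoothed |error|,
    so the model reacts quickly to level shifts in travel time but
    smooths heavily when errors are random.
    """
    s = float(series[0])
    e_s, a_s = 0.0, 1e-9            # smoothed error, smoothed |error|
    forecasts = [s]
    for x in series[1:]:
        err = x - s
        e_s = phi * err + (1 - phi) * e_s
        a_s = phi * abs(err) + (1 - phi) * a_s
        alpha = min(1.0, abs(e_s) / a_s) if a_s > 0 else 0.5
        s = s + alpha * err          # adaptive update of the level
        forecasts.append(s)
    return forecasts
```

On a series with a sudden level shift, the tracking signal drives `alpha` toward 1 and the forecast locks onto the new level within a few steps, which is the behaviour that fixed-alpha exponential smoothing lacks.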

  12. Smooth sandwich gravitational waves

    CERN Document Server

    Podolsky, J

    1999-01-01

    Gravitational waves which are smooth and contain two asymptotically flat regions are constructed from the homogeneous pp-waves vacuum solution. Motion of free test particles is calculated explicitly and the limit to an impulsive wave is also considered.

  13. smooth-muscle activity

    African Journals Online (AJOL)

    with atropine could not abolish the effect of the venom on smooth muscle. ... cholinergic factor with acetylcholine was confirmed using radioimmunoassay of ... peripheral nervous antagonists on the venom action are still uncertain. The present.

  14. Nonequilibrium Flows with Smooth Particle Applied Mechanics.

    Science.gov (United States)

    Kum, Oyeon

    Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Bénard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail. Considerably fewer smooth particles are required than atoms in a corresponding molecular dynamics simulation.
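One of the weighting functions named above, Lucy's kernel, is easy to state explicitly. The sketch below (an illustration under assumed names, not the thesis code) uses the 1-D Lucy kernel, normalized to unit integral, in a brute-force smoothed-particle density sum.

```python
import numpy as np

def lucy_kernel(r, h):
    """Lucy's 1-D smoothing kernel with support |r| < h.

    The prefactor 5/(4h) normalizes the kernel so it integrates
    to 1 over [-h, h].
    """
    q = np.abs(r) / h
    w = (5.0 / (4.0 * h)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
    return np.where(q < 1.0, w, 0.0)

def sph_density(x, m, h):
    """Smoothed-particle density at each particle position: the
    kernel-weighted sum of neighbour masses (O(N^2) brute force)."""
    dx = x[:, None] - x[None, :]
    return (m[None, :] * lucy_kernel(dx, h)).sum(axis=1)
```

For uniformly spaced particles of spacing d and mass m, the interior density comes out close to m/d, which is a standard sanity check of kernel normalization and smoothing length.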

  15. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  16. USGS National Seismic Hazard Maps

    Science.gov (United States)

    Frankel, A.D.; Mueller, C.S.; Barnhard, T.P.; Leyendecker, E.V.; Wesson, R.L.; Harmsen, S.C.; Klein, F.W.; Perkins, D.M.; Dickman, N.C.; Hanson, S.L.; Hopper, M.G.

    2000-01-01

    The U.S. Geological Survey (USGS) recently completed new probabilistic seismic hazard maps for the United States, including Alaska and Hawaii. These hazard maps form the basis of the probabilistic component of the design maps used in the 1997 edition of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, prepared by the Building Seismic Safety Council and published by FEMA. The hazard maps depict peak horizontal ground acceleration and spectral response at 0.2, 0.3, and 1.0 sec periods, with 10%, 5%, and 2% probabilities of exceedance in 50 years, corresponding to return times of about 500, 1000, and 2500 years, respectively. In this paper we outline the methodology used to construct the hazard maps. There are three basic components to the maps. First, we use spatially smoothed historic seismicity as one portion of the hazard calculation. In this model, we apply the general observation that moderate and large earthquakes tend to occur near areas of previous small or moderate events, with some notable exceptions. Second, we consider large background source zones based on broad geologic criteria to quantify hazard in areas with little or no historic seismicity, but with the potential for generating large events. Third, we include the hazard from specific fault sources. We use about 450 faults in the western United States (WUS) and derive recurrence times from either geologic slip rates or the dating of pre-historic earthquakes from trenching of faults or other paleoseismic methods. Recurrence estimates for large earthquakes in New Madrid and Charleston, South Carolina, were taken from recent paleoliquefaction studies. We used logic trees to incorporate different seismicity models, fault recurrence models, Cascadia great earthquake scenarios, and ground-motion attenuation relations. We present disaggregation plots showing the contribution to hazard at four cities from potential earthquakes with various magnitudes and distances.
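The "spatially smoothed historic seismicity" ingredient can be sketched as Gaussian smoothing of epicentres onto a rate grid. The function below is a minimal fixed-bandwidth illustration with an assumed correlation distance `c`; it is not the USGS implementation, but it shows the count-conserving construction: each event spreads one count over the grid.

```python
import numpy as np

def smoothed_rate_grid(epicenters, xedges, yedges, c=10.0):
    """Fixed-bandwidth Gaussian smoothing of epicentres (sketch).

    Each event contributes exactly one count, spread over the grid
    with a Gaussian of correlation distance c (same units as the
    coordinates), so the grid total equals the number of events.
    """
    xc = 0.5 * (xedges[:-1] + xedges[1:])   # cell centres
    yc = 0.5 * (yedges[:-1] + yedges[1:])
    gx, gy = np.meshgrid(xc, yc, indexing="ij")
    rate = np.zeros(gx.shape)
    for ex, ey in epicenters:
        w = np.exp(-((gx - ex) ** 2 + (gy - ey) ** 2) / (2.0 * c ** 2))
        rate += w / w.sum()                  # normalize to one count
    return rate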

  17. Evaluation of induced seismicity forecast models in the Induced Seismicity Test Bench

    Science.gov (United States)

    Király, Eszter; Gischig, Valentin; Zechar, Jeremy; Doetsch, Joseph; Karvounis, Dimitrios; Wiemer, Stefan

    2016-04-01

    Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. Here, we propose an Induced Seismicity Test Bench to test and rank such models. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models that incorporate a different mix of physical understanding and stochastic representation of the induced sequences: Shapiro in Space (SiS) and Hydraulics and Seismics (HySei). SiS is based on three pillars: the seismicity rate is computed with help of the seismogenic index and a simple exponential decay of the seismicity; the magnitude distribution follows the Gutenberg-Richter relation; and seismicity is distributed in space based on smoothing seismicity during the learning period with 3D Gaussian kernels. The HySei model describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. Our results show that neither model is fully superior to the other. HySei forecasts the seismicity rate well, but is only mediocre at forecasting the spatial distribution. On the other hand, SiS forecasts the spatial distribution well but not the seismicity rate. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in. Ensemble models that combine HySei's rate forecast with SiS's spatial forecast outperform each individual model.

  18. Acquired smooth muscle hamartoma

    Directory of Open Access Journals (Sweden)

    Bari Arfan ul

    2006-01-01

    Full Text Available Smooth muscle hamartoma is an uncommon, usually congenital, cutaneous hyperplasia of the arrectores pilorum muscles. When it is acquired, it may be confused with Becker's nevus. We report a case of this rare tumor in a 19-year-old man. The disease started several years ago as multiple small skin-colored papules that subsequently coalesced to form a large soft plaque on the back of the left shoulder. The diagnosis of acquired smooth muscle hamartoma was confirmed on histopathology. The patient was reassured about the benign nature of the lesion and was not advised any treatment.

  19. Revealed smooth nontransitive preferences

    DEFF Research Database (Denmark)

    Keiding, Hans; Tvede, Mich

    2013-01-01

    consumption bundle, all strictly preferred bundles are more expensive than the observed bundle. Our main result is that data sets can be rationalized by a smooth nontransitive preference relation if and only if prices can be normalized such that the law of demand is satisfied. Market data sets consist of finitely many observations of price vectors, lists of individual incomes and aggregate demands. We apply our main result to characterize market data sets consistent with equilibrium behaviour of pure-exchange economies with smooth nontransitive consumers.

  20. Innovations in seismic tomography, their applications and induced seismic events in carbon sequestration

    Science.gov (United States)

    Li, Peng

    This dissertation presents two innovations in seismic tomography and a new discovery of induced seismic events associated with CO2 injection at an Enhanced Oil Recovery (EOR) site. The following are brief introductions to these three works. The first innovation is adaptive ambient seismic noise tomography (AANT). Traditional ambient noise tomography methods using regular grid nodes are often ill-posed because the inversion grids do not always represent the distribution of ray paths. Large grid spacing is usually used to reduce the number of inversion parameters, which may not be able to resolve small-scale velocity structure. We present a new adaptive tomography method with irregular grids that provides a few advantages over the traditional methods. First, irregular grids with different sizes and shapes can fit the ray distribution better, and the traditionally ill-posed problem can become more stable owing to the different parameterizations. Second, the data in areas with dense ray sampling are utilized more fully, so that the model resolution can be greatly improved. Both synthetic and real data are used to test the newly developed tomography algorithm. In synthetic data tests, we compare the resolution and stability of the traditional and adaptive methods. The results show that adaptive tomography is more stable and performs better in improving the resolution in areas with dense ray sampling. For real data, we extract the ambient noise signals of the seismic data near the Garlock Fault region, obtained from the Southern California Earthquake Data Center. The resulting group velocity of Rayleigh waves is well correlated with the geological structures. High-velocity anomalies are shown in the cold southern Sierra Nevada, the Tehachapi Mountains and the Western San Gabriel Mountains. The second innovation is local earthquake tomography with full topography (LETFT). In this work, we develop a new three-dimensional local earthquake tomography method.

  1. Seismic hazard estimation based on the distributed seismicity in northern China

    Institute of Scientific and Technical Information of China (English)

    YANG Yong; SHI Bao-ping; SUN Liang

    2008-01-01

    In this paper, we propose an alternative seismic hazard model based on distributed seismicity. The distributed-seismicity model does not require delineation of seismic source zones, which simplifies the methodology of probabilistic seismic hazard analysis. Based on the catalogue of devastating earthquakes, we established three seismicity models, derived the distribution of a-values in northern China using a Gaussian smoothing function, and calculated peak ground acceleration distributions for this area with 2%, 5% and 10% probability of exceedance in a 50-year period using three attenuation models, respectively. In general, the peak ground motion distribution patterns are consistent with the current seismic hazard map of China, but in some specific seismic zones, including Shanxi Province and the Shijiazhuang area, our results indicate somewhat higher peak ground motions, with zonation characteristics in agreement with the seismicity distribution patterns in these areas. Hazard curves have been developed for Beijing, Tianjin, Taiyuan, Tangshan, and Ji'nan, the metropolitan cities of northern China. The results show that Tangshan, Taiyuan, and Beijing have a higher seismic hazard than the other cities mentioned above.

  2. Geology of Smooth Ridge: MARS-IODP Cabled Observatory Site

    Science.gov (United States)

    Jordahl, K. A.; Paull, C. K.; Ussler, W.; Aiello, I. W.; Mitts, P.; Greene, H. G.; Gibbs, S.

    2004-12-01

    We document the geologic environment of Smooth Ridge, offshore central California, where the deep-water node associated with the MARS (Monterey Accelerated Research Site) scientific research cable is to be deployed. The MARS cable will provide internet connections and electric power at a node in 890 m of water in support of scientific observatory development and experiments. IODP boreholes are proposed which will be connected to the MARS cable. The deeply incised channels of Monterey and Soquel Canyons flank Smooth Ridge to the SW and NE, and the San Gregorio Fault marks its NW, upslope boundary. However, the top of Smooth Ridge, as its name implies, has only subdued bathymetric features. These include a subtle downslope channel and one distinct slump scar. A patch of acoustically reflective seafloor on the west side of the ridge, over 5 km from the MARS site, is associated with the only known large-scale biological community on the crest of Smooth Ridge. A reflection seismic survey conducted in 2003 with a high-resolution electrical sparker source reveals the stratigraphy of Smooth Ridge in unprecedented detail. In conjunction with previously collected, widely spaced multichannel seismic data, observations and samples obtained using remotely operated vehicle (ROV) dives, and piston cores, this new survey reveals the erosional and depositional history of Smooth Ridge. The continuity of seismic reflections indicates that nearly undisturbed deposition occurred until at least the mid-Miocene. Since that time, and especially since the upper Pliocene, the record is marked by unconformities and infill due to shifting channels, large slumps and landslides, and sediment waves. Several crossing seismic lines provide a quasi-three-dimensional view of a distinct slump scar's structure and reveal a history of multiple headwall failures. Other subsurface structures, including a much larger and older slump feature, have no bathymetric expression at all. 14C-dated piston cores

  3. Acquisition Footprint Attenuation Driven by Seismic Attributes

    Directory of Open Access Journals (Sweden)

    Cuellar-Urbano Mayra

    2014-04-01

    Full Text Available Acquisition footprint, one of the major problems that PEMEX faces in seismic imaging, is noise highly correlated to the geometric array of sources and receivers used for onshore and offshore seismic acquisition. It prevails in spite of measures taken during acquisition and data processing. This pattern, present throughout the image, is easily confused with geological features and misguides seismic attribute computation. In this work, we use seismic data from PEMEX Exploración y Producción to show the conditioning process for removing random and coherent noise using linear filters. Geometric attributes used in a workflow were computed to obtain an acquisition footprint noise model, which was then adaptively subtracted from the seismic data.
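The "adaptively subtract" step can be sketched in its simplest form: a least-squares matching scalar that scales the noise model before subtraction. Practical footprint removal uses windowed matching filters rather than one global scalar; the function below is only an assumed minimal illustration of the idea.

```python
import numpy as np

def adaptive_subtract(data, noise_model):
    """Least-squares adaptive subtraction of a noise model (sketch).

    A single matching scalar alpha minimizes ||data - alpha*noise||^2,
    leaving a residual that is orthogonal to the noise model. Real
    implementations solve this in local space/time windows.
    """
    alpha = np.vdot(noise_model, data) / np.vdot(noise_model, noise_model)
    return data - alpha * noise_model
```

Because the residual is orthogonal to the noise model by construction, any energy correlated with the footprint pattern is removed while uncorrelated signal is left untouched.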

  4. Smoothed Particle Hydrodynamic Simulator

    Energy Technology Data Exchange (ETDEWEB)

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  5. Nonequilibrium flows with smooth particle applied mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Kum, Oyeon [Univ. of California, Davis, CA (United States)

    1995-07-01

    Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail.

  6. SEISMIC GEOLOGY

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    20091465 Cai Xuelin (College of Earth Sciences, Chengdu University of Technology, Chengdu 610059, China); Cao Jiamin. Preliminary Study on the 3-D Crust Structure for the Longmen Lithosphere and the Genesis of the Huge Wenchuan Earthquake, Sichuan Province, China (Journal of Chengdu University of Technology, ISSN 1671-9727, CN 51-1634/N, 35(4), 2008, p.357-365, 8 illus., 39 refs.) Key words: deep-seated structures, large earthquakes, Longmenshan Fracture Zone. Based on a structural analysis of many seismic sounding profiles, there are two fault systems in the Longmen collisional orogenic belt, Sichuan Province, China. The two are clearly distinct yet closely related. One is a shallow fault system composed mainly of brittle shear zones in the surface crust, and the other is a deep fault system composed mainly of crust-mantle ductile shear zones cutting the Moho discontinuity. Based on the results of research on geological structure and seismic sounding profiles,

  7. Making Waves: Seismic Waves Activities and Demonstrations

    Science.gov (United States)

    Braile, S. J.; Braile, L. W.

    2011-12-01

    The nature and propagation of seismic waves are fundamental concepts necessary for understanding the exploration of Earth's interior structure and properties, plate tectonics, earthquakes, and seismic hazards. Investigating seismic waves is also an engaging approach to learning basic principles of the physics of waves and wave propagation. Several effective educational activities and demonstrations are available for teaching about seismic waves, including the stretching of a spring to demonstrate elasticity; slinky wave propagation activities for compressional, shear, Rayleigh and Love waves; the human wave activity to demonstrate P- and S- waves in solids and liquids; waves in water in a simple wave tank; seismic wave computer animations; simple shake table demonstrations of model building responses to seismic waves to illustrate earthquake damage to structures; processing and analysis of seismograms using free and easy to use software; and seismic wave simulation software for viewing wave propagation in a spherical Earth. The use of multiple methods for teaching about seismic waves is useful because it provides reinforcement of the fundamental concepts, is adaptable to variable classroom situations and diverse learning styles, and allows one or more methods to be used for authentic assessment. The methods described here have been used effectively with a broad range of audiences, including K-12 students and teachers, undergraduate students in introductory geosciences courses, and geosciences majors.

  8. A Generalized Eigensolver based on Smoothed Aggregation (GES-SA) for Initializing Smoothed Aggregation Multigrid (SA)

    Energy Technology Data Exchange (ETDEWEB)

    Brezina, M; Manteuffel, T; McCormick, S; Ruge, J; Sanders, G; Vassilevski, P S

    2007-05-31

    Consider the linear system Ax = b, where A is a large, sparse, real, symmetric, and positive definite matrix and b is a known vector. Solving this system for unknown vector x using a smoothed aggregation multigrid (SA) algorithm requires a characterization of the algebraically smooth error, meaning error that is poorly attenuated by the algorithm's relaxation process. For relaxation processes that are typically used in practice, algebraically smooth error corresponds to the near-nullspace of A. Therefore, having a good approximation to a minimal eigenvector is useful to characterize the algebraically smooth error when forming a linear SA solver. This paper discusses the details of a generalized eigensolver based on smoothed aggregation (GES-SA) that is designed to produce an approximation to a minimal eigenvector of A. GES-SA might be very useful as a standalone eigensolver for applications that desire an approximate minimal eigenvector, but the primary aim here is for GES-SA to produce an initial algebraically smooth component that may be used to either create a black-box SA solver or initiate the adaptive SA (αSA) process.
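The claim that relaxation leaves the near-nullspace untouched is easy to demonstrate numerically. The sketch below (a generic illustration, not GES-SA itself) applies weighted Jacobi to the 1-D Poisson matrix with b = 0, so the iterate is exactly the error: oscillatory components are damped quickly, and what survives is the algebraically smooth, near-nullspace component.

```python
import numpy as np

def relax_error(n=64, sweeps=50, omega=2.0 / 3.0):
    """Weighted-Jacobi relaxation on tridiag(-1, 2, -1) with b = 0.

    Starting from a random vector, the iterate equals the error.
    After `sweeps` passes, high-frequency modes are strongly damped
    and only smooth (small-eigenvalue) modes remain.
    """
    rng = np.random.default_rng(1)
    e = rng.standard_normal(n)
    for _ in range(sweeps):
        # A e for the Dirichlet Poisson matrix; D = 2I, so D^{-1} = 1/2
        ae = 2 * e - np.r_[e[1:], 0.0] - np.r_[0.0, e[:-1]]
        e = e - omega * ae / 2.0
    return e
```

Measuring smoothness as ||Ae|| / ||e||, the relaxed error is far smoother than the random start, which is exactly why SA needs a separate mechanism (such as GES-SA) to capture this surviving component.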

  9. Smooth Neighborhood Structures in a Smooth Topological Spaces

    Directory of Open Access Journals (Sweden)

    A. A. Ramadan

    2010-01-01

    Full Text Available Various concepts related to smooth topological spaces have been introduced, and relations among them studied, by several authors (Chattopadhyay, Ramadan, etc.). In this study, we present the notions of three sorts of neighborhood structures of a smooth topological space and give some of their properties, extending results by Ying to smooth topological spaces.

  10. Adaptive time stepping method for seismic liquefaction disasters and its control parameters

    Institute of Scientific and Technical Information of China (English)

    张西文; 唐小微; 姚霁菲; 杨令强

    2016-01-01

    Seismic liquefaction of saturated soil is a serious problem in the area of geotechnical earthquake engineering. In the numerical analysis of dynamic processes, calculation accuracy and efficiency are the two important indexes used to evaluate a numerical method. An adaptive time stepping method is proposed based on a solid-fluid coupled method and an elasto-plastic analysis platform. Using an estimation system of displacement errors, pore water pressure errors and mixed errors, the strategy of time step adjustment and the relevant control parameters are established. Through a sensitivity analysis of the control parameters, the error tolerance and the proportionality coefficient of the pore water pressure error are identified as the main control parameters, while the initial time step size and the lower and upper limits of the time step adjustment factor are identified as assistant parameters. The adaptive stepping method is then applied to the numerical analysis of a subway station located in a liquefiable area. The time histories of the uplift displacement and the excess pore water pressure ratio are obtained, which indicates the hazard of underground structure uplift induced by seismic liquefaction. In addition, the fixed stepping and adaptive stepping methods are compared, and it is found that using the adaptive stepping method saves computational cost without losing accuracy.
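A minimal sketch of error-controlled step adjustment of the kind described, with an error tolerance, a proportional adjustment rule, and lower/upper limits on the adjustment factor; the function, exponent and default values here are hypothetical, not the paper's.

```python
def adapt_step(dt, err, tol, p=2, f_min=0.5, f_max=2.0):
    # Adjust the time step from a local error estimate.
    # tol   : error tolerance (a main control parameter in the abstract)
    # p     : assumed order of the error estimate
    # f_min, f_max : lower/upper limits of the adjustment factor
    if err == 0:
        factor = f_max
    else:
        factor = 0.9 * (tol / err) ** (1.0 / (p + 1))  # proportional rule
    factor = min(max(factor, f_min), f_max)            # clamp the factor
    accepted = err <= tol                              # accept or redo the step
    return dt * factor, accepted
```

A step whose error exceeds the tolerance is shrunk and repeated; a very accurate step is enlarged, but never by more than the upper limit.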

  11. Smoothness of limit functors

    Indian Academy of Sciences (India)

    Benedictus Margaux

    2015-05-01

    Let $S$ be a scheme. Assume that we are given an action of the one-dimensional split torus $\\mathbb{G}_{m,S}$ on a smooth affine $S$-scheme $\\mathfrak{X}$. We consider the limit (also called attractor) subfunctor $\\mathfrak{X}_{}$ consisting of points whose orbit under the given action admits a limit at 0. We show that $\\mathfrak{X}_{}$ is representable by a smooth closed subscheme of $\\mathfrak{X}$. This result generalizes a theorem of Conrad et al. (Pseudo-reductive groups (2010) Cambridge Univ. Press), which treats the case when $\\mathfrak{X}$ is a smooth affine group and $\\mathbb{G}_{m,S}$ acts by group automorphisms of $\\mathfrak{X}$. It also occurs as a special case of a recent result by Drinfeld on the action of $\\mathbb{G}_{m,S}$ on algebraic spaces (Proposition 1.4.20 of Drinfeld V, On algebraic spaces with an action of $\\mathbb{G}_{m}$, preprint 2013) in case $S$ is of finite type over a field.

  12. Anti-smooth muscle antibody

    Science.gov (United States)

    Anti-smooth muscle antibody is a blood test that detects the ...

  13. Quantitative Seismic Amplitude Analysis

    OpenAIRE

    Dey, A. K.

    2011-01-01

    The Seismic Value Chain quantifies the cyclic interaction between seismic acquisition, imaging and reservoir characterization. Modern seismic innovation to address the global imbalance in hydrocarbon supply and demand requires such cyclic interaction of both feed-forward and feed-back processes. Currently, the seismic value chain paradigm is in a feed-forward mode. Modern seismic data now have the potential to yield the best images in terms of spatial resolution, amplitude accuracy, and incre...

  14. EMBEDDING FLOWS AND SMOOTH CONJUGACY

    Institute of Scientific and Technical Information of China (English)

    ZHANG Meirong; LI Weigu

    1997-01-01

    The authors use the functional equation for embedding vector fields to study smooth embedding flows of one-dimensional diffeomorphisms. The existence and uniqueness of smooth embedding flows and vector fields are proved. As an application of embedding flows, some classification results for local and global diffeomorphisms under smooth conjugacy are given.

  15. Variable-velocity FK filtering and adaptive polarization filtering of micro-seismic signals

    Institute of Scientific and Technical Information of China (English)

    朱卫星; 张春晓; 邱铁成; 修金磊; 朱雪梅

    2009-01-01

    Considering the characteristics of micro-seismic signals, this article designs a combined method of variable-velocity FK filtering and adaptive polarization filtering. When designing the FK filter factor, the apparent-velocity range of the signal within each sliding time window is determined by computing cross-correlation coefficients between adjacent traces, realizing variable-velocity FK filtering. When the windowed signal is transformed by a two-dimensional FFT, the time window is zero-padded in both the vertical (time) and horizontal (space) directions to remove the aliasing and leakage created by the two-dimensional transform. When designing the adaptive polarization filter factor, the problem that the expected polarization direction of waves in a complex wavefield is uncertain is addressed by computing the maximum cross-correlation coefficient of the polarization projections of adjacent traces to determine the tracking component of the wave; this tracking component is then taken as the expected direction in the conventional polarization filter factor, realizing adaptive polarization filtering. Results on a theoretical model and on field data show distinct improvement.
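The FK pass-band idea, with zero-padding of the window before the two-dimensional FFT to reduce aliasing and leakage, can be sketched as follows. This is a generic implementation under stated assumptions: the sliding-window velocity estimation by cross-correlation is omitted, and the apparent-velocity pass band is supplied directly.

```python
import numpy as np

def fk_velocity_filter(data, dt, dx, v_min, v_max):
    # Pass-band FK filter keeping apparent velocities |f/k| in [v_min, v_max].
    # data: 2-D array (n_time_samples, n_traces).
    nt, nx = data.shape
    ntp, nxp = 2 * nt, 2 * nx                 # zero-pad in time and space
    spec = np.fft.fft2(data, s=(ntp, nxp))
    f = np.fft.fftfreq(ntp, d=dt)[:, None]    # temporal frequency (Hz)
    k = np.fft.fftfreq(nxp, d=dx)[None, :]    # spatial wavenumber (1/m)
    with np.errstate(divide="ignore", invalid="ignore"):
        v_app = np.abs(f) / np.abs(k)         # apparent velocity per (f, k) cell
    mask = (v_app >= v_min) & (v_app <= v_max)
    out = np.fft.ifft2(spec * mask).real
    return out[:nt, :nx]                      # crop back to the original window
```

A plane wave with apparent velocity inside the band survives the filter almost unchanged, while the same wave is strongly attenuated by a band that excludes its velocity.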

  16. Seismic link at plate boundary

    Indian Academy of Sciences (India)

    Faical Ramdani; Omar Kettani; Benaissa Tadili

    2015-06-01

    Seismic triggering at plate boundaries has a very complex nature that includes seismic events at varying distances. The spatial orientation of triggering cannot be reduced to sequences from the main shocks. Seismic waves propagate at all times in all directions, particularly in highly active zones. No direct evidence can be obtained regarding which earthquakes trigger the shocks. The first step is to determine the potential linked zones where triggering may occur; the second is to determine the causality between events and their triggered shocks. The spatial orientation of the links between events is established from pre-ordered networks and the adapted dependence of the spatio-temporal occurrence of earthquakes. Based on a coefficient of synchronous seismic activity between grid-cell pairs, we derive a link network at each threshold. Links at high thresholds are tested using the coherence of the time series to determine causality and the related orientation. The resulting link orientations at the plate boundary indicate that causal triggering seems to be localized along a major fault, acts as a stress transfer between two major faults, and runs parallel to the extension of the geothermal area.

  17. Quantifying Similarity in Seismic Polarizations

    Science.gov (United States)

    Eaton, D. W. S.; Jones, J. P.; Caffagni, E.

    2015-12-01

    Measuring similarity in seismic attributes can help identify tremor, low S/N signals, and converted or reflected phases, in addition to diagnosing site noise and sensor misalignment in arrays. Polarization analysis is a widely accepted method for studying the orientation and directional characteristics of seismic phases via computed attributes, but similarity is ordinarily discussed using qualitative comparisons with reference values. Here we introduce a technique for quantitative polarization similarity that uses weighted histograms computed in short, overlapping time windows, drawing on methods adapted from the image processing and computer vision literature. Our method accounts for ambiguity in azimuth and incidence angle and variations in signal-to-noise (S/N) ratio. Using records of the Mw=8.3 Sea of Okhotsk earthquake from CNSN broadband sensors in British Columbia and Yukon Territory, Canada, and vertical borehole array data from a monitoring experiment at Hoadley gas field, central Alberta, Canada, we demonstrate that our method is robust to station spacing. Discrete wavelet analysis extends polarization similarity to the time-frequency domain in a straightforward way. Because histogram distance metrics are bounded on [0, 1], clustering allows empirical time-frequency separation of seismic phase arrivals on single-station three-component records. Array processing for automatic seismic phase classification may be possible using subspace clustering of polarization similarity, but efficient algorithms are required to reduce the dimensionality.
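A minimal sketch of one bounded histogram similarity from the image-processing literature, histogram intersection, applied to hypothetical azimuth histograms; the paper's exact metric and weighting scheme are not specified here.

```python
import numpy as np

def histogram_intersection(h1, h2):
    # Similarity of two histograms, bounded on [0, 1]:
    # 1.0 for identical normalized histograms, 0.0 for disjoint ones.
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())

# Hypothetical azimuth histograms (18 bins of 10 degrees) from two windows
rng1 = np.random.default_rng(1)
rng2 = np.random.default_rng(2)
a = np.histogram(rng1.uniform(80, 100, 500), bins=18, range=(0, 180))[0]
b = np.histogram(rng2.uniform(80, 100, 500), bins=18, range=(0, 180))[0]
sim = histogram_intersection(a, b)   # near 1 for similar azimuth distributions
```

Because the value is bounded on [0, 1], it can feed directly into the clustering step described above.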

  18. Studies on seismic waves

    Institute of Scientific and Technical Information of China (English)

    张海明; 陈晓非

    2003-01-01

    The development of seismic wave studies in China over the past four years is reviewed. The discussion covers several aspects: seismic wave propagation in laterally homogeneous media, laterally heterogeneous media, and anisotropic and porous media; surface waves and seismic wave inversion; and seismic wave studies in prospecting and logging problems. Suggested priorities for current seismic wave research are the development of highly efficient numerical methods and their application to the excitation and propagation of seismic waves in complex media and to strong ground motion, which will form a foundation for refined earthquake hazard analysis and prediction.

  19. The Europa Seismic Package (ESP): 2. Meeting the Environmental Challenge

    Science.gov (United States)

    Kedar, S.; Pike, W. T.; Standley, I. M.; Calcutt, S. B.; Bowles, N.; Blaes, B.; Irom, F.; Mojarradi, M.; Vance, S. D.; Bills, B. G.

    2016-10-01

    We outline a pathway for adapting the SP microseismometer delivered to InSight to provide a Europa Seismic Package that overcomes the three significant challenges in the environmental conditions, specifically gravity, temperature and radiation.

  20. Adaptive Polarization Analysis and Filtering of Station Seismic Data in Time-Frequency Domain%台站地震资料的时频域自适应极化分析和滤波

    Institute of Scientific and Technical Information of China (English)

    马见青; 李庆春; 王卫东; 王美丁; 李春兰

    2016-01-01

    The method applies a time-frequency transform, computes instantaneous polarization attributes by eigenanalysis, and designs a filtering algorithm in the time-frequency domain to achieve polarization filtering of multicomponent seismic signals. A special feature of this method is that the length of the time window of the covariance matrix is determined by the instantaneous frequency of the multicomponent seismic data, so it can adapt to the dominant period of the desired signal. Moreover, it calculates polarization parameters at each time-frequency point and no longer needs to perform interpolation. It is particularly accurate in processing signals with overlapping waveforms or frequencies in the time or frequency domain. The results of processing model data and real three-component seismograms show that this method has very high clarity, high resolution, and practicality in the analysis and processing of seismograms. This representation enables the detection of dispersion in polarization attributes, which can be further exploited to infer physical characteristics of the medium under investigation. Moreover, it offers the ability to distinguish between attributes that belong to different coherent events that may overlap in time but have different frequency contents separated by time-dependent frequency cutoffs. Identifying and separating different wave types is made possible by designing filters that operate in the time-frequency domain. Attributes such as azimuth, dip, and signed ellipticity can also be used to improve the filtering algorithms.
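The covariance-matrix eigenanalysis step can be sketched in the plain time domain (the paper works point-by-point in the time-frequency domain, so this is only the core attribute computation; component ordering and attribute conventions here are illustrative assumptions).

```python
import numpy as np

def polarization_attributes(window):
    # Polarization attributes of a three-component window (rows: Z, N, E)
    # by eigenanalysis of its 3x3 covariance matrix.
    C = np.cov(window)                  # 3x3 covariance matrix
    w, V = np.linalg.eigh(C)            # eigenvalues in ascending order
    v = V[:, -1]                        # principal polarization direction
    # Rectilinearity ~1 for linear particle motion, ~0 for isotropic noise
    rectilinearity = 1.0 - (w[0] + w[1]) / (2.0 * w[2])
    azimuth = np.degrees(np.arctan2(v[2], v[1]))  # from N toward E (sign-ambiguous)
    incidence = np.degrees(np.arccos(abs(v[0])))  # angle from vertical
    return rectilinearity, azimuth, incidence
```

A purely rectilinear signal gives rectilinearity near 1, while three-component white noise gives a low value, which is the basis for polarization filtering.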

  1. Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm

    Science.gov (United States)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen

    2017-02-01

    Seismic diffractions carry valuable information about subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory to construct imaging models, which imposes a smoothness constraint on imaging conditions. In fact, diffractors account for only a small portion of an imaging model and have discontinuous characteristics. In mathematics, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term enforces a least-squares fit between modeled and observed diffractions and the L1 term imposes sparsity on the solution. To solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method improves the focusing of diffractions and reduces migration artifacts.
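A generic stand-in for the sparsity-constrained model: iteratively reweighted L1 minimization, with each weighted subproblem solved by plain ISTA rather than the paper's adaptive reweighting homotopy algorithm. All parameter values are illustrative.

```python
import numpy as np

def reweighted_l1(A, b, lam=0.01, n_outer=5, n_inner=300, eps=1e-3):
    # Solve min ||Ax - b||^2 + lam * sum(w_i |x_i|) with reweighting:
    # after each outer pass, small coefficients are penalized more,
    # sharpening the sparsity of the solution.
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    w = np.ones(n)
    for _ in range(n_outer):
        for _ in range(n_inner):           # ISTA on the weighted problem
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        w = 1.0 / (np.abs(x) + eps)        # reweight: favor large coefficients
    return x
```

With a Gaussian sensing matrix and a few nonzero "diffractors", the reweighted solution typically recovers the sparse support even from an underdetermined system.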

  2. Modeling of seismic data in the downward continuation approach

    NARCIS (Netherlands)

    Stolk, Christiaan C.; Hoop, de Maarten V.

    2005-01-01

    Seismic data are commonly modeled by a high-frequency single scattering approximation. This amounts to a linearization in the medium coefficient about a smooth background. The discontinuities are contained in the medium perturbation. The high-frequency part of the wavefield in the background medium

  3. Seismic inverse scattering in the downward continuation approach

    NARCIS (Netherlands)

    Stolk, C.C.; Hoop, de M.V.

    2006-01-01

    Seismic data are commonly modeled by a linearization around a smooth background medium in combination with a high frequency approximation. The perturbation of the medium coefficient is assumed to contain the discontinuities. This leads to two inverse problems, first the linearized inverse problem fo

  4. Classification of smooth Fano polytopes

    DEFF Research Database (Denmark)

    Øbro, Mikkel

    A simplicial lattice polytope containing the origin in the interior is called a smooth Fano polytope if the vertices of every facet form a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth Fano polytopes up to isomorphism. A smooth Fano -polytope can have at most vertices; in the case of vertices an explicit classification is known. The thesis contains the classification in the case of vertices. Classifications of smooth Fano -polytopes for fixed exist only for . In the thesis an algorithm for the classification of smooth Fano -polytopes for any given is presented. The algorithm has been implemented and used to obtain the complete classification for .

  5. Quantitative Seismic Amplitude Analysis

    NARCIS (Netherlands)

    Dey, A.K.

    2011-01-01

    The Seismic Value Chain quantifies the cyclic interaction between seismic acquisition, imaging and reservoir characterization. Modern seismic innovation to address the global imbalance in hydrocarbon supply and demand requires such cyclic interaction of both feed-forward and feed-back processes.

  7. Robotization in Seismic Acquisition

    NARCIS (Netherlands)

    Blacquière, G.; Berkhout, A.J.

    2013-01-01

    The amount of sources and detectors in the seismic method follows "Moore’s Law of seismic data acquisition", i.e., it increases approximately by a factor of 10 every 10 years. Therefore automation is unavoidable, leading to robotization of seismic data acquisition. Recently, we introduced a new

  8. Efficient adaptive fuzzy control scheme

    NARCIS (Netherlands)

    Papp, Z.; Driessen, B.J.F.

    1995-01-01

    The paper presents an adaptive nonlinear (state-) feedback control structure, where the nonlinearities are implemented as smooth fuzzy mappings defined as rule sets. The fine-tuning and adaptation of the controller are realized by an indirect adaptive scheme, which modifies the parameters of the fuzzy

  9. Seismic base isolation by nonlinear mode localization

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y. [University of Illinois, Department of Civil and Environmental Engineering, Urbana, IL (United States); Washington University, Department of Civil and Environmental Engineering, St. Louis, MO (United States); McFarland, D.M. [University of Illinois, Department of Aerospace Engineering, Urbana, IL (United States); Vakakis, A.F. [National Technical University of Athens, Division of Mechanics (Greece); Bergman, L.A. [University of Illinois, Department of Mechanical and Industrial Engineering, Urbana, IL (United States)

    2005-03-01

    In this paper, the performance of a nonlinear base-isolation system, comprised of a nonlinearly sprung subfoundation tuned in a 1:1 internal resonance to a flexible mode of the linear primary structure to be isolated, is examined. The application of nonlinear localization to seismic isolation distinguishes this study from other base-isolation studies in the literature. Under the condition of third-order smooth stiffness nonlinearity, it is shown that a localized nonlinear normal mode (NNM) is induced in the system, which confines energy to the subfoundation and away from the primary or main structure. This is followed by a numerical analysis wherein the smooth nonlinearity is replaced by clearance nonlinearity, and the system is excited by ground motions representing near-field seismic events. The performance of the nonlinear system is compared with that of the corresponding linear system through simulation, and the sensitivity of the isolation system to several design parameters is analyzed. These simulations confirm the existence of the localized NNM, and show that the introduction of simple clearance nonlinearity significantly reduces the seismic energy transmitted to the main structure, resulting in significant attenuation in the response. (orig.)

  10. Long-term Evolution of Seismicity Rates in California Geothermal Fields

    Science.gov (United States)

    Trugman, D. T.; Shearer, P. M.; Borsa, A. A.; Fialko, Y. A.

    2015-12-01

    The temporal evolution of seismicity rates within geothermal fields provides important observational constraints on the ways in which rocks respond to natural and anthropogenic loading. We develop an iterative, regularized inversion procedure to partition the observed seismicity rate into two primary components: (1) the interaction seismicity rate due to earthquake-earthquake triggering, and (2) the time-varying background seismicity rate controlled by other time-dependent stresses, including anthropogenic forcing. We parameterize our seismicity model using an Epidemic-Type Aftershock Sequence (ETAS) framework with a background seismicity rate that varies smoothly with time. We apply our methodology to study long-term changes in seismicity rates at the Geysers and Salton Sea geothermal fields in California. At the Geysers, we find that the background seismicity rate is highly correlated with fluid injection. Seismicity at the Geysers has experienced a rate increase of approximately 50% since year 2000 and exhibits strong seasonal fluctuations, both of which can be explained by changes in fluid injection following the completion of the Santa Rosa pipeline. At the Salton Sea, the background seismicity rate has remained relatively stable since 1990, with short-term fluctuations that are not obviously modulated by fluid fluxes related to the operation of the geothermal field. The differences in the field-wide seismicity responses of the Geysers and Salton Sea to geothermal plant operation may reflect differences in in-situ reservoir conditions and local tectonics, indicating that induced seismicity may not be solely a function of fluid injection and withdrawal.
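The parameterization can be illustrated with the standard ETAS conditional intensity: a background rate (a constant, or a callable varying smoothly with time as in the study) plus Omori-law triggering from past events. This is a minimal sketch with illustrative, unfitted parameter values.

```python
import numpy as np

def etas_rate(t, events, mu, K=0.1, c=0.01, p=1.1, alpha=1.0, m0=2.0):
    # Total seismicity rate at time t under an ETAS model:
    #   lambda(t) = mu(t) + sum over past events of
    #               K * exp(alpha * (m_i - m0)) * (t - t_i + c)^(-p)
    # events: iterable of (time, magnitude) pairs; mu may be a constant
    # background rate or a callable for a time-varying background.
    lam = mu(t) if callable(mu) else mu
    for ti, mi in events:
        if ti < t:  # only earlier events can trigger
            lam += K * np.exp(alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return lam
```

Separating the two terms is exactly the partition described above: the Omori sum is the interaction rate, and mu(t) absorbs anthropogenic forcing such as fluid injection.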

  11. The Topological Effects of Smoothing.

    Science.gov (United States)

    Shafii, S; Dillard, S E; Hlawitschka, M; Hamann, B

    2012-01-01

    Scientific data sets generated by numerical simulations or experimental measurements often contain a substantial amount of noise. Smoothing the data removes noise but can have potentially drastic effects on the qualitative nature of the data, thereby influencing its characterization and visualization via topological analysis, for example. We propose a method to track topological changes throughout the smoothing process. As a preprocessing step, we oversmooth the data and collect a list of topological events, specifically the creation and destruction of extremal points. During rendering, it is possible to select the number of topological events by interactively manipulating a merging parameter. The result that a specific amount of smoothing has on the topology of the data is illustrated using a topology-derived transfer function that relates region connectivity of the smoothed data to the original regions of the unsmoothed data. This approach enables visual as well as quantitative analysis of the topological effects of smoothing.
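The idea of oversmoothing while collecting topological events can be sketched in one dimension by counting extrema after each smoothing pass; a toy 3-point moving average stands in for the paper's smoothing and merge-tracking machinery.

```python
import numpy as np

def count_extrema(y):
    # Number of interior local maxima and minima of a 1-D signal.
    s = np.sign(np.diff(y))
    s = s[s != 0]                         # ignore flat steps
    return int(np.sum(s[1:] != s[:-1]))   # sign changes = extrema

def smooth_once(y):
    # One pass of 3-point moving-average smoothing (endpoints kept fixed).
    z = y.copy()
    z[1:-1] = (y[:-2] + y[1:-1] + y[2:]) / 3.0
    return z

# Oversmooth and record "topological events": the extrema count per level
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.5 * rng.standard_normal(200)
counts = []
for _ in range(100):
    counts.append(count_extrema(y))
    y = smooth_once(y)
```

The recorded counts play the role of the event list: noise extrema are destroyed early, while the few large-scale extrema of the underlying signal persist across many smoothing levels.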

  12. Conservative smoothing versus artificial viscosity

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, C.; Hicks, D.L. [Michigan Technological Univ., Houghton, MI (United States); Swegle, J.W. [Sandia National Labs., Albuquerque, NM (United States). Solid and Structural Mechanics Dept.

    1994-08-01

    This report was stimulated by some recent investigations of S.P.H. (Smoothed Particle Hydrodynamics method). Solid dynamics computations with S.P.H. show symptoms of instabilities which are not eliminated by artificial viscosities. Both analysis and experiment indicate that conservative smoothing eliminates the instabilities in S.P.H. computations which artificial viscosities cannot. Questions were raised as to whether conservative smoothing might smear solutions more than artificial viscosity. Conservative smoothing, properly used, can produce more accurate solutions than the von Neumann-Richtmyer-Landshoff artificial viscosity which has been the standard for many years. The authors illustrate this using the vNR scheme on a test problem with known exact solution involving a shock collision in an ideal gas. They show that the norms of the errors with conservative smoothing are significantly smaller than the norms of the errors with artificial viscosity.
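Conservative smoothing can be illustrated with a flux-form pairwise exchange, which preserves the conserved total exactly, unlike a generic filter. This is a one-dimensional sketch; eta is a hypothetical smoothing coefficient, not a value from the report.

```python
import numpy as np

def conservative_smooth(u, eta=0.25):
    # One pass of conservative smoothing: each neighbor pair exchanges
    # a fraction eta of its difference. Written in flux form, so every
    # amount removed from one cell is added to its neighbor, and the
    # total sum(u) (the conserved quantity) is preserved exactly.
    flux = eta * np.diff(u)
    v = np.asarray(u, dtype=float).copy()
    v[:-1] += flux   # each cell gains from the pair on its right...
    v[1:] -= flux    # ...and the right-hand cell loses the same amount
    return v
```

With eta = 0.25 the interior update is the classic (1/4, 1/2, 1/4) stencil, which damps the short-wavelength oscillations that drive the S.P.H. instability while conserving the total.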

  13. Smoothness in Binomial Edge Ideals

    Directory of Open Access Journals (Sweden)

    Hamid Damadi

    2016-06-01

    Full Text Available In this paper we study some geometric properties of the algebraic set associated to the binomial edge ideal of a graph. We study the singularity and smoothness of the algebraic set associated to the binomial edge ideal of a graph. Some of these algebraic sets are irreducible and some of them are reducible. If every irreducible component of the algebraic set is smooth, we call the graph an edge smooth graph; otherwise it is called an edge singular graph. We show that complete graphs are edge smooth and introduce two conditions such that the graph G is edge singular if and only if it satisfies these conditions. Then, it is shown that cycles and most trees are edge singular. In addition, it is proved that complete bipartite graphs are edge smooth.

  14. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    Science.gov (United States)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

    Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but many challenges remain in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Anti-noise tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is promising for land seismic FWI, because it can reconstruct the low-frequency information, lower the dominant frequency in the adjoint source, and resist noise.

  15. Angola Seismicity MAP

    Science.gov (United States)

    Neto, F. A. P.; Franca, G.

    2014-12-01

    The purpose of this work was to study and document the natural seismicity of Angola and to establish the first seismic database for the country, facilitating consultation and searching of information on its seismic activity. The study was conducted from reports produced by the National Institute of Meteorology and Geophysics (INAMET) from 1968 to 2014, with emphasis on the work presented by Moreira (1968), who defined six seismogenic zones from macroseismic data. The most important of these is the Zone of Sá da Bandeira (Lubango)-Chibemba-Oncócua-Iona, covering the epicentral Quihita and Iona regions. It is geologically characterized by a transcontinental tectono-magmatic structure activated in the Mesozoic, with the emplacement of a wide variety of intrusive rocks of ultrabasic-alkaline, basic and alkaline composition, kimberlites and carbonatites, strongly marked by intense tectonism and cut by several faults and fractures (locally called the corredor de Lucapa). The earthquake of May 9, 1948 reached intensity VI on the Mercalli-Sieberg scale (MCS) in the locality of Quihita, and in the seismically active Iona zone the main shock of January 15, 1964 reached grade VI-VII. The other five zones have lower but non-negligible seismicity rates; they are: Cassongue-Ganda-Massano de Amorim; Lola-Quilengues-Caluquembe; the Gago Coutinho zone; Cuima-Cachingues-Cambândua; and the Upper Zambezi zone. We also analyzed technical reports on the seismicity of the middle Kwanza region produced by Hidroproekt (GAMEK), as well as international seismic bulletins of the International Seismological Centre (ISC) and the United States Geological Survey (USGS); these data served for instrumental location of the epicenters. All compiled information made possible the creation of the first database of seismic data for Angola, the preparation of the seismicity map with reconfirmation of the main seismic zones defined by Moreira (1968), and the identification of a new seismic

  16. Seismic design and analysis of nuclear power plant structures

    Institute of Scientific and Technical Information of China (English)

    Pentti Varpasuo

    2013-01-01

    The seismic design and analysis of a nuclear power plant (NPP) begin with the seismic hazard assessment and the development of the design ground motion for the site. The following steps are needed for the seismic hazard assessment and design ground motion development: a. development of a regional seismo-tectonic model with seismic source areas within a 500 km radius centered on the site; b. development of strong-motion prediction equations; c. logic tree development for taking uncertainties into account and quantifying the seismic hazard; d. development of uniform hazard response spectra for ground motion at the site; e. simulation of acceleration time histories compatible with the uniform hazard response spectra. Phase two in the seismic design of NPP structures is the analysis of the structural response to the design ground motion. This second phase consists of the following steps: a. development of structural models of the plant buildings; b. development of a soil model underneath the plant buildings for soil-structure interaction response analysis; c. determination of in-structure response spectra of the plant buildings for the equipment response analysis. In the third phase of the seismic design and analysis, the equipment is analyzed on the basis of the in-structure response spectra. For this purpose, structural models of the mechanical components and piping in the plant are set up. In the large 3D structural models used today, the heaviest equipment of the primary coolant circuit is included in the structural model of the reactor building. In the fourth phase, the electrical, automation and control equipment is seismically qualified with the aid of the in-structure spectra developed in phase two, using large three-axial shaking tables. For this purpose, smoothed envelope spectra for the calculated in-structure spectra are constructed and acceleration time histories are fitted to these smoothed envelope spectra.

  17. Smooth analysis in Banach spaces

    CERN Document Server

    Hájek, Petr

    2014-01-01

    This book is about the subject of higher smoothness in separable real Banach spaces. It brings together several angles of view on polynomials, both in the finite and infinite setting. A rather thorough and systematic view of the more recent results, and of the authors' work, is also given. The book revolves around two main broad questions: What is the best smoothness of a given Banach space, and what are its structural consequences? How large is a supply of smooth functions in the sense of approximating continuous functions in the uniform topology, i.e. how does the Stone-Weierstrass theorem generalize into in

  18. SMOOTHING BY CONVEX QUADRATIC PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    Bing-sheng He; Yu-mei Wang

    2005-01-01

    In this paper, we study the relaxed smoothing problems with general closed convex constraints. It is pointed out that such problems can be converted to a convex quadratic minimization problem for which there are good programs in software libraries.

  19. Simplified seismic risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pellissetti, Manuel; Klapp, Ulrich [AREVA NP GmbH, Erlangen (Germany)

    2011-07-01

Within the context of probabilistic safety analysis (PSA) for nuclear power plants (NPPs), seismic risk assessment has the purpose of demonstrating that the contribution of seismic events to overall risk is not excessive. The most suitable vehicle for seismic risk assessment is a full-scope seismic PSA (SPSA), in which the frequency of core damage due to seismic events is estimated. An alternative method is seismic margin assessment (SMA), which aims at showing sufficient margin between the site-specific safe shutdown earthquake (SSE) and the actual capacity of the plant. Both methods are based on system analysis (fault trees and event trees) and hence require fragility estimates for safety-relevant systems, structures and components (SSCs). If the seismic conditions at the specific site of a plant are not very demanding, then it is reasonable to expect that the risk due to seismic events is low. In such cases, the cost-benefit ratio of performing a full-scale, site-specific SPSA or SMA will be excessive, considering the ultimate objective of seismic risk analysis. Rather, it will be more rational to rely on a less comprehensive analysis, used as a basis for demonstrating that the risk due to seismic events is not excessive. The present paper addresses such a simplified approach to seismic risk assessment, which is used in AREVA to: - estimate seismic risk in early design stages, - identify needs to extend the design basis, - define a reasonable level of seismic risk analysis. Starting from a conservative estimate of the overall plant capacity, in terms of the HCLPF (High Confidence of Low Probability of Failure), and utilizing a generic value for the variability, the seismic risk is estimated by convolution of the hazard and the fragility curve. Critical importance is attached to the selection of the plant capacity in terms of the HCLPF, without performing extensive fragility calculations of seismically relevant SSCs. A suitable basis
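
The convolution of the site hazard curve with a fragility curve anchored at a HCLPF value can be sketched as below. The HCLPF of 0.3 g, the generic variability beta = 0.4, the factor 2.326 (the 1% point of the standard normal, one common HCLPF convention), and the power-law hazard curve are all illustrative assumptions, not values from the paper.

```python
import math
import numpy as np

def lognorm_cdf(a, median, beta):
    """P(capacity <= a) for a lognormal fragility curve."""
    return 0.5 * (1.0 + math.erf(math.log(a / median) / (beta * math.sqrt(2.0))))

hclpf, beta = 0.3, 0.4               # illustrative plant capacity (g) and variability
am = hclpf * math.exp(2.326 * beta)  # median capacity back-calculated from the HCLPF

# Illustrative power-law site hazard curve H(a) = k * a^-b (annual exceedance frequency).
k, b = 1e-5, 2.0
a = np.linspace(0.05, 3.0, 600)
H = k * a ** -b
Pf = np.array([lognorm_cdf(x, am, beta) for x in a])

# Annual failure frequency: integrate the fragility against the hazard density,
# risk = -integral Pf(a) dH/da da (trapezoidal rule; H is decreasing in a).
risk = -np.sum(0.5 * (Pf[1:] + Pf[:-1]) * np.diff(H))
```

The result is bounded above by the exceedance frequency of the lowest ground-motion level considered, since the fragility never exceeds one.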

  20. Wetting on smooth micropatterned defects

    OpenAIRE

    Debuisson, Damien; Dufour, Renaud; Senez, Vincent; Arscott, Steve

    2011-01-01

    We develop a model which predicts the contact angle hysteresis introduced by smooth micropatterned defects. The defects are modeled by a smooth function and the contact angle hysteresis is explained using a tangent line solution. When the liquid micro-meniscus touches both sides of the defect simultaneously, depinning of the contact line occurs. The defects are fabricated using a photoresist and experimental results confirm the model. An important point is that the model is scale-independent,...

  1. Exotic smoothness and quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Asselmeyer-Maluga, T, E-mail: torsten.asselmeyer-maluga@dlr.d [German Aerospace Center, Berlin, Germany and Loyola University, New Orleans, LA (United States)

    2010-08-21

    Since the first work on exotic smoothness in physics, it was folklore to assume a direct influence of exotic smoothness to quantum gravity. Thus, the negative result of Duston (2009 arXiv:0911.4068) was a surprise. A closer look into the semi-classical approach uncovered the implicit assumption of a close connection between geometry and smoothness structure. But both structures, geometry and smoothness, are independent of each other. In this paper we calculate the 'smoothness structure' part of the path integral in quantum gravity assuming that the 'sum over geometries' is already given. For that purpose we use the knot surgery of Fintushel and Stern applied to the class E(n) of elliptic surfaces. We mainly focus our attention to the K3 surfaces E(2). Then we assume that every exotic smoothness structure of the K3 surface can be generated by knot or link surgery in the manner of Fintushel and Stern. The results are applied to the calculation of expectation values. Here we discuss the two observables, volume and Wilson loop, for the construction of an exotic 4-manifold using the knot 5{sub 2} and the Whitehead link Wh. By using Mostow rigidity, we obtain a topological contribution to the expectation value of the volume. Furthermore, we obtain a justification of area quantization.

  2. Exotic Smoothness and Quantum Gravity

    CERN Document Server

    Asselmeyer-Maluga, Torsten

    2010-01-01

    Since the first work on exotic smoothness in physics, it was folklore to assume a direct influence of exotic smoothness to quantum gravity. Thus, the negative result of Duston (arXiv:0911.4068) was a surprise. A closer look into the semi-classical approach uncovered the implicit assumption of a close connection between geometry and smoothness structure. But both structures, geometry and smoothness, are independent of each other. In this paper we calculate the "smoothness structure" part of the path integral in quantum gravity assuming that the "sum over geometries" is already given. For that purpose we use the knot surgery of Fintushel and Stern applied to the class E(n) of elliptic surfaces. We mainly focus our attention to the K3 surfaces E(2). Then we assume that every exotic smoothness structure of the K3 surface can be generated by knot or link surgery a la Fintushel and Stern. The results are applied to the calculation of expectation values. Here we discuss the two observables, volume and Wilson loop, f...

  3. Seismic Catalogue and Seismic Network in Haiti

    Science.gov (United States)

    Belizaire, D.; Benito, B.; Carreño, E.; Meneses, C.; Huerfano, V.; Polanco, E.; McCormack, D.

    2013-05-01

The destructive earthquake that occurred on January 12, 2010 in Haiti highlighted the country's lack of preparedness for seismic phenomena. At the moment of the earthquake, no seismic network was operating in the country, and only partial knowledge of past seismicity was possible, owing to the absence of a national catalogue. After the 2010 earthquake, advances began towards the installation of a national network and the elaboration of a seismic catalogue providing the necessary input for seismic hazard studies. This paper presents the state of the work carried out on both aspects. First, a seismic catalogue has been built, compiling data on historical and instrumental events that occurred on Hispaniola Island and its surroundings, in the frame of the SISMO-HAITI project, supported by the Technical University of Madrid (UPM) and developed in cooperation with the Observatoire National de l'Environnement et de la Vulnérabilité of Haiti (ONEV). Data from different agencies all over the world were gathered, with a relevant role played by the Dominican Republic and Puerto Rico seismological services, which provided local data from their national networks. Almost 30,000 events recorded in the area from 1551 to 2011 were compiled in a first catalogue, among them 7700 events with Mw ranging between 4.0 and 8.3. Since different magnitude scales were used by the different agencies (Ms, mb, MD, ML), this first catalogue was affected by important heterogeneity in the size parameter. It was then homogenized to moment magnitude Mw using the empirical equations developed by Bonzoni et al (2011) for the eastern Caribbean. At present, this is the most exhaustive catalogue of the country, although it is difficult to assess its degree of completeness. Regarding the seismic network, 3 stations were installed just after the 2010 earthquake by the Canadian Government. The data were sent by telemetry through the Canadian system CARINA. In 2012, the Spanish IGN together
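
The homogenization step can be sketched as a per-scale conversion table applied event by event. The linear coefficients below are invented placeholders for illustration only, not the actual Bonzoni et al. (2011) regressions:

```python
# Hypothetical magnitude-homogenization sketch: convert every event to
# moment magnitude Mw with a per-scale empirical relation.
CONVERSIONS = {
    "Mw": lambda m: m,                # already moment magnitude
    "Ms": lambda m: 0.67 * m + 2.12,  # placeholder coefficients
    "mb": lambda m: 0.85 * m + 1.03,  # placeholder coefficients
}

def homogenize(events):
    """events: iterable of (magnitude, scale) pairs -> list of Mw values."""
    return [CONVERSIONS[scale](m) for m, scale in events]

catalogue = [(6.1, "Ms"), (5.0, "mb"), (7.0, "Mw")]
mw = homogenize(catalogue)
```

A real homogenization would also propagate the conversion uncertainty into the catalogue, since it affects completeness and rate estimates downstream.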

  4. Smooth quantum gravity: Exotic smoothness and Quantum gravity

    CERN Document Server

    Asselmeyer-Maluga, Torsten

    2016-01-01

Over the last two decades, many unexpected relations between exotic smoothness, e.g. exotic $\mathbb{R}^{4}$, and quantum field theory were found. Some of these relations are rooted in a relation to superstring theory and quantum gravity. Therefore one would expect that exotic smoothness is directly related to the quantization of general relativity. In this article we will support this conjecture and develop a new approach to quantum gravity called "smooth quantum gravity" by using smooth 4-manifolds with an exotic smoothness structure. In particular we discuss the appearance of a wildly embedded 3-manifold which we identify with a quantum state. Furthermore, we analyze this quantum state by using foliation theory and relate it to an element in an operator algebra. Then we describe a set of geometric, non-commutative operators, the skein algebra, which can be used to determine the geometry of a 3-manifold. This operator algebra can be understood as a deformation quantization of the classical Poisson alge...

  5. Imaging seismic reflections

    NARCIS (Netherlands)

    Op 't Root, Timotheus Johannes Petrus Maria

    2011-01-01

    The goal of reflection seismic imaging is making images of the Earth subsurface using surface measurements of reflected seismic waves. Besides the position and orientation of subsurface reflecting interfaces it is a challenge to recover the size or amplitude of the discontinuities. We investigate tw

  6. SOAR Telescope seismic performance II: seismic mitigation

    Science.gov (United States)

    Elias, Jonathan H.; Muñoz, Freddy; Warner, Michael; Rivera, Rossano; Martínez, Manuel

    2016-07-01

    We describe design modifications to the SOAR telescope intended to reduce the impact of future major earthquakes, based on the facility's experience during recent events, most notably the September 2015 Illapel earthquake. Specific modifications include a redesign of the encoder systems for both azimuth and elevation, seismic trigger for the emergency stop system, and additional protections for the telescope secondary mirror system. The secondary mirror protection may combine measures to reduce amplification of seismic vibration and "fail-safe" components within the assembly. The status of these upgrades is presented.

  7. Seismic texture classification. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Vinther, R.

    1997-12-31

The seismic texture classification method is a seismic attribute that can both recognize general reflectivity styles and locate variations from them. The seismic texture classification performs a statistical analysis of the seismic section (or volume) aiming at describing the reflectivity. Based on a set of reference reflectivities, the seismic textures are classified. The result of the seismic texture classification is a display of seismic texture categories showing both the styles of reflectivity from the reference set and interpolations and extrapolations from these. The display is interpreted as statistical variations in the seismic data. The seismic texture classification is applied to seismic sections and volumes from the Danish North Sea representing both horizontal stratifications and salt diapirs. The attribute succeeded in recognizing both the general structure of successions and variations from these. Moreover, the seismic texture classification is not only able to display variations in prospective areas (1-7 sec. TWT) but can also be applied to deep seismic sections. The seismic texture classification is tested on a deep reflection seismic section (13-18 sec. TWT) from the Baltic Sea. Applied to this section, the seismic texture classification succeeded in locating the Moho, which could not be located using conventional interpretation tools. The seismic texture classification is a seismic attribute which can display general reflectivity styles and deviations from these and enhance variations not found by conventional interpretation tools. (LN)
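
A minimal sketch of the reference-based classification idea: summarize each trace segment by a few reflectivity statistics and assign it to the nearest reference class. The feature choice, class names, and reference vectors below are invented for illustration; the report's actual statistics are not specified here.

```python
import numpy as np

def texture_features(segment):
    """Crude reflectivity summary: mean, spread, and sample-to-sample roughness."""
    return np.array([segment.mean(), segment.std(), np.abs(np.diff(segment)).mean()])

def classify(segment, references):
    """Assign the segment to the reference class with the closest feature vector."""
    feats = texture_features(segment)
    dists = {name: np.linalg.norm(feats - ref) for name, ref in references.items()}
    return min(dists, key=dists.get)

# Hypothetical reference classes (feature vectors in the same order as above).
refs = {
    "parallel": np.array([0.0, 1.0, 0.2]),
    "chaotic": np.array([0.0, 1.0, 1.5]),
}
smooth_seg = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))  # slowly varying trace
label = classify(smooth_seg, refs)
```

The smoothly oscillating segment has low sample-to-sample roughness, so it lands in the "parallel" class rather than "chaotic".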

  8. Time Critical Isosurface Refinement and Smoothing

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, V.; Bajaj, C.L.

    2000-07-10

    Multi-resolution data-structures and algorithms are key in Visualization to achieve real-time interaction with large data-sets. Research has been primarily focused on the off-line construction of such representations mostly using decimation schemes. Drawbacks of this class of approaches include: (i) the inability to maintain interactivity when the displayed surface changes frequently, (ii) inability to control the global geometry of the embedding (no self-intersections) of any approximated level of detail of the output surface. In this paper we introduce a technique for on-line construction and smoothing of progressive isosurfaces. Our hybrid approach combines the flexibility of a progressive multi-resolution representation with the advantages of a recursive sub-division scheme. Our main contributions are: (i) a progressive algorithm that builds a multi-resolution surface by successive refinements so that a coarse representation of the output is generated as soon as a coarse representation of the input is provided, (ii) application of the same scheme to smooth the surface by means of a 3D recursive subdivision rule, (iii) a multi-resolution representation where any adaptively selected level of detail surface is guaranteed to be free of self-intersections.

  9. The Smooth-Coated Otter in Nepal

    Directory of Open Access Journals (Sweden)

    Houghton S.J.

    1987-03-01

Full Text Available This study has shown that the Smooth-coated otter is common along the length of the Naryani river and that it relies heavily on fish. It also suggests that their feeding habits are sufficiently flexible to adapt to local variations in food supply. A comparison of river banks suggests that human activities decrease the availability of suitable habitat and that over-fishing decreases the food supply. Extensive deforestation in the hills causes flooding and increases the turbidity of the lowland rivers, changing both the aquatic environment and the river's topography. Pollution resulting from chemical discharge is an increasingly important problem in Nepal. Without an effective management plan controlling these threats, animal species dependent on the riverine system may rapidly decrease in number or even disappear permanently.

  10. Selective Smoothed Finite Element Method

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The paper examines three selective schemes for the smoothed finite element method (SFEM) which was formulated by incorporating a cell-wise strain smoothing operation into the standard compatible finite element method (FEM). These selective SFEM schemes were formulated based on three selective integration FEM schemes with similar properties found between the number of smoothing cells in the SFEM and the number of Gaussian integration points in the FEM. Both scheme 1 and scheme 2 are free of nearly incompressible locking, but scheme 2 is more general and gives better results than scheme 1. In addition, scheme 2 can be applied to anisotropic and nonlinear situations, while scheme 1 can only be applied to isotropic and linear situations. Scheme 3 is free of shear locking. This scheme can be applied to plate and shell problems. Results of the numerical study show that the selective SFEM schemes give more accurate results than the FEM schemes.

  11. α-compactness in smooth topological spaces

    Directory of Open Access Journals (Sweden)

    Chun-Kee Park

    2003-01-01

Full Text Available We introduce the concepts of smooth α-closure and smooth α-interior of a fuzzy set, which are generalizations of the smooth closure and smooth interior of a fuzzy set defined by Demirci (1997), and obtain some of their structural properties.

  12. Anticipative Stochastic Differential Equations with Non-smooth Diffusion Coefficient

    Institute of Scientific and Technical Information of China (English)

    Zong Xia LIANG

    2006-01-01

In this paper we prove the existence and uniqueness of the solutions to the one-dimensional linear stochastic differential equation with Skorohod integral X_t(ω) = η(ω) + ∫₀ᵗ a_s X_s(ω) dW_s + ∫₀ᵗ b_s X_s(ω) ds, t ∈ [0, 1], where (W_s) is the canonical Wiener process defined on the standard Wiener space (W, H, μ), a is non-smooth and adapted, but η and b may be anticipating with respect to the filtration generated by (W_s). The intention of the paper is to eliminate the regularity assumption, in the Malliavin sense, on the diffusion coefficient a made in the existing literature. The idea is to approximate the non-smooth diffusion coefficient a by smooth ones.

  13. Induced and Natural Seismicity: Earthquake Hazards and Risks in Ohio:

    Science.gov (United States)

    Besana-Ostman, G. M.; Worstall, R.; Tomastik, T.; Simmers, R.

    2013-12-01

To adapt to the increasing need to regulate all operations related to both the Utica and Marcellus shale plays within the state, ODNR has recently strengthened its regulatory capability through stricter permit requirements, additional human resources, and improved infrastructure. ODNR's efforts on seismic risk reduction related to induced seismicity led to stricter regulations and many infrastructure changes, particularly related to Class II wells. Changes to permit requirements and more seismic monitoring stations were implemented, together with additional injection-data reporting from selected Class II well operators. Considering the possible risks related to seismic events in a region with relatively low seismicity, correlations between the limited seismic data and injection volume information were undertaken. Interestingly, initial results showed some indications of both plugging and fracturing episodes. Real-time data transmission from seismic stations and the availability of injection volume data enabled ODNR to interact with operators and manage wells dynamically. Furthermore, initial geomorphic and structural analyses indicated possible active faults, oriented NE-SW, in the northern and western portions of the state. The newly mapped structures imply possibly larger earthquakes in the region and consequently higher seismic risks. With the above-mentioned recent changes, ODNR has made a critical improvement to its principal regulatory role in the state for oil and gas operations, as well as an important contribution to the state's seismic risk reduction endeavors. Close collaboration with other government agencies and the public, and working together with the well operators, enhanced ODNR's capability to build a safety culture and achieve further public and industry participation towards a safer environment. Keywords: induced seismicity, injection wells, seismic risks

  14. Wetting on smooth micropatterned defects

    CERN Document Server

    Debuisson, Damien; Senez, Vincent; Arscott, Steve

    2011-01-01

    We develop a model which predicts the contact angle hysteresis introduced by smooth micropatterned defects. The defects are modeled by a smooth function and the contact angle hysteresis is explained using a tangent line solution. When the liquid micro-meniscus touches both sides of the defect simultaneously, depinning of the contact line occurs. The defects are fabricated using a photoresist and experimental results confirm the model. An important point is that the model is scale-independent, i.e. the contact angle hysteresis is dependent on the aspect ratio of the function, not on its absolute size; this could have implications for natural surface defects.

  15. Assessment of Damage in Seismically Excited RC-Structures from a Single Measured Response

    DEFF Research Database (Denmark)

    Skjærbæk, P. S.; Nielsen, Søren R. K.; Cakmak, A. S.

    1996-01-01

A method has been developed for the localization of structural damage of substructures of seismically excited RC-structures using only the ground surface acceleration time series and a single response time series. From the response, the smoothed two lowest eigenfrequencies are estimated.

  16. Seismic comprehensive forecast based on modified projection pursuit regression

    Institute of Scientific and Technical Information of China (English)

    Anxu Wu; Xiangdong Lin; Changsheng Jiang; Yongxian Zhang; Xiaodong Zhang; Mingxiao Li; Pingan Li

    2009-01-01

In research on projection pursuit for comprehensive seismic forecasting, the projection pursuit regression (PPR) algorithm is one of the most applicable methods. Generally, however, the algorithm structure of PPR is very complicated: with many rounds of partial smooth regression, it involves a large amount of calculation and complicated extrapolation, so it is easily trapped in a partial (local) solution. Based on the features of the PPR algorithm, the following solutions are given to address these shortcomings of the PPR calculation: the projection direction is optimized by using particle swarm optimization instead of the Gauss-Newton algorithm, and the optimization is simplified by fitting the ridge function with Hermite polynomials instead of piecewise linear regression. The overall optimal ridge function can then be obtained without grouping the parameter optimization. The modeling capability and calculation accuracy of the projection pursuit method are tested by numerical simulation on the basis of particle swarm optimization and Hermite polynomials, and the method is then applied to comprehensive seismic forecasting models of multi-dimensional seismic time series and general disordered seismic samples. The calculations and analysis show that the projection pursuit model in this paper is simple, fast and effective, and it achieves satisfactory results in real comprehensive seismic forecasting, so it can be regarded as a comprehensive analysis method for seismic forecasting.
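
A toy single-ridge PPR term can illustrate the structure being optimized: project the predictors onto a direction w, then fit the ridge function g on the projected scores. Here an ordinary least-squares polynomial stands in for the Hermite-polynomial fit, the projection direction is given rather than found by particle swarm optimization, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # synthetic predictors
w_true = np.array([0.6, 0.8, 0.0])            # unit projection direction (assumed known)
y = np.tanh(X @ w_true) + 0.01 * rng.normal(size=200)  # ridge signal + noise

def ridge_fit(X, y, w, degree=5):
    """Fit one ridge term g(w^T x) with a least-squares polynomial g."""
    t = X @ (w / np.linalg.norm(w))           # projected scores
    coeffs = np.polyfit(t, y, degree)         # polynomial stand-in for the ridge function
    return np.polyval(coeffs, t)

pred = ridge_fit(X, y, w_true)
rss = np.mean((y - pred) ** 2)                # small residual when w is well chosen
```

In full PPR, the residual after each ridge term is fed to the next one, and the direction w itself is what the particle swarm search would optimize.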

  17. The Seismic Wavefield

    Science.gov (United States)

    Kennett, B. L. N.

    2002-12-01

    The two volumes of The Seismic Wavefield are a comprehensive guide to the understanding of seismograms in terms of physical propagation processes within the Earth. The focus is on the observation of earthquakes and man-made sources on all scales, for both body waves and surface waves. Volume I provides a general introduction and a development of the theoretical background for seismic waves. Volume II looks at the way in which observed seismograms relate to the propagation processes. Volume II also discusses local and regional seismic events, global wave propagation, and the three-dimensional Earth.

  18. The 2014 Update of the United States National Seismic Hazard Maps

    Science.gov (United States)

    Petersen, M. D.; Mueller, C. S.; Moschetti, M. P.; Haller, K. M.; Zeng, Y.; Harmsen, S.; Frankel, A. D.; Rezaeian, S.; Powers, P.; Field, E. H.; Boyd, O. S.; Chen, R.; Rukstales, K. S.; Wheeler, R. L.; Luco, N.; Williams, R. A.; Olson, A.

    2013-12-01

The USGS is in the process of updating the U.S. National Seismic Hazard Maps for the lower 48 states; the update will be considered for inclusion in future building codes, risk assessments, and other public policy applications. These seismic hazard maps are based on our assessment of the best available science at the time of the update and incorporate a broad range of scientific models and parameters. The maps were discussed in regional workshops held across the U.S., reviewed by our Steering Committee, and available on-line during a 45-day period for public comment. The USGS hazard maps depict earthquake ground-shaking exceedance levels for various probabilities over a 50-year time period and are based on calculations at several hundred thousand sites across the U.S. Inputs to the hazard maps are based on scientific estimates of the locations, magnitudes, and rates of earthquakes, as well as ground motion models describing each earthquake's ground shaking. We model rates of earthquakes either on known faults or as seismicity-based background earthquakes that account for unknown faults and an incomplete fault inventory. Probabilities of ground shaking are calculated from ground motion models that estimate the likely shaking caused by an earthquake. Several new datasets and models have been developed since the 2008 update of the maps. For the Central and Eastern U.S. we implemented a new moment magnitude catalog and completeness estimates, updated the maximum magnitude distribution, updated and tested the smoothing algorithms for adaptive and fixed-radius methods, extended the fault model, including the sizes and rates of New Madrid Seismic Zone earthquakes, considered induced earthquakes, and included updated and new ground motion models along with a new weighting scheme. In the Intermountain West we implemented new smoothing algorithms, fault geometry for normal faults, the Wasatch fault model, and fault slip rates based on models obtained by inverting geodetic and geologic
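
The fixed-radius smoothing mentioned above can be sketched as a Gaussian kernel sum over past epicenters; an adaptive variant would instead set each event's bandwidth from the distance to its k-th nearest neighbour. The grid, the two epicentres, and the 10 km bandwidth below are all illustrative values on a flat plane, not data from the update.

```python
import numpy as np

def smoothed_rate(grid_x, grid_y, quakes, sigma_km=10.0):
    """Sum a normalized 2-D Gaussian kernel centred on each epicentre over the grid."""
    rate = np.zeros((grid_y.size, grid_x.size))
    gx, gy = np.meshgrid(grid_x, grid_y)
    for qx, qy in quakes:
        d2 = (gx - qx) ** 2 + (gy - qy) ** 2
        rate += np.exp(-d2 / (2.0 * sigma_km**2)) / (2.0 * np.pi * sigma_km**2)
    return rate

# 100 km x 100 km grid with 2 km spacing; two illustrative epicentres.
x = np.linspace(0.0, 100.0, 51)
y = np.linspace(0.0, 100.0, 51)
rate = smoothed_rate(x, y, quakes=[(30.0, 40.0), (60.0, 70.0)])

# Each kernel integrates to ~1 event, so the smoothed rate sums to ~2 events.
total = rate.sum() * (x[1] - x[0]) * (y[1] - y[0])
```

Dividing by the catalogue duration would turn the smoothed event count into an annual background rate density.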

  19. Research on seismic stress triggering

    Institute of Scientific and Technical Information of China (English)

    万永革; 吴忠良; 周公威; 黄静; 秦立新

    2002-01-01

This paper briefly reviews the basic theory of seismic stress triggering. Recent developments in seismic stress triggering are reviewed from the perspectives of static and dynamic stress triggering, the application of viscoelastic models to seismic stress triggering, the relation between earthquake triggering and volcanic eruptions or explosions, and other explanations of earthquake triggering. Some suggestions for further study of seismic stress triggering in the near future are also given.

  20. Dynamic smoothing of nanocomposite films

    NARCIS (Netherlands)

    Pei, Y.T.; Turkin, A; Chen, C.Q.; Shaha, K.P.; Vainshtein, D.; Hosson, J.Th.M. De

    2010-01-01

    In contrast to the commonly observed dynamic roughening in film growth we have observed dynamic smoothing in the growth of diamondlike-carbon nanocomposite (TiC/a-C) films up to 1.5 mu m thickness. Analytical and numerical simulations, based on the Edwards-Wilkinson model and the Mullins model, visu

  1. Nonlinear smoothing for random fields

    NARCIS (Netherlands)

    Aihara, Shin Ichi; Bagchi, Arunabha

    1995-01-01

    Stochastic nonlinear elliptic partial differential equations with white noise disturbances are studied in the countably additive measure set up. Introducing the Onsager-Machlup function to the system model, the smoothing problem for maximizing the modified likelihood functional is solved and the exp

  2. Calibrating an updated smoothed particle hydrodynamics scheme within gcd+

    Science.gov (United States)

    Kawata, D.; Okamoto, T.; Gibson, B. K.; Barnes, D. J.; Cen, R.

    2013-01-01

We adapt a modern scheme of smoothed particle hydrodynamics (SPH) to our tree N-body/SPH galactic chemodynamics code gcd+. The applied scheme includes implementations of the artificial viscosity switch and artificial thermal conductivity proposed by Morris & Monaghan, Rosswog & Price and Price to model discontinuities and Kelvin-Helmholtz instabilities more accurately. We first present hydrodynamics test simulations and contrast the results with runs undertaken without the artificial viscosity switch or thermal conduction. In addition, we explore different levels of smoothing by adopting larger or smaller smoothing lengths, i.e. a larger or smaller number of neighbour particles, Nnb. We demonstrate that the new version of gcd+ is capable of modelling Kelvin-Helmholtz instabilities to a similar level as the mesh code athena. From the Gresho vortex, point-like explosion and self-similar collapse tests, we conclude that setting the smoothing length to keep Nnb as high as ~58 is preferable to adopting smaller smoothing lengths. We present our optimized parameter sets from the hydrodynamics tests.
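
The neighbour-number criterion (keeping Nnb near ~58) can be sketched as a fixed-point iteration on a single particle's smoothing length h: since the neighbour count scales roughly as h cubed at uniform density, rescaling h by the cube root of the target-to-count ratio converges on the desired Nnb. Positions are random illustrative data, and the kernel support radius of 2h is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
pos = rng.uniform(0.0, 1.0, size=(1000, 3))  # illustrative particle positions
target_nnb = 58

def smoothing_length(p, pos, target, h=0.05, iters=30):
    """Iterate h until ~target particles lie inside the kernel support (radius 2h)."""
    for _ in range(iters):
        n = int(np.sum(np.linalg.norm(pos - p, axis=1) < 2.0 * h))
        if n > 0:
            h *= (target / n) ** (1.0 / 3.0)  # n scales ~ h^3 at uniform density
        else:
            h *= 2.0                          # no neighbours yet: grow the support
    return h

h = smoothing_length(pos[0], pos, target_nnb)
n_final = int(np.sum(np.linalg.norm(pos[0] - pos, axis=1) < 2.0 * h))
```

A production SPH code does this for every particle each timestep, usually with a Newton iteration coupling h to the kernel-weighted density rather than a raw neighbour count.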

  3. Seismic migration in generalized coordinates

    Science.gov (United States)

    Arias, C.; Duque, L. F.

    2017-06-01

Reverse time migration (RTM) is a technique widely used nowadays to obtain images of the earth's sub-surface using artificially produced seismic waves. This technique was developed for zones with a flat surface, and when it is applied to zones with rugged topography some corrections must be introduced to adapt it, which can produce defects in the final image called artifacts. We introduce a simple mathematical map that transforms a scenario with rugged topography into a flat one. The three steps of RTM can be applied in a way similar to the conventional ones simply by changing the Laplacian in the acoustic wave equation for a generalized one. We present a test of this technique using the Canadian foothills SEG velocity model.
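
The flattening map can be sketched as a per-column shift of the depth axis: the new coordinate z' measures depth below the local surface, so the rugged topography maps to the plane z' = 0. The grid sizes and elevations below are illustrative, and this simple shift is only the change of variables; the abstract's generalized Laplacian is what accounts for it in the wave equation.

```python
import numpy as np

nx, nz, dz = 5, 8, 10.0
topo = np.array([0.0, 10.0, 20.0, 10.0, 0.0])  # illustrative surface elevation (m)
depth = np.arange(nz) * dz                      # flat-grid depths z' below the surface

# Physical depth sampled by each flattened grid point: z = z' + topo(x),
# so row i of z_phys follows the topography under surface position i.
z_phys = depth[None, :] + topo[:, None]
```

Wavefields computed on the (x, z') grid can then be mapped back to physical depth by the same column-wise shift.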

  4. Seismic Fault Preserving Diffusion

    CERN Document Server

    Lavialle, Olivier; Germain, Christian; Donias, Marc; Guillon, Sebastien; Keskes, Naamen; Berthoumieu, Yannick

    2007-01-01

    This paper focuses on the denoising and enhancing of 3-D reflection seismic data. We propose a pre-processing step based on a non linear diffusion filtering leading to a better detection of seismic faults. The non linear diffusion approaches are based on the definition of a partial differential equation that allows us to simplify the images without blurring relevant details or discontinuities. Computing the structure tensor which provides information on the local orientation of the geological layers, we propose to drive the diffusion along these layers using a new approach called SFPD (Seismic Fault Preserving Diffusion). In SFPD, the eigenvalues of the tensor are fixed according to a confidence measure that takes into account the regularity of the local seismic structure. Results on both synthesized and real 3-D blocks show the efficiency of the proposed approach.

  5. Seismic fault preserving diffusion

    Science.gov (United States)

    Lavialle, Olivier; Pop, Sorin; Germain, Christian; Donias, Marc; Guillon, Sebastien; Keskes, Naamen; Berthoumieu, Yannick

    2007-02-01

    This paper focuses on the denoising and enhancing of 3-D reflection seismic data. We propose a pre-processing step based on a non-linear diffusion filtering leading to a better detection of seismic faults. The non-linear diffusion approaches are based on the definition of a partial differential equation that allows us to simplify the images without blurring relevant details or discontinuities. Computing the structure tensor which provides information on the local orientation of the geological layers, we propose to drive the diffusion along these layers using a new approach called SFPD (Seismic Fault Preserving Diffusion). In SFPD, the eigenvalues of the tensor are fixed according to a confidence measure that takes into account the regularity of the local seismic structure. Results on both synthesized and real 3-D blocks show the efficiency of the proposed approach.
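
The structure tensor at the heart of SFPD can be sketched in NumPy: smooth the outer products of the image gradient, and the resulting tensor's eigenvectors give the local orientation of the geological layers along which the diffusion is driven. The box filter below stands in for the usual Gaussian smoothing, and the synthetic 2-D section is purely illustrative.

```python
import numpy as np

def structure_tensor(img, rho=2):
    """Return the smoothed tensor components (Jxx, Jxy, Jyy) of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))  # gradients along rows (y) and columns (x)

    def box(a):
        # Simple box filter of half-width rho (a Gaussian would normally be used).
        k = 2 * rho + 1
        pad = np.pad(a, rho, mode="edge")
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = pad[i:i + k, j:j + k].mean()
        return out

    return box(gx * gx), box(gx * gy), box(gy * gy)

# Synthetic section with horizontal layering: intensity varies only with depth (rows).
section = np.tile(np.sin(np.linspace(0.0, 6.0 * np.pi, 64))[:, None], (1, 64))
jxx, jxy, jyy = structure_tensor(section)
# For horizontal layers the vertical gradient dominates, so Jyy >> Jxx; the
# eigenvector of the smaller eigenvalue points along the layers.
```

SFPD then fixes the diffusion-tensor eigenvalues from a confidence measure on this orientation field, diffusing strongly along layers and weakly across faults.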

  6. BUILDING 341 Seismic Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Halle, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-06-15

    The Seismic Evaluation of Building 341 located at Lawrence Livermore National Laboratory in Livermore, California has been completed. The subject building consists of a main building, Increment 1, and two smaller additions; Increments 2 and 3.

  7. Seismic facies; Facies sismicas

    Energy Technology Data Exchange (ETDEWEB)

    Johann, Paulo Roberto Schroeder [PETROBRAS, Rio de Janeiro, RJ (Brazil). Exploracao e Producao Corporativo. Gerencia de Reservas e Reservatorios]. E-mail: johann@petrobras.com.br

    2004-11-01

The method presented herein describes seismic facies as representations of curves and vertical matrixes of lithotype proportions. Seismic facies are of great interest for capturing the spatial (3D) distributions of regionalized variables, as for example lithotypes, sedimentary facies groups and/or porosity and/or other reservoir properties, and for integrating them into 3D geological modeling (Johann, 1997). Thus, when interpreted as curves or vertical matrixes of proportions, seismic facies provide a very important tool for the structural analysis of regionalized variables. The matrixes have an important application in geostatistical modeling. In addition, this approach provides results at the depth and scale of the well profiles; that is, seismic data are integrated into the characterization of reservoirs in depth maps and in high-resolution maps. The link between the different technical phases involved in classifying segments of seismic traces into groups of predefined traces is described herein for two approaches: a) unsupervised and b) supervised by the geological knowledge available on the studied reservoir. The multivariate statistical methods used to obtain the maps of the seismic facies units are interesting tools for providing a lithostratigraphic and petrophysical understanding of a petroleum reservoir. In the case studied, these seismic facies units are interpreted as representative of the depositional system as a part of the Namorado Turbiditic System, Namorado Field, Campos Basin. Within the scope of PRAVAP 19 (Programa Estrategico de Recuperacao Avancada de Petroleo - Strategic Program of Advanced Petroleum Recovery), some research work on algorithms is underway to select new optimized attributes for seismic facies application. One example is the extraction of attributes based on the wavelet transform and on time-frequency analysis methodology. PRAVAP is also carrying out research work on an

  8. Seismic Consequence Abstraction

    Energy Technology Data Exchange (ETDEWEB)

    M. Gross

    2004-10-25

    The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]).

  9. Very Smooth Points of Spaces of Operators

    Indian Academy of Sciences (India)

    T S S R K Rao

    2003-02-01

    In this paper we study very smooth points of Banach spaces, with special emphasis on spaces of operators. We show that when the space of compact operators is an M-ideal in the space of bounded operators, a very smooth operator T attains its norm at a unique vector x (up to a constant multiple) and T(x) is a very smooth point of the range space. We show that if, for every equivalent norm on a Banach space, the dual unit ball has a very smooth point, then the space has the Radon–Nikodým property. We give an example of a smooth Banach space without any very smooth points.

  10. Seismicity in Northern Germany

    Science.gov (United States)

    Bischoff, Monika; Gestermann, Nicolai; Plenefisch, Thomas; Bönnemann, Christian

    2013-04-01

    Northern Germany is a region of low tectonic activity, where only few, low-magnitude earthquakes occur. The driving tectonic processes are not yet well understood. In addition, seismic events during the last decade concentrated at the borders of the natural gas fields. The source depths of these events are shallow, in the depth range of the gas reservoirs. Based on these observations, a causal relationship between the seismicity near gas fields and gas production is likely. The strongest of these earthquakes had a magnitude of 4.5 and occurred near Rotenburg in 2004. Smaller seismic events were also clearly felt by the public and stimulated the discussion on the underlying processes. The latest seismic event occurred near Langwedel on 22nd November 2012 and had a magnitude of 2.8. Understanding the causes of the seismicity in Northern Germany is crucial for a thorough evaluation. Therefore the Seismological Service of Lower Saxony (NED) was established at the State Office for Mining, Energy and Geology (LBEG) of Lower Saxony in January 2013. Its main task is the monitoring and evaluation of the seismicity in Lower Saxony and adjacent areas. Scientific and technical questions are addressed in close cooperation with the Seismological Central Observatory (SZO) at the Federal Institute for Geosciences and Natural Resources (BGR). The seismological situation of Northern Germany will be presented and possible causes of the seismicity introduced. Rare seismic events at greater depths are distributed over the whole region and are probably purely tectonic, whereas events in the vicinity of natural gas fields are probably related to gas production. Improving the detection threshold for seismic events in Northern Germany is necessary to provide a better statistical basis for further analyses addressing these questions. As a first step, the existing seismic network will be densified over the next few years. The first borehole station was installed near Rethem by BGR.

  11. Deep Mantle Seismic Modeling and Imaging

    Science.gov (United States)

    Lay, Thorne; Garnero, Edward J.

    2011-05-01

    Detailed seismic modeling and imaging of Earth's deep interior is providing key information about lower-mantle structures and processes, including heat flow across the core-mantle boundary, the configuration of mantle upwellings and downwellings, phase equilibria and transport properties of deep mantle materials, and mechanisms of core-mantle coupling. Multichannel seismic wave analysis methods that provide the highest-resolution deep mantle structural information include network waveform modeling and stacking, array processing, and 3D migrations of P- and S-wave seismograms. These methods detect and identify weak signals from structures that cannot be resolved by global seismic tomography. Some methods are adapted from oil exploration seismology, but all are constrained by the source and receiver distributions, long travel paths, and strong attenuation experienced by seismic waves that penetrate to the deep mantle. Large- and small-scale structures, with velocity variations ranging from a fraction of a percent to tens of percent, have been detected and are guiding geophysicists to new perspectives of thermochemical mantle convection and evolution.

  12. VSP wave separation by adaptive masking filters

    Science.gov (United States)

    Rao, Ying; Wang, Yanghua

    2016-06-01

    In vertical seismic profiling (VSP) data processing, the first step is often to separate the down-going wavefield from the up-going wavefield. When using a masking filter for VSP wave separation, there are difficulties associated with the two termination ends of the up-going waves. A critical challenge is how the masking filter can restore the energy tails; the edge effects associated with these terminations are unique to VSP data. An effective strategy is to implement masking filters in both the τ-p and the f-k domain sequentially. Meanwhile, a median filter is used to produce a clean but smooth version of the down-going wavefield, which serves as a reference data set for designing the masking filter. The masking filter is implemented adaptively and iteratively, gradually restoring the energy tails cut out by any surgical mute. Because the τ-p and the f-k domain masking filters target different depth ranges of the VSP, this combined strategy can accurately perform wave separation on field VSP data.
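
The median-filter reference wavefield described above can be sketched in a few lines. This is a minimal illustration under our own assumptions (traces are aligned on picked first-break times before the median is taken; the function and variable names are ours, not the authors'):

```python
import numpy as np

def downgoing_reference(vsp, first_breaks):
    """Rough down-going wavefield estimate for a VSP gather.

    vsp          : 2D array, shape (n_receivers, n_samples)
    first_breaks : first-arrival sample index per receiver; shifting each
                   trace by this lag aligns the down-going events
    """
    n_rec, _ = vsp.shape
    flat = np.zeros_like(vsp)
    for i in range(n_rec):                  # flatten on the first breaks
        flat[i] = np.roll(vsp[i], -first_breaks[i])
    # the median across receivers keeps the aligned down-going energy
    # and suppresses the (unaligned) up-going energy
    median_trace = np.median(flat, axis=0)
    down = np.zeros_like(vsp)
    for i in range(n_rec):                  # shift the estimate back
        down[i] = np.roll(median_trace, first_breaks[i])
    return down
```

The resulting clean but smooth down-going estimate would then serve as the reference data set when designing the adaptive masking filter.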

  13. Deterministic seismic hazard macrozonation of India

    Science.gov (United States)

    Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.

    2012-10-01

    Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach, using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces of the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed for the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each cell by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all grid points with a deterministic approach, using a code written in MATLAB. Epistemic uncertainty in the hazard definition was handled within a logic-tree framework considering the two types of sources and three attenuation models for each grid point. A hazard evaluation without the logic-tree approach was also carried out for comparison. Contour maps showing the spatial variation of the hazard values are presented in the paper.
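
The smoothing of point sources mentioned above is commonly done with a Gaussian kernel applied to gridded epicenter counts, in the style of Frankel (1995). The numpy sketch below is our own stand-in for the authors' MATLAB program; the cell size and correlation distance are illustrative assumptions, not values from the paper:

```python
import numpy as np

def smooth_point_sources(counts, cell_km=10.0, c_km=50.0):
    """Gaussian smoothing of gridded earthquake counts.

    counts  : 2D array of epicenter counts per grid cell
    cell_km : grid cell size in km (a 0.1 deg cell is roughly 10 km)
    c_km    : correlation distance of the Gaussian kernel
    """
    half = int(np.ceil(3 * c_km / cell_km))         # kernel half-width in cells
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = np.exp(-((x**2 + y**2) * cell_km**2) / c_km**2)
    kernel /= kernel.sum()                          # preserve the total count
    padded = np.pad(counts, half)                   # zero padding at the edges
    out = np.zeros(counts.shape)
    for i in range(counts.shape[0]):                # plain 2D convolution
        for j in range(counts.shape[1]):
            window = padded[i:i + 2 * half + 1, j:j + 2 * half + 1]
            out[i, j] = np.sum(window * kernel)
    return out
```

Each smoothed cell value is a distance-weighted average of nearby epicenter counts, which spreads the activity rate of each point source over its neighbourhood before the attenuation relations are applied.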

  14. Deterministic seismic hazard macrozonation of India

    Indian Academy of Sciences (India)

    Sreevalsa Kolathayar; T G Sitharam; K S Vipin

    2012-10-01

    Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°–38°N and 68°–98°E) based on the deterministic approach, using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces of the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed for the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each cell by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s were calculated for all grid points with a deterministic approach, using a code written in MATLAB. Epistemic uncertainty in the hazard definition was handled within a logic-tree framework considering the two types of sources and three attenuation models for each grid point. A hazard evaluation without the logic-tree approach was also carried out for comparison. Contour maps showing the spatial variation of the hazard values are presented in the paper.

  15. Maximal right smooth extension chains

    CERN Document Server

    Huang, Yun Bao

    2010-01-01

    If $w=u\alpha$ for $\alpha\in \Sigma=\{1,2\}$ and $u\in \Sigma^*$, then $w$ is said to be a \textit{simple right extension} of $u$, denoted by $u\prec w$. Let $k$ be a positive integer and let $P^k(\epsilon)$ denote the set of all $C^\infty$-words of height $k$. Given $u_{1}, u_{2}, \ldots, u_{m}\in P^{k}(\epsilon)$, if $u_{1}\prec u_{2}\prec \cdots \prec u_{m}$ and there is no element $v$ of $P^{k}(\epsilon)$ such that $v\prec u_{1}$ or $u_{m}\prec v$, then $u_{1}\prec u_{2}\prec\cdots\prec u_{m}$ is said to be a \textit{maximal right smooth extension (MRSE) chain} of height $k$. In this paper, we show that the \textit{MRSE} chains of height $k$ constitute a partition of the smooth words of height $k$ and give a formula for the number of \textit{MRSE} chains of height $k$ for each positive integer $k$. Moreover, since there exist a minimal height $h_1$ and a maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that \textit{MRSE} chains of heights $h_1-1$ and $h_2+1$ are good candidates t...

  16. Using Safety Margins for a German Seismic PRA

    Directory of Open Access Journals (Sweden)

    Ralf Obenland

    2008-01-01

    The German regulatory guide demands the performance of a probabilistic risk assessment (PRA) including external events. In 2005, a new methodology guideline (Methodenband), based on the current state of science and technology, was released to provide the analyst with a set of suitable tools and methodologies for the analysis of all PRA events. In the case of earthquakes, a multilevel verification procedure is suggested. The verification procedure to be used depends on the seismic risk at the site of the plant: for sites in areas with low seismic activity, no analysis or only a reduced analysis is proposed. This paper describes the evaluation of safety margins of buildings, structures, components and systems for plants at sites with high seismic risk, corresponding to the German methodology guideline. The seismic PRA results in an estimation of core damage frequencies caused by earthquakes. Additionally, the described approach can also be adapted for use in a reduced analysis for sites with lower earthquake risk. Westinghouse has wide experience in performing seismic PRAs for both BWR and PWR plants. Westinghouse uses the documented set of seismic design analyses dating from the construction phase, and from later updates where available, as the basis for a seismic PRA, which means that usually no costly new structural-mechanics calculations have to be performed.

  17. Seismic Base Isolation Analysis for PASCAR Liquid Metal Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kuk Hee; Yoo, Bong; Kim, Yun Jae [Korea Univ., Seoul (Korea, Republic of)

    2008-10-15

    This paper presents a study for developing a seismic isolation system for the PASCAR (Proliferation resistant, Accident-tolerant, Self-supported, Capsular and Assured Reactor) liquid metal reactor design. PASCAR uses lead-bismuth eutectic (LBE) as coolant. Because the density of the LBE coolant (10,000 kg/m³) is much higher than that of sodium coolant or water, it presents a challenge to designers of the seismic isolation systems that will be used with these heavy liquid metal reactors. Finite element analysis is used to determine the characteristics of the isolator device. Results are presented from a study on the application of three-dimensional seismic isolation devices to the full-scale reactor. The seismic responses of the two-dimensional and the three-dimensional isolation systems for the PASCAR are compared with that of the conventional fixed-base system.

  18. Pharmacology of airway smooth muscle proliferation

    NARCIS (Netherlands)

    Gosens, Reinoud; Roscioni, Sara S.; Dekkers, Bart G. J.; Pera, Tonio; Schmidt, Martina; Schaafsma, Dedmer; Zaagsma, Johan; Meurs, Herman

    2008-01-01

    Airway smooth muscle thickening is a pathological feature that contributes significantly to airflow limitation and airway hyperresponsiveness in asthma. Ongoing research efforts aimed at identifying the mechanisms responsible for the increased airway smooth muscle mass have indicated that hyperplasi

  19. Smooth Optimization Approach for Sparse Covariance Selection

    OpenAIRE

    Lu, Zhaosong

    2009-01-01

    In this paper we first study a smooth optimization approach for solving a class of nonsmooth strictly concave maximization problems whose objective functions admit smooth convex minimization reformulations. In particular, we apply Nesterov's smooth optimization technique [Y.E. Nesterov, Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543--547; Y. E. Nesterov, Math. Programming, 103 (2005), pp. 127--152] to their dual counterparts that are smooth convex problems. It is shown that the resulting approach...

  20. Landslide seismic magnitude

    Science.gov (United States)

    Lin, C. H.; Jan, J. C.; Pu, H. C.; Tu, Y.; Chen, C. C.; Wu, Y. M.

    2015-11-01

    Landslides have become one of the deadliest natural disasters on earth, due not only to a significant increase in extreme weather caused by global climate change, but also to rapid economic development in areas of high topographic relief. How to detect landslides with a real-time system has become an important question for reducing possible landslide impacts on human society. However, traditional detection of landslides, either through direct surveys in the field or through remote sensing images obtained from aircraft or satellites, is highly time consuming. Here we analyze very-long-period seismic signals (20-50 s) generated by large landslides, such as those triggered by Typhoon Morakot, which passed through Taiwan in August 2009. In addition to successfully locating 109 large landslides, we define a landslide seismic magnitude based on an empirical formula: Lm = log(A) + 0.55 log(Δ) + 2.44, where A is the maximum displacement (μm) recorded at one seismic station and Δ is its distance (km) from the landslide. We conclude that both the location and the seismic magnitude of large landslides can be rapidly estimated from broadband seismic networks for both academic and applied purposes, similar to earthquake monitoring. We suggest that a real-time algorithm be set up for routine monitoring of landslides in places where they pose a frequent threat.
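
As a worked example of the empirical formula, assuming base-10 logarithms (as is usual for magnitude scales):

```python
import math

def landslide_magnitude(A_um, delta_km):
    """Lm = log10(A) + 0.55*log10(D) + 2.44, with A the maximum displacement
    in micrometers at one station and D the station-to-landslide distance
    in km (formula from the abstract; log base 10 is our assumption)."""
    return math.log10(A_um) + 0.55 * math.log10(delta_km) + 2.44

# A 100 um peak displacement recorded 100 km from the landslide:
# log10(100) + 0.55*log10(100) + 2.44 = 2.0 + 1.1 + 2.44 = 5.54
```

In practice one such estimate per recording station would be averaged over the network, as is done for earthquake magnitudes.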

  1. A SAS IML Macro for Loglinear Smoothing

    Science.gov (United States)

    Moses, Tim; von Davier, Alina

    2011-01-01

    Polynomial loglinear models for one-, two-, and higher-way contingency tables have important applications to measurement and assessment. They are essentially regarded as a smoothing technique, which is commonly referred to as loglinear smoothing. A SAS IML (SAS Institute, 2002a) macro was created to implement loglinear smoothing according to…
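
The technique itself is straightforward to sketch: fit the log expected frequencies as a polynomial in the score by Poisson maximum likelihood. The Python sketch below is a generic illustration of loglinear smoothing, not the SAS IML macro's code; the Newton-iteration details and all names are our own:

```python
import numpy as np

def loglinear_smooth(freqs, degree=2, iters=50):
    """Smooth a score-frequency distribution with a polynomial loglinear
    model, log(m_x) = b0 + b1*x + ... + bd*x^d, fit by Poisson maximum
    likelihood (Newton iterations). A degree-d fit preserves the first
    d moments of the observed distribution."""
    n = freqs.sum()
    x = np.arange(len(freqs), dtype=float)
    x = (x - x.mean()) / x.std()            # scale scores for stability
    X = np.vander(x, degree + 1, increasing=True)
    beta = np.zeros(degree + 1)
    beta[0] = np.log(n / len(freqs))        # start from the uniform fit
    for _ in range(iters):
        m = np.exp(X @ beta)                # current fitted frequencies
        grad = X.T @ (freqs - m)            # score of the log-likelihood
        hess = X.T @ (X * m[:, None])       # observed information
        beta += np.linalg.solve(hess, grad)
    return np.exp(X @ beta)
```

A degree-2 fit reproduces the observed total, mean and variance of the score distribution while smoothing away sampling noise in the individual cell counts.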

  2. Seismic velocity estimation from time migration

    Energy Technology Data Exchange (ETDEWEB)

    Cameron, Maria Kourkina [Univ. of California, Berkeley, CA (United States)

    2007-01-01

    This is concerned with imaging and wave propagation in nonhomogeneous media, and includes a collection of computational techniques, such as level set methods with material transport, Dijkstra-like Hamilton-Jacobi solvers for first arrival Eikonal equations and techniques for data smoothing. The theoretical components include aspects of seismic ray theory, and the results rely on careful comparison with experiment and incorporation as input into large production-style geophysical processing codes. Producing an accurate image of the Earth's interior is a challenging aspect of oil recovery and earthquake analysis. The ultimate computational goal, which is to accurately produce a detailed interior map of the Earth's makeup on the basis of external soundings and measurements, is currently out of reach for several reasons. First, although vast amounts of data have been obtained in some regions, this has not been done uniformly, and the data contain noise and artifacts. Simply sifting through the data is a massive computational job. Second, the fundamental inverse problem, namely to deduce the local sound speeds of the earth that give rise to measured reflected signals, is exceedingly difficult: shadow zones and complex structures can make for ill-posed problems, and require vast computational resources. Nonetheless, seismic imaging is a crucial part of the oil and gas industry. Typically, one makes assumptions about the earth's substructure (such as laterally homogeneous layering), and then uses this model as input to an iterative procedure to build perturbations that more closely satisfy the measured data. Such models often break down when the material substructure is significantly complex: not surprisingly, this is often where the most interesting geological features lie. Data often come in a particular, somewhat non-physical coordinate system, known as time migration coordinates. The construction of substructure models from these data is less and less

  3. Seismic velocity estimation from time migration

    Science.gov (United States)

    Cameron, Maria Kourkina

    This is concerned with imaging and wave propagation in nonhomogeneous media, and includes a collection of computational techniques, such as level set methods with material transport, Dijkstra-like Hamilton-Jacobi solvers for first arrival Eikonal equations and techniques for data smoothing. The theoretical components include aspects of seismic ray theory, and the results rely on careful comparison with experiment and incorporation as input into large production-style geophysical processing codes. Producing an accurate image of the Earth's interior is a challenging aspect of oil recovery and earthquake analysis. The ultimate computational goal, which is to accurately produce a detailed interior map of the Earth's makeup on the basis of external soundings and measurements, is currently out of reach for several reasons. First, although vast amounts of data have been obtained in some regions, this has not been done uniformly, and the data contain noise and artifacts. Simply sifting through the data is a massive computational job. Second, the fundamental inverse problem, namely to deduce the local sound speeds of the earth that give rise to measured reflected signals, is exceedingly difficult: shadow zones and complex structures can make for ill-posed problems, and require vast computational resources. Nonetheless, seismic imaging is a crucial part of the oil and gas industry. Typically, one makes assumptions about the earth's substructure (such as laterally homogeneous layering), and then uses this model as input to an iterative procedure to build perturbations that more closely satisfy the measured data. Such models often break down when the material substructure is significantly complex: not surprisingly, this is often where the most interesting geological features lie. Data often come in a particular, somewhat non-physical coordinate system, known as time migration coordinates. 
The construction of substructure models from these data is less and less reliable as the

  4. Studies on seismic source

    Institute of Scientific and Technical Information of China (English)

    李世愚; 陈运泰

    2003-01-01

    During the period 1999-2002, Chinese seismologists made a series of advances in the study of seismic sources, including observations, experiments and theory. In the field of observation, methods for the accurate location of earthquake sources, the inversion of the seismic moment tensor and the analysis of earthquake source mechanisms were improved and developed. Many important earthquake events were studied using these methods. The rupture processes of these events were inverted and investigated in combination with the local stress fields and the tectonic moment, using measurements of surface deformation. In the fields of experiment and theory, many advances were made concerning the causes of earthquake formation, stress and tectonic conditions, the dynamics of earthquake rupture, rock fracture, and the nucleation of strong earthquakes.

  5. Towards Exascale Seismic Imaging and Inversion

    Science.gov (United States)

    Tromp, J.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Lei, W.; Ruan, Y.

    2015-12-01

    Post-petascale supercomputers are now available to solve complex scientific problems that were thought unreachable a few decades ago. They also bring a cohort of concerns tied to obtaining optimum performance. Several issues are currently being investigated by the HPC community, including energy consumption, fault resilience, scalability of the current parallel paradigms, workflow management, I/O performance and feature extraction with large datasets. In this presentation, we focus on the last three issues. In the context of seismic imaging and inversion, in particular for simulations based on adjoint methods, workflows are well defined. They consist of a few collective steps (e.g., mesh generation or model updates) and of a large number of independent steps (e.g., forward and adjoint simulations of each seismic event, pre- and postprocessing of seismic traces). The greater goal is to reduce the time to solution, that is, to obtain a more precise representation of the subsurface as fast as possible. This brings us to consider both the workflow in its entirety and the parts comprising it. The usual approach is to speed up the purely computational parts through code optimization, in order to reach higher FLOPS and better memory management. This remains an important concern, but larger-scale experiments show that the imaging workflow suffers from severe I/O bottlenecks. Such limitations occur both for purely computational data and for seismic time series. The latter are dealt with by the introduction of a new Adaptable Seismic Data Format (ASDF). Parallel I/O libraries, namely HDF5 and ADIOS, are used to drastically reduce the cost of disk access. Parallel visualization tools, such as VisIt, are able to take advantage of ADIOS metadata to extract features and display massive datasets. Because large parts of the workflow are embarrassingly parallel, we are investigating the possibility of automating the imaging process with the integration of scientific workflow

  6. Probabilistic seismic vulnerability and risk assessment of stone masonry structures

    Science.gov (United States)

    Abo El Ezz, Ahmad

    conducting rapid vulnerability assessment of stone masonry buildings. With modification of the input structural parameters, it can be adapted and applied to any other building class. A sensitivity analysis of the seismic vulnerability modelling is conducted to quantify the uncertainties associated with each of the input parameters. The proposed methodology was validated in a scenario-based seismic risk assessment of existing buildings in Old Quebec City. The procedure for hazard-compatible vulnerability modelling was used to develop seismic fragility functions in terms of spectral acceleration representative of the inventoried buildings. A total of 1220 buildings were considered. The assessment was performed for a scenario event of magnitude 6.2 at a distance of 15 km, with a probability of exceedance of 2% in 50 years. The study showed that most of the expected damage is concentrated in the old brick and stone masonry buildings.

  7. Seismic Disaster Reduction in China

    Institute of Scientific and Technical Information of China (English)

    Ministry of Construction

    2001-01-01

    Great accomplishments have been made in seismic disaster reduction in China's engineering construction and city construction projects during the past decade (1990-2000). A new national map of seismic intensity zonation has been promulgated, and a series of anti-seismic standards and norms have been drafted or revised, which has further improved the country's technical code system for anti-seismic engineering measures.

  8. Strain history and TGF-β1 induce urinary bladder wall smooth muscle remodeling and elastogenesis

    OpenAIRE

    Heise, Rebecca L.; Parekh, Aron; Joyce, Erinn M.; Michael B. Chancellor; Sacks, Michael S.

    2011-01-01

    Mechanical cues that trigger pathological remodeling in smooth muscle tissues remain largely unknown and are thought to be pivotal triggers for strain-induced remodeling. Thus, an understanding of the effects of mechanical stimulation is important to elucidate the underlying mechanisms of disease states and to develop methods for smooth muscle tissue regeneration. For example, the urinary bladder wall (UBW) adaptation to spinal cord injury (SCI) includes extensive hypertrophy as well as i...

  9. B341 Seismic Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Halle, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-02

    The seismic evaluation of Building 341, located at Lawrence Livermore National Laboratory in Livermore, California, has been completed. The subject building consists of a main building, Increment 1, and two smaller additions, Increments 2 and 3. Based on our evaluation, the building does not meet a Life Safety performance level for the BSE-1E earthquake ground-shaking hazard. The BSE-1E is the recommended seismic hazard level for the evaluation of existing structures and is based on a 20% probability of exceedance in 50 years.

  10. Induced Seismicity Monitoring System

    Science.gov (United States)

    Taylor, S. R.; Jarpe, S.; Harben, P.

    2014-12-01

    There are many seismological aspects associated with monitoring of permanent storage of carbon dioxide (CO2) in geologic formations. Many of these involve monitoring underground gas migration through detailed tomographic studies of rock properties, cap rock integrity and microseismicity over time. These types of studies require expensive deployments of surface and borehole sensors in the vicinity of the CO2 injection wells. Another problem that may exist in CO2 sequestration fields is the potential for damaging induced seismicity associated with fluid injection into the geologic reservoir. Seismic hazard monitoring in CO2 sequestration fields requires a seismic network over a spatially larger region, possibly with stations in remote settings. Expensive observatory-grade seismic systems are not necessary for seismic hazard deployments or small-scale tomographic studies. Hazard monitoring requires accurate location of induced seismicity down to magnitude levels only slightly below what can be felt at the surface (e.g. magnitude 1), and the frequencies of interest for tomographic analysis are ~1 Hz and greater. We have developed a seismo/acoustic smart sensor system that can achieve the goals necessary for induced seismicity monitoring in CO2 sequestration fields. The unit is inexpensive, lightweight, easy to deploy, can operate remotely under harsh conditions and features 9 channels of recording (currently a 3C 4.5 Hz geophone, a MEMS accelerometer and a microphone). An on-board processor allows for satellite transmission of parameter data to a processing center. Continuous or event-detected data are kept on two removable flash SD cards of up to 64+ Gbytes each. If available, data can be transmitted via cell phone modem or picked up via site visits. Low power consumption allows for autonomous operation using only a 10 watt solar panel and a gel-cell battery.
The system has been successfully tested for long-term (> 6 months) remote operations over a wide range

  11. Smoothing of Piecewise Linear Paths

    Directory of Open Access Journals (Sweden)

    Michel Waringo

    2008-11-01

    We present an anytime-capable, fast, deterministic greedy algorithm for smoothing piecewise linear paths consisting of connected linear segments. With this method, path points with only a small influence on the path geometry (i.e. aligned or nearly aligned points) are successively removed. Due to the removal of less important path points, the computational and memory requirements of the paths are reduced and traversing the path is accelerated. Our algorithm can be used in many different applications, e.g. sweeping, path finding, programming-by-demonstration in a virtual environment, or 6D CNC milling. The algorithm handles points with positional and orientational coordinates of arbitrary dimension.
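
The point-removal idea can be sketched as follows. This is a simplified illustration only (the published algorithm's cost measure, anytime behaviour and handling of orientational coordinates are richer than this), using perpendicular deviation from the segment through the neighbouring points as the influence measure:

```python
import numpy as np

def smooth_polyline(points, tol=1e-3):
    """Greedily remove the interior point whose deviation from the segment
    through its neighbours is smallest, while that deviation stays below
    `tol` (aligned or nearly aligned points go first). Works for points
    of any dimension."""
    pts = [np.asarray(p, dtype=float) for p in points]

    def deviation(a, b, c):
        # distance of b from the segment a-c
        ac = c - a
        denom = np.dot(ac, ac)
        if denom == 0.0:
            return float(np.linalg.norm(b - a))
        t = np.clip(np.dot(b - a, ac) / denom, 0.0, 1.0)
        return float(np.linalg.norm(b - (a + t * ac)))

    while len(pts) > 2:
        devs = [deviation(pts[i - 1], pts[i], pts[i + 1])
                for i in range(1, len(pts) - 1)]
        k = int(np.argmin(devs))
        if devs[k] >= tol:
            break
        del pts[k + 1]          # drop the least influential interior point
    return pts
```

Collinear interior points are removed first; the loop stops once every remaining point deviates from its neighbours' segment by at least `tol`.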

  12. Towards a Comprehensive Catalog of Volcanic Seismicity

    Science.gov (United States)

    Thompson, G.

    2014-12-01

    Catalogs of earthquakes located using differential travel-time techniques are a core product of volcano observatories, and while vital, they represent an incomplete perspective of volcanic seismicity. Many (often most) earthquakes are too small to locate accurately and are omitted from available catalogs. Low-frequency events, tremor and signals related to rockfalls, pyroclastic flows and lahars are not systematically catalogued, yet from a hazard management perspective they are exceedingly important. Because STA/LTA detection schemes break down in the presence of high-amplitude tremor, swarms or dome collapses, catalogs may suggest low seismicity precisely when seismicity peaks. We propose to develop a workflow and underlying software toolbox that can be applied to near-real-time and offline waveform data to produce comprehensive catalogs of volcanic seismicity. Existing tools to detect and locate phaseless signals will be adapted to fit within this framework. For this proof of concept the toolbox will be developed in MATLAB, extending the existing GISMO toolbox (an object-oriented MATLAB toolbox for seismic data analysis). Existing database schemas such as CSS 3.0 will need to be extended to describe this wider range of volcano-seismic signals; WOVOdat may already incorporate many of the additional tables needed. Thus our framework may act as an interface between volcano observatories (or campaign-style research projects) and WOVOdat. We aim to take the further step of reducing volcano-seismic catalogs to sets of continuous metrics that are useful for recognizing data trends, and for feeding alarm systems and forecasting techniques. Previous experience has shown that frequency index, peak frequency, mean frequency, mean event rate, median event rate, and cumulative magnitude (or energy) are potentially useful metrics to generate for all catalogs at a 1-minute sample rate (directly comparable with RSAM and similar metrics derived from continuous data). Our framework
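
Of the metrics listed, the frequency index is the simplest to illustrate. The sketch below uses a plain FFT; the band edges are placeholder assumptions of ours, not values from the abstract (observatories choose bands to suit their instruments and targets):

```python
import numpy as np

def frequency_index(trace, fs, low=(1.0, 2.0), high=(10.0, 20.0)):
    """Frequency index of a seismic trace: FI = log10(A_high / A_low),
    the log ratio of the mean spectral amplitude in a high and a low
    frequency band. Negative FI flags low-frequency events, positive FI
    high-frequency events."""
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    a_low = spec[(freqs >= low[0]) & (freqs <= low[1])].mean()
    a_high = spec[(freqs >= high[0]) & (freqs <= high[1])].mean()
    return np.log10(a_high / a_low)
```

Computed per minute over continuous data, such a metric stays well defined even when individual events cannot be separated, which is exactly the regime where STA/LTA catalogs fail.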

  13. Income and Consumption Smoothing among US States

    DEFF Research Database (Denmark)

    Sørensen, Bent; Yosha, Oved

    We quantify the amount of cross-sectional income and consumption smoothing achieved within subgroups of states, such as regions or clubs, e.g. the club of rich states. We find that there is much income smoothing between as well as within regions. By contrast, consumption smoothing occurs mainly within regions but not between regions. This suggests that capital markets transcend regional barriers while credit markets are regional in their nature. Smoothing within the club of rich states is accomplished mainly via capital markets whereas consumption smoothing is dominant within the club of poor states. The fraction of a shock to gross state products smoothed by the federal tax-transfer system is the same for various regions and other clubs of states. We calculate the scope for consumption smoothing within various regions and clubs, finding that most gains from risk sharing can be achieved…

  14. Two-dimensional magnetotelluric inversion using reflection seismic data as constraints and application in the COSC project

    Science.gov (United States)

    Yan, Ping; Kalscheuer, Thomas; Hedin, Peter; Garcia Juanatey, Maria A.

    2017-04-01

    We present a novel 2-D magnetotelluric (MT) inversion scheme, in which the local weights of the regularizing smoothness constraints are based on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. Successful application of the inversion to MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using the envelope attribute of the COSC reflection seismic profile helped to reduce the uncertainty of the interpretation of the main décollement by demonstrating that the associated alum shales may be much thinner than suggested by a previous inversion model. Thus, the new model supports the proposed location of a future borehole COSC-2 which is hoped to penetrate the main décollement and the underlying Precambrian basement.
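
The envelope-derived weighting can be sketched as follows; the functional form (a minimum-gradient-support-style weight) and the parameter beta are assumptions for illustration, not the exact published scheme:

```python
import numpy as np

def envelope_weights(env, beta=0.1):
    """Directional weights for MT smoothness constraints from a seismic
    envelope section (axis 0 = depth, axis 1 = distance). Assumed form:
    w = beta^2 / (g^2 + beta^2), with g the directional gradient of the
    envelope normalized by its maximum, so that weights shrink where the
    envelope changes rapidly (a likely reflector), relaxing smoothing
    across it, and stay near 1 elsewhere."""
    gz, gx = np.gradient(env.astype(float))
    def w(g):
        gn = g / (np.abs(g).max() + 1e-12)
        return beta**2 / (gn**2 + beta**2)
    return w(gz), w(gx)   # vertical and horizontal constraint weights

# Synthetic section with one sharp horizontal reflector:
env = np.zeros((20, 20))
env[10:, :] = 1.0
wz, wx = envelope_weights(env)
```

Here the vertical weights collapse along the interface while horizontal weights are untouched, which is the separate directional treatment the abstract describes.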

  15. Swept Impact Seismic Technique (SIST)

    Science.gov (United States)

    Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.

    1996-01-01

    A coded seismic technique has been developed that can yield a higher signal-to-noise ratio than a conventional single-pulse method. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time, providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random-impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with far fewer impacts than are normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data, especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency equal to the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array, as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stack" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
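
The shift-and-stack decoding lends itself to a compact sketch; the impact times, response length, and impulse response below are invented for illustration:

```python
import numpy as np

def shift_and_stack(record, impact_samples, out_len):
    """Decode a SIST-style record: align the continuous trace on each
    recorded impact time and stack, so the repeated earth response adds
    coherently while uncorrelated arrivals average down."""
    out = np.zeros(out_len)
    used = 0
    for s in impact_samples:
        if s + out_len <= len(record):
            out += record[s:s + out_len]
            used += 1
    return out / max(used, 1)

# Toy example: the "earth response" is an impulse 10 samples after each impact.
impacts = [0, 55, 120, 200, 240]      # impact rate changing with time
record = np.zeros(300)
for s in impacts:
    record[s + 10] += 1.0
decoded = shift_and_stack(record, impacts, out_len=50)
```

The decoded trace recovers the impulse at its true lag with unit amplitude, with no cross-correlation required.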

  16. The Salton Seismic Imaging Project: Seismic velocity structure of the Brawley Seismic Zone, Salton Buttes and Geothermal Field, Salton Trough, California

    Science.gov (United States)

    Delph, J.; Hole, J. A.; Fuis, G. S.; Stock, J. M.; Rymer, M. J.

    2011-12-01

    The Salton Trough is an active rift in southern California in a step-over between the plate-bounding Imperial and San Andreas Faults. In March 2011, the Salton Seismic Imaging Project (SSIP) investigated the rift's crustal structure by acquiring several seismic refraction and reflection lines. One of the densely sampled refraction lines crosses the northernmost Imperial Valley, perpendicular to the strike-slip faults and parallel to a line of small Quaternary rhyolitic volcanoes. The line crosses the obliquely extensional Brawley Seismic Zone and goes through one of the most geothermally productive areas in the United States. Well logs indicate the valley is filled by several kilometers of late Pliocene-recent lacustrine, fluvial, and shallow marine sediment. The 42-km-long seismic line comprised eleven 110-460 kg explosive shots, with receivers at 100 m spacing. First-arrival travel times were used to build a tomographic seismic velocity image of the upper crust. Velocity in the valley increases smoothly from 5 km/s, indicating diagenesis and gradational metamorphism of rift sediments at very shallow depth due to an elevated geotherm. The velocity gradient is much smaller in the relatively low velocity (Chocolate Mountains. The tomographic model shows that the shallow metasedimentary basement as well as the geothermal and volcanic activity seem to be bounded by the sharp western and eastern margins of the Brawley Seismic Zone. At this location, strongly fractured crust allows both hydrothermal and magmatic fluids to rise to the surface in the most rapidly extending portion of the rift basin.

  17. Time-dependent probabilistic seismic hazard assessment and its application to Hualien City, Taiwan

    Directory of Open Access Journals (Sweden)

    C.-H. Chan

    2013-05-01

    Here, we propose a time-dependent probabilistic seismic hazard assessment and apply it to Hualien City, Taiwan. A declustered catalog from 1940 to 2005 was used to build a long-term seismicity rate model using a smoothing kernel function. We also evaluated short-term seismicity rate perturbations according to the rate-and-state friction model and the Coulomb stress changes imparted by earthquakes from 2006 to 2010. We assessed both long-term and short-term probabilistic seismic hazards by considering ground motion prediction equations for crustal and subduction earthquakes. The long-term seismic hazard in Hualien City gave a PGA (peak ground acceleration) of 0.46 g for the 2.1‰ annual exceedance probability. The result is similar to the levels determined in previous studies. Seismic hazards were significantly elevated following the 2007 ML = 5.8 earthquake that occurred approximately 10 km from Hualien City. This work presents an assessment of a suitable mechanism for time-dependent probabilistic seismic hazard determinations using an updated earthquake catalog. Using minor model assumptions, our approach provides a suitable basis for rapid re-evaluations and will benefit decision-makers and public officials regarding seismic hazard mitigation.
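
As a simplified illustration of the smoothing-kernel step (a fixed isotropic Gaussian bandwidth rather than the paper's kernel, with invented coordinates):

```python
import numpy as np

def smoothed_rate(epicenters, grid_x, grid_y, bandwidth_km=10.0):
    """Long-term spatial seismicity density by Gaussian kernel smoothing of
    past epicenter locations (fixed isotropic bandwidth, a simplified
    stand-in for the smoothing kernel described above; coords in km)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx, dtype=float)
    for x, y in epicenters:
        rate += np.exp(-((gx - x) ** 2 + (gy - y) ** 2)
                       / (2.0 * bandwidth_km ** 2))
    # Normalize each kernel so the density integrates to the event count.
    return rate / (2.0 * np.pi * bandwidth_km ** 2)

grid = np.arange(-100.0, 101.0, 1.0)      # 1 km cells
epicenters = [(0.0, 0.0), (20.0, 5.0)]    # invented catalog
rate = smoothed_rate(epicenters, grid, grid)
```

Summing `rate` over the 1 km² cells recovers the number of events, which is what makes the smoothed field usable as a rate model.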

  18. Nonstructural seismic restraint guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Butler, D.M.; Czapinski, R.H.; Firneno, M.J.; Feemster, H.C.; Fornaciari, N.R.; Hillaire, R.G.; Kinzel, R.L.; Kirk, D.; McMahon, T.T.

    1993-08-01

    The Nonstructural Seismic Restraint Guidelines provide general information about how to secure or restrain items (such as material, equipment, furniture, and tools) in order to prevent injury and property, environmental, or programmatic damage during or following an earthquake. All SNL sites may experience earthquakes of magnitude 6.0 or higher on the Richter scale. Therefore, these guidelines are written for all SNL sites.

  19. Understanding induced seismicity

    NARCIS (Netherlands)

    Elsworth, Derek; Spiers, Christopher J.; Niemeijer, Andre R.

    2016-01-01

    Fluid injection–induced seismicity has become increasingly widespread in oil- and gas-producing areas of the United States (1–3) and western Canada. It has shelved deep geothermal energy projects in Switzerland and the United States (4), and its effects are especially acute in Oklahoma, where seismi

  1. Mobile seismic exploration

    Science.gov (United States)

    Dräbenstedt, A.; Cao, X.; Polom, U.; Pätzold, F.; Zeller, T.; Hecker, P.; Seyfried, V.; Rembe, C.

    2016-06-01

    Laser-Doppler-Vibrometry (LDV) is an established technique to measure vibrations in technical systems with picometer vibration-amplitude resolution. Especially good sensitivity and resolution can be achieved at an infrared wavelength of 1550 nm. High-resolution vibration measurements are possible over distances of more than 100 m. This advancement of the LDV technique enables new applications. The detection of seismic waves is an application which has not been investigated so far, because seismic waves outside laboratory scales are usually analyzed at low frequencies between approximately 1 Hz and 250 Hz and require velocity resolutions below 1 nm/s/√Hz. Thermal displacements and air turbulence have critical influences on LDV measurements in this low-frequency range, leading to noise levels of several 100 nm/√Hz. Commonly, seismic waves are measured with highly sensitive inertial sensors (geophones or Micro-Electro-Mechanical Sensors (MEMS)). Developing a laser geophone based on the LDV technique is the topic of this paper. We have assembled an actively vibration-isolated optical table in a minivan which provides a hole in its underbody. The laser beam of an infrared LDV mounted on the optical table impinges on the ground below the car through the hole. A reference geophone detected the remaining vibrations on the table. We present the results of the first successful experimental demonstration of contactless detection of seismic waves from a movable vehicle with an LDV as a laser geophone.

  2. Mobile seismic exploration

    Energy Technology Data Exchange (ETDEWEB)

    Dräbenstedt, A., E-mail: a.draebenstedt@polytec.de, E-mail: rembe@iei.tu-clausthal.de, E-mail: ulrich.polom@liag-hannover.de; Seyfried, V. [Research & Development, Polytec GmbH, Waldbronn (Germany); Cao, X.; Rembe, C., E-mail: a.draebenstedt@polytec.de, E-mail: rembe@iei.tu-clausthal.de, E-mail: ulrich.polom@liag-hannover.de [Institute of Electrical Information Technology, TU Clausthal, Clausthal-Zellerfeld (Germany); Polom, U., E-mail: a.draebenstedt@polytec.de, E-mail: rembe@iei.tu-clausthal.de, E-mail: ulrich.polom@liag-hannover.de [Leibniz Institute of Applied Geophysics, Hannover (Germany); Pätzold, F.; Hecker, P. [Institute of Flight Guidance, TU Braunschweig, Braunschweig (Germany); Zeller, T. [Clausthaler Umwelttechnik Institut CUTEC, Clausthal-Zellerfeld (Germany)

    2016-06-28

  3. Geophysics and Seismic Hazard Reduction

    Institute of Scientific and Technical Information of China (English)

    YuGuihua; ZhouYuanze; YuSheng

    2003-01-01

    Earthquakes are natural phenomena that often bring serious harm to human life and property. An earthquake is a physical process that releases the Earth's internal energy, driven by internal and external forces in particular tectonic environments, especially within the lithosphere. Earthquakes cause casualties and losses only where people live. Seismic hazard reduction comprises four parts: seismic prediction, hazard prevention and seismic engineering, seismic response and rescue, and rebuilding.

  4. Efficient sinogram smoothing for dynamic neuroreceptor PET imaging

    Science.gov (United States)

    Pan, Xiaochuan; La Riviere, Patrick J.; Ye, James; Mukherjee, J.; Chen, Chin-Tu

    1997-05-01

    We have developed image-restoration techniques applicable to dynamic positron emission tomography that improve the visual quality and quantitative accuracy of neuroreceptor images. Starting with data from a study of dopamine D-2 receptors in rhesus monkey striata using selective radioligands such as fallypride, we performed a novel, effective 3D smoothing of the dynamic sinogram at a much lower computational cost than truly 3D adaptive smoothing. The processed sinogram was then input to a standard filtered back-projection algorithm, and the resulting images were sharper and less noisy than images reconstructed from the unprocessed sinogram. Simulations were performed, and the radioligand binding curves extracted from the restored images were found to be smoother and more accurate than those extracted from the unprocessed reconstructions. Comparison was also made to reconstructions from sinograms processed by the principal component analysis/projection onto convex sets algorithm.
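
Although the paper's restoration filter is more specialized, the cost advantage of axis-by-axis smoothing over a full 3-D kernel can be sketched generically (this is a plain separable Gaussian, not the authors' filter):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    radius = radius if radius is not None else int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def separable_smooth(sino, sigmas):
    """Smooth a (time, angle, bin) dynamic sinogram by 1-D convolution
    along each axis in turn: O(N * sum r_i) work instead of
    O(N * prod r_i) for a full 3-D kernel of the same extent."""
    out = sino.astype(float)
    for ax, s in enumerate(sigmas):
        if s > 0:
            out = np.apply_along_axis(
                np.convolve, ax, out, gaussian_kernel1d(s), mode="same")
    return out

rng = np.random.default_rng(0)
sino = rng.standard_normal((8, 32, 32))   # toy dynamic sinogram
out = separable_smooth(sino, (1.0, 1.5, 1.5))
```

Smoothing the sinogram before filtered back-projection, as in the abstract, keeps the reconstruction pipeline itself unchanged.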

  5. Modelling free surface flows with smoothed particle hydrodynamics

    Directory of Open Access Journals (Sweden)

    L.Di G.Sigalotti

    2006-01-01

    In this paper, the method of Smoothed Particle Hydrodynamics (SPH) is extended to include an adaptive density kernel estimation (ADKE) procedure. It is shown that for a van der Waals (vdW) fluid, this method can be used to deal with free-surface phenomena without difficulties. In particular, arbitrary moving boundaries can be easily handled because surface tension is effectively simulated by the cohesive pressure forces. Moreover, the ADKE method is seen to increase both the accuracy and stability of SPH, since it allows the width of the kernel interpolant to vary locally in a way that only the minimum necessary smoothing is applied at and near free surfaces and sharp fluid-fluid interfaces. The method is robust and easy to implement. Examples of its resolving power are given for both the formation of a circular liquid drop under surface tension and the nonlinear oscillation of excited drops.
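
A one-dimensional sketch of ADKE-style width selection, assuming the usual recipe of a fixed-width pilot density and a sensitivity exponent (both parameter values are illustrative):

```python
import numpy as np

def adke_widths(x, h0, eps=0.5):
    """ADKE-style adaptive kernel widths, a sketch: estimate a pilot
    density with a fixed-width Gaussian kernel, then set per-particle
    widths h_i = h0 * (pilot_i / geometric_mean) ** (-eps), so widths
    shrink where particles are dense and grow where they are sparse."""
    d2 = (x[:, None] - x[None, :]) ** 2
    pilot = np.exp(-d2 / (2.0 * h0**2)).sum(axis=1) / (np.sqrt(2.0 * np.pi) * h0)
    g = np.exp(np.mean(np.log(pilot)))   # geometric mean of the pilot
    return h0 * (pilot / g) ** (-eps)

# Dense cluster plus two isolated particles (e.g., near a free surface):
x = np.concatenate([np.linspace(0.0, 1.0, 20), np.array([5.0, 10.0])])
h = adke_widths(x, h0=0.5)
```

The isolated particles receive the widest kernels, which is the locally-minimal-smoothing behaviour the abstract credits for improved stability at free surfaces.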

  6. High Voltage Seismic Generator

    Science.gov (United States)

    Bogacz, Adrian; Pala, Damian; Knafel, Marcin

    2015-04-01

    This contribution describes the preliminary results of a year of cooperation among three student research groups from AGH UST in Krakow, Poland. The aim of this cooperation was to develop and construct a high-voltage seismic wave generator. The constructed device uses a high-energy electrical discharge to generate a seismic wave in the ground. This type of device can be applied in several different seismic measurement methods, but because of its limited power it is mainly dedicated to engineering geophysics. The source operates on basic physical principles. The energy is stored in a capacitor bank, which is charged by a two-stage low-to-high-voltage converter. The stored energy is then released in a very short time through a high-voltage thyristor into a spark gap. The whole appliance is powered from a li-ion battery and controlled by an ATmega microcontroller. It is possible to construct a larger and more powerful device. In this contribution the structure of the device with technical specifications is presented. As part of the investigation a prototype was built and a series of experiments conducted. System parameters were measured, and on this basis the specifications of elements for the final device were chosen. The first stage of the project was successful: it was possible to efficiently generate seismic waves with the constructed device. A field test was then conducted. The spark gap was placed in a shallow borehole (0.5 m) filled with salt water. Geophones were placed on the ground in a straight line. A comparison of the signal registered with a hammer source and the sparker source was made. The results of the test measurements are presented and discussed. Analysis of the collected data shows that the characteristics of the generated seismic signal are very promising, confirming the possibility of practical application of the new high-voltage generator. The biggest advantage of the presented device, after its signal characteristics, is its size, 0.5 x 0.25 x 0.2 m, and its weight of approximately 7 kg. These features, together with the small li-ion battery, make

  7. Ground motion input in seismic evaluation studies

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, R.T.; Wu, S.C.

    1996-07-01

    This report documents research pertaining to conservatism and variability in seismic risk estimates. Specifically, it examines whether or not artificial motions produce unrealistic evaluation demands, i.e., demands significantly inconsistent with those expected from real earthquake motions. To study these issues, two types of artificial motions are considered: (a) motions with smooth response spectra, and (b) motions with realistic variations in spectral amplitude across vibration frequency. For both types of artificial motion, time histories are generated to match target spectral shapes. For comparison, empirical motions representative of those that might result from strong earthquakes in the Eastern U.S. are also considered. The study findings suggest that artificial motions resulting from typical simulation approaches (aimed at matching a given target spectrum) are generally adequate and appropriate in representing the peak-response demands that may be induced in linear structures and equipment responding to real earthquake motions. Also, given similar input Fourier energies at high-frequencies, levels of input Fourier energy at low frequencies observed for artificial motions are substantially similar to those levels noted in real earthquake motions. In addition, the study reveals specific problems resulting from the application of Western U.S. type motions for seismic evaluation of Eastern U.S. nuclear power plants.
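
Matching a target spectral shape presupposes computing response spectra; a minimal single-degree-of-freedom peak-response sketch (5% damping, semi-implicit Euler integration; the harmonic input is illustrative):

```python
import numpy as np

def sdof_peak_response(accel, dt, period, zeta=0.05):
    """Peak relative displacement of a damped single-degree-of-freedom
    oscillator driven by ground acceleration `accel`, integrating
    u'' + 2*zeta*wn*u' + wn^2*u = -a_g with semi-implicit Euler."""
    wn = 2.0 * np.pi / period
    u = v = umax = 0.0
    for ag in accel:
        a = -ag - 2.0 * zeta * wn * v - wn**2 * u
        v += a * dt
        u += v * dt
        umax = max(umax, abs(u))
    return umax

# A harmonic "ground motion" with 1 s period excites the 1 s oscillator
# far more than a stiffer (0.3 s) one -- the essence of a response spectrum.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
accel = np.sin(2.0 * np.pi * t / 1.0)
sd_resonant = sdof_peak_response(accel, dt, period=1.0)
sd_stiff = sdof_peak_response(accel, dt, period=0.3)
```

Evaluating `sdof_peak_response` over a grid of periods yields the displacement response spectrum against which artificial motions are matched.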

  8. Dynamic Bayesian filtering for real-time seismic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Blough, D.K.; Rohay, A.C.; Anderson, K.K.; Nicholson, W.L.

    1994-04-01

    State space modeling, which includes techniques such as the Kalman filter, has been used to analyze many non-stationary time series. The ability of these dynamic models to adapt and track changes in the underlying process makes them attractive for application to the real-time analysis of three-component seismic waveforms. The authors are investigating the application of state space models, formulated as Bayesian time series models, to phase detection, polarization, and spectrogram estimation of seismograms. This approach removes the need to specify data windows in the time series for time-averaging estimation (e.g., spectrum estimation). They are using this model to isolate particular seismic phases based on polarization parameters that are determined at a spectrum of frequencies. They plan to use polarization parameters, frequency spectra, and magnitudes to discriminate between different types of seismic sources. They present the application of this technique to artificial time series and to several real seismic events, including the Non-Proliferation Experiment (NPE), two nuclear tests, and three earthquakes from the Nevada Test Site, as recorded on several regional broadband seismic stations. A preliminary result of this analysis indicates that earthquakes and explosions can potentially be discriminated on the basis of the polarization characteristics of scattered seismic phases. However, the chemical (NPE) and nuclear explosions appear to have very similar polarization characteristics.
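
The window-free adaptivity of such state-space filters is easiest to see in the scalar case; a local-level Kalman filter sketch (the model and noise variances are illustrative, not the authors' multivariate formulation):

```python
import numpy as np

def kalman_local_level(y, q=1e-2, r=1.0):
    """Local-level Gaussian state-space model: x_t = x_{t-1} + w (var q),
    y_t = x_t + e (var r). Returns the filtered mean, which tracks level
    changes without any explicit data window."""
    xhat, p = 0.0, 1.0
    out = np.empty(len(y))
    for i, obs in enumerate(y):
        p += q                       # predict: state variance grows
        k = p / (p + r)              # Kalman gain
        xhat += k * (obs - xhat)     # update with the innovation
        p *= (1.0 - k)               # posterior variance
        out[i] = xhat
    return out

# An abrupt level change, as a stand-in for a nonstationary seismic feature:
y = np.r_[np.zeros(200), np.ones(200)]
xf = kalman_local_level(y)
```

The filter locks onto the new level within a few dozen samples, the behaviour that motivates replacing fixed averaging windows with dynamic models.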

  9. Bessel smoothing filter for spectral-element mesh

    Science.gov (United States)

    Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.

    2017-06-01

    Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. These examples illustrate well the
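
The core idea of applying the smoother through its inverse, i.e. solving an elliptic PDE, can be sketched in one dimension with a matrix-free conjugate gradient (uniform grid and Neumann ends assumed; a simplified analogue, not the spectral-element implementation):

```python
import numpy as np

def bessel_like_smooth(f, L, h=1.0, tol=1e-20):
    """Smooth f by solving (I - L^2 d2/dx2) u = f with conjugate gradient,
    i.e. applying the *inverse* of the smoothing filter as a PDE solve.
    L is the coherent length, h the grid spacing; Neumann boundaries."""
    n = len(f)
    c = (L / h) ** 2
    def A(u):                        # matrix-free operator u - c * Lap(u)
        d2 = np.empty_like(u)
        d2[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        d2[0] = u[1] - u[0]          # zero-gradient ends
        d2[-1] = u[-2] - u[-1]
        return u - c * d2
    u = np.zeros_like(f, dtype=float)
    r = f - A(u)
    p = r.copy()
    rs = r @ r
    for _ in range(10 * n):          # plain conjugate gradient iteration
        Ap = A(p)
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

f = np.zeros(101)
f[50] = 1.0                          # impulse to be smoothed
u = bessel_like_smooth(f, L=5.0)
```

The solve spreads the impulse over roughly the coherent length L while preserving its integral, which is what makes the filter suitable for gradient smoothing in inversion.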

  10. Crk-associated substrate, vascular smooth muscle and hypertension

    Institute of Scientific and Technical Information of China (English)

    Dale D. TANG

    2008-01-01

    Hypertension is characterized by vascular smooth muscle constriction and vascular remodeling involving cell migration, hypertrophy and growth. Crk-associated substrate (CAS), the first discovered member of the adapter protein CAS family, has been shown to be a critical cellular component that regulates various smooth muscle functions. In this review, the molecular structure and protein interactions of the CAS family members are summarized. Evidence for the role of CAS in the regulation of vascular smooth muscle contractility is presented. Contraction stimulation induces CAS phosphorylation on Tyr-410 in arterial smooth muscle, creating the binding site for the Src homology (SH) 2/SH3 protein CrkII, which activates neuronal Wiskott-Aldrich syndrome protein (N-WASP)-mediated actin assembly and force development. The functions of CAS in cell migration, hypertrophy and growth are also summarized. Abelson tyrosine kinase (Abl), c-Src, focal adhesion kinase (FAK), proline-rich tyrosine kinase 2 (PYK2), protein tyrosine phosphatase-proline, glutamate, serine and threonine sequence protein (PTP-PEST) and SHP-2 have been documented to coordinate the phosphorylation and dephosphorylation of CAS. The downstream signaling partners of CAS in the context of cell motility, hypertrophy, survival and growth are also discussed. These new findings establish the important role of CAS in the modulation of vascular smooth muscle functions. Furthermore, the upstream regulators of CAS may be new biologic targets for the development of more effective and specific treatment of cardiovascular diseases such as hypertension.

  11. Is there seismic attenuation in the mantle?

    Science.gov (United States)

    Ricard, Y.; Durand, S.; Montagner, J.-P.; Chambat, F.

    2014-02-01

    The small scale heterogeneity of the mantle is mostly due to the mixing of petrological heterogeneities by a smooth but chaotic convection and should consist of a laminated structure (marble cake) with a power spectrum S(k) varying as 1/k, where k is the wavenumber of the anomalies. This distribution of heterogeneities during convective stirring with negligible diffusion, called the Batchelor regime, is documented by fluid dynamic experiments and corresponds to what can be inferred from geochemistry and seismic tomography. This laminated structure imposes density, seismic velocity and potentially, anisotropic heterogeneities with similar 1/k spectra. A seismic wave of wavenumber k0 crossing such a medium is partly reflected by the heterogeneities and we show that the scattered energy is proportional to k0 S(2k0). The reduction of energy for the propagating wave appears therefore equivalent to a quality factor 1/Q ∝ k0 S(2k0). With the specific 1/k spectrum of the mantle, the resulting apparent attenuation should therefore be frequency independent. We show that the total contribution of 6-9% RMS density, velocity and anisotropy would explain the observed S and P attenuation of the mantle. Although these values are large, they are not unreasonable and we discuss how they depend on the range of frequencies over which the attenuation is explained. If such a level of heterogeneity were present, most of the attenuation of the Earth would be due to small scale scattering by laminations, not by intrinsic dissipation. Intrinsic dissipation must certainly exist but might correspond to a larger, yet unobserved Q. This provocative result would explain the very weak frequency dependence of the attenuation, and the fact that bulk attenuation seems negligible, two observations that have been difficult to explain for 50 years.
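
The frequency independence follows directly from the spectrum's shape, as a two-line numeric check confirms (the amplitude A and the constant of proportionality are arbitrary):

```python
import numpy as np

# Apparent scattering attenuation: 1/Q is proportional to k0 * S(2*k0).
A = 0.05                                  # arbitrary spectral amplitude
k0 = np.logspace(-3.0, 2.0, 6)            # wavenumbers over five decades
inv_Q_batchelor = k0 * (A / (2.0 * k0))   # S(k) = A/k  ->  constant A/2
inv_Q_white = k0 * A                      # flat spectrum -> grows with k0
```

Only the Batchelor (1/k) spectrum cancels the k0 dependence; any other spectral slope would leave a measurable frequency dependence in Q.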

  12. Sizes of mantle heterogeneities and seismic attenuation

    Science.gov (United States)

    Ricard, Y. R.; durand, S.; Chambat, F.; Montagner, J.

    2013-12-01

    The small scale heterogeneity of the mantle, being mostly due to the mixing of petrological heterogeneities by a smooth but chaotic convection, should consist of a laminated structure (marble cake) with a power spectrum S(k) varying as 1/k, where k is the wavenumber of the anomalies. This distribution of heterogeneities during convective stirring with negligible diffusion, called the Batchelor regime, is documented by fluid dynamic experiments and corresponds to what can be inferred from geochemistry and seismic tomography. This laminated structure imposes density, seismic velocity and potentially, anisotropic heterogeneities with similar 1/k spectra. We show that a seismic wave of wavenumber k_0 crossing such a medium is partly reflected by the heterogeneities, and the scattered energy is found to be proportional to k_0 S(2k_0). The reduction of energy for the propagating wave appears therefore equivalent to a quality factor 1/Q proportional to k_0 S(2k_0). With the specific 1/k spectrum of the mantle, the resulting apparent attenuation should therefore be frequency independent. We show that the total contribution of 6-9% RMS density, velocity and anisotropy would explain the observed S and P attenuation of the mantle. Although these values are large, they are not unreasonable, and we discuss how they are likely overestimated. In this case, most of the attenuation of the Earth would be due to small scale scattering by laminations, not by intrinsic dissipation. Intrinsic dissipation must certainly exist but might correspond to a larger, yet unobserved Q. This provocative result would explain the observed very weak frequency dependence of the attenuation, and the fact that bulk attenuation seems negligible, two observations that have been difficult to explain for 50 years.

  13. Resolution of smooth group actions

    CERN Document Server

    Albin, Pierre

    2010-01-01

    A refined form of the 'Folk Theorem' that a smooth action by a compact Lie group can be (canonically) resolved, by iterated blow up, to have unique isotropy type is proved in the context of manifolds with corners. This procedure is shown to capture the simultaneous resolution of all isotropy types in a 'resolution structure' consisting of equivariant iterated fibrations of the boundary faces. This structure projects to give a similar resolution structure for the quotient. In particular these results apply to give a canonical resolution of the radial compactification, to a ball, of any finite dimensional representation of a compact Lie group; such resolutions of the normal action of the isotropy groups appear in the boundary fibers in the general case.

  14. Smooth ergodic theory for endomorphisms

    CERN Document Server

    Qian, Min; Zhu, Shu

    2009-01-01

    This volume presents a general smooth ergodic theory for deterministic dynamical systems generated by non-invertible endomorphisms, mainly concerning the relations between entropy, Lyapunov exponents and dimensions. The authors make extensive use of the combination of the inverse limit space technique and the techniques developed to tackle random dynamical systems. The most interesting results in this book are (1) the equivalence between the SRB property and Pesin’s entropy formula; (2) the generalized Ledrappier-Young entropy formula; (3) exact-dimensionality for weakly hyperbolic diffeomorphisms and for expanding maps. The proof of the exact-dimensionality for weakly hyperbolic diffeomorphisms seems more accessible than that of Barreira et al. It also inspires the authors to argue to what extent the famous Eckmann-Ruelle conjecture and many other classical results for diffeomorphisms and for flows hold true. After a careful reading of the book, one can systematically learn the Pesin theory for endomorphis...

  15. Learning Smooth Pattern Transformation Manifolds

    CERN Document Server

    Vural, Elif

    2011-01-01

    Manifold models provide low-dimensional representations that are useful for processing and analyzing data in a transformation-invariant way. In this paper, we study the problem of learning smooth pattern transformation manifolds from image sets that represent observations of geometrically transformed signals. In order to construct a manifold, we build a representative pattern whose transformations accurately fit various input images. We examine two objectives of the manifold building problem, namely, approximation and classification. For the approximation problem, we propose a greedy method that constructs a representative pattern by selecting analytic atoms from a continuous dictionary manifold. We present a DC (Difference-of-Convex) optimization scheme that is applicable to a wide range of transformation and dictionary models, and demonstrate its application to transformation manifolds generated by rotation, translation and anisotropic scaling of a reference pattern. Then, we generalize this approach to a s...

  16. Reassigned time-frequency peak filtering for seismic random noise attenuation

    Science.gov (United States)

    Lin, H.; Li, Y.; Ma, H.

    2012-12-01

    Seismic noise attenuation, aimed at improving the signal-to-noise ratio (S/N), plays an important role in seismic data processing for the detailed description of oil and gas reservoirs. In particular, strong seismic random noise, which is unpredictable and incoherent in space and time, always degrades the quality of seismic exploration and is much more difficult to suppress than coherent noise, since only its statistical properties can be used. A common problem in random noise attenuation is to preserve the signal with minimal distortion. Multi-direction, multi-scale and time-varying methods are appropriate for tracking signal characteristics that vary in time. In particular, time-frequency based methods can better recover the local characteristics of the non-stationary seismic signal, which is important for producing a satisfactory random noise attenuation result. Time-frequency peak filtering (TFPF), which has already proved to be a powerful tool for attenuating Gaussian random noise in linear signals, can be an alternative tool for seismic random noise attenuation. However, seismic noise may have an asymmetric Wigner-Ville spectrum (WVS) and the seismic signal is nonlinear in time, which can induce amplitude attenuation and residual random noise in the results. This work reports preliminary results from an improved TFPF method intended to obtain a more accurate estimate of the seismic signal by increasing the signal concentration of the time-frequency distribution (TFD) during TFPF. The improved reassigned TFPF (RTFPF) first encodes the seismic trace as the instantaneous frequency (IF) of an analytic signal generated by frequency modulation. The smooth pseudo Wigner-Ville distribution (SPWVD) of the encoded analytic signal is then computed. The separate frequency window of the SPWVD helps to smooth away the random oscillations introduced by the WVS of seismic noise and the nonlinear signal component in the pseudo Wigner
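
    The encode-then-pick-the-peak idea behind TFPF can be sketched in a few lines. This is a generic windowed-WVD peak filter, not the authors' RTFPF (the reassignment step and SPWVD smoothing windows are omitted), and the function name and parameter values are illustrative assumptions:

```python
import numpy as np

def tfpf(noisy, mu=0.2, half_win=16, nfft=128):
    """Minimal time-frequency peak filter (windowed pseudo-WVD sketch).

    The noisy trace is encoded as the instantaneous frequency of a
    unit-amplitude analytic signal; the peak frequency of a windowed
    Wigner-Ville distribution then estimates the underlying signal.
    """
    n = len(noisy)
    lo, hi = float(noisy.min()), float(noisy.max())
    s = (noisy - lo) / (hi - lo)                  # scale into [0, 1] so the
    z = np.exp(2j * np.pi * mu * np.cumsum(s))    # encoded IF stays unaliased
    taus = np.arange(-half_win, half_win + 1)
    window = np.hanning(len(taus))
    est = np.empty(n)
    for t in range(n):
        k1 = np.clip(t + taus, 0, n - 1)
        k2 = np.clip(t - taus, 0, n - 1)
        kernel = window * z[k1] * np.conj(z[k2])  # WVD kernel at time t
        spec = np.abs(np.fft.fft(kernel, nfft))[: nfft // 2]
        f_peak = np.argmax(spec) / nfft           # cycles/sample ~= 2*mu*s(t)
        est[t] = f_peak / (2.0 * mu)
    return est * (hi - lo) + lo
```

    Because the WVD kernel averages the encoded phase over the window, zero-mean noise in the instantaneous frequency largely cancels, which is the mechanism the abstract relies on.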

  17. Smooth halos in the cosmic web

    CERN Document Server

    Gaite, Jose

    2014-01-01

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of equality previously used in economics and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in $N$-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii.

  18. Smooth GERBS, orthogonal systems and energy minimization

    Science.gov (United States)

    Dechevsky, Lubomir T.; Zanaty, Peter

    2013-12-01

    New results are obtained in three mutually related directions of the rapidly developing theory of generalized expo-rational B-splines (GERBS) [7, 6]: closed-form computability of C∞-smooth GERBS in terms of elementary and special functions, Hermite interpolation and least-squares best approximation via smooth GERBS, energy minimizing properties of smooth GERBS similar to those of the classical cubic polynomial B-splines.

  19. Smooth Crossed Products of Rieffel's Deformations

    Science.gov (United States)

    Neshveyev, Sergey

    2014-03-01

    Assume A is a Fréchet algebra equipped with a smooth isometric action of a vector group V, and consider Rieffel's deformation A_J of A. We construct an explicit isomorphism between the smooth crossed products V ⋉ A_J and V ⋉ A. When combined with the Elliott-Natsume-Nest isomorphism, this immediately implies that the periodic cyclic cohomology is invariant under deformation. Specializing to the case of smooth subalgebras of C*-algebras, we also get a simple proof of equivalence of Rieffel's and Kasprzak's approaches to deformation.

  20. Airway Epithelium Stimulates Smooth Muscle Proliferation

    OpenAIRE

    Malavia, Nikita K.; Raub, Christopher B.; Mahon, Sari B.; Brenner, Matthew; Reynold A Panettieri; George, Steven C.

    2009-01-01

    Communication between the airway epithelium and stroma is evident during embryogenesis, and both epithelial shedding and increased smooth muscle proliferation are features of airway remodeling. Hence, we hypothesized that after injury the airway epithelium could modulate airway smooth muscle proliferation. Fully differentiated primary normal human bronchial epithelial (NHBE) cells at an air–liquid interface were co-cultured with serum-deprived normal primary human airway smooth muscle cells (...

  1. Properties of the extremal infinite smooth words

    Directory of Open Access Journals (Sweden)

    Srecko Brlek

    2007-05-01

    Full Text Available Smooth words are connected to the Kolakoski sequence. We construct the maximal and the minimal infinite smooth words, with respect to the lexicographical order. The naive algorithm generating them is improved by using a reduction of the De Bruijn graph of their factors. We also study their Lyndon factorizations. Finally, we show that the minimal smooth word over the alphabet {1,3} belongs to the orbit of the Fibonacci word.
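
    The connection to the Kolakoski sequence can be made concrete. A minimal sketch (the function name is ours) generating the classical self-reading Kolakoski sequence over the alphabet {1, 2}:

```python
def kolakoski(n):
    """First n terms of the Kolakoski sequence over {1, 2}.

    The sequence equals its own run-length encoding: the i-th term
    gives the length of the i-th run of alternating 1s and 2s.
    """
    seq = [1, 2, 2]
    i = 2  # index of the term dictating the next run length
    while len(seq) < n:
        next_symbol = 1 if seq[-1] == 2 else 2
        seq.extend([next_symbol] * seq[i])
        i += 1
    return seq[:n]

# The first terms are 1, 2, 2, 1, 1, 2, 1, 2, 2, 1, ...
```

    Smooth words generalize this self-reading construction to other alphabets, which is what makes a lexicographic extremal analysis possible.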

  2. Comparison of seismic sources for shallow seismic: sledgehammer and pyrotechnics

    Directory of Open Access Journals (Sweden)

    Brom Aleksander

    2015-10-01

    Full Text Available Pyrotechnic materials are a type of explosive material that produces thermal, luminous or sound effects, gas, smoke, and combinations of these as the result of a self-sustaining chemical reaction. Pyrotechnics can therefore be used as a seismic source designed to release accumulated energy in the form of a seismic wave recorded by tremor sensors (geophones) after its passage through the rock mass. The aim of this paper was to determine the utility of pyrotechnics for shallow engineering seismics. The work compares conventional seismic wave excitation for the seismic refraction method, such as a plate and hammer, with firecrackers activated on the surface. The energy released and the frequency spectra were compared for the two types of sources. The results did not determine which source gave the better results, but they revealed very interesting aspects of using pyrotechnics in seismic measurements, for example the use of pyrotechnic materials in MASW.
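
    A frequency-spectrum comparison of two sources of the kind described above can be sketched as follows; the Ricker wavelets and all parameter values are our illustrative stand-ins for recorded source signatures, not the paper's data:

```python
import numpy as np

def ricker(t, f0):
    """Zero-phase Ricker wavelet with peak spectral frequency f0 (Hz)."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def dominant_frequency(signal, dt):
    """Frequency (Hz) at the maximum of the amplitude spectrum."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), dt)
    return freqs[np.argmax(spec)]

dt = 1.0 / 4000.0                      # 4 kHz sampling
t = np.arange(-0.25, 0.25, dt)
impulsive = ricker(t, 200.0)           # short, broadband (firecracker-like)
hammer = ricker(t, 30.0)               # longer, low-frequency (plate/hammer-like)
```

    Comparing `dominant_frequency` of the two wavelets quantifies the spectral difference that the refraction survey in the abstract measures in the field.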

  3. Ross Ice Shelf Seismic Survey and Future Drilling Recommendation

    Science.gov (United States)

    van Haastrecht, Laurine; Ohneiser, Christian; Gorman, Andrew; Hulbe, Christina

    2016-04-01

    The Ross Ice Shelf (RIS) is one of three gateways through which change in the ocean can be propagated into the interior of West Antarctica. Both the geologic record and ice sheet models indicate that it has experienced widespread retreat under past warm climates. But inland of the continental shelf, there are limited data available to validate the models. Understanding what controls the rate at which the ice shelf will respond to future climate change is central to making useful climate projections. Determining the retreat rate at the end of the last glacial maximum is one part of this challenge. In November 2015, four lines of multi-channel seismic data, totalling over 45 km, were collected on the Ross Ice Shelf, approximately 300 km south of Ross Island, using a thumper seismic source and a 96-channel snow streamer. The seismic survey was undertaken under the New Zealand Antarctic Research Institute (NZARI) funded Aotearoa New Zealand Ross Ice Shelf Programme to resolve bathymetric details and to image sea floor sediments under a proposed drilling site on the ice shelf, at about 80.7°S and 174°E. The thumper, a purpose-built, trailer mounted, weight-drop seismic source, was towed behind a Hägglund tracked vehicle to image the bathymetry and sediments underneath the RIS. Seismic data collection on an ice shelf has unique challenges, in particular strong attenuation of the seismic energy by snow and firn, and complex multiple ray paths. The thumper, which consists of a heavy weight (250 kg) that is dropped on a large, ski mounted steel plate, produced a consistent, repeatable, higher energy signal when compared to a sledgehammer source, and allowed for greater geographic coverage and lower environmental impact than an explosive source survey. Our survey revealed that the seafloor is smooth and that there may be up to 100 m of layered sediments beneath the seafloor and possibly deeper, more complex structures.
A multiple generated by internally reflected seismic energy

  4. Smoothing techniques for macromolecular global optimization

    Energy Technology Data Exchange (ETDEWEB)

    More, J.J.; Wu, Zhijun

    1995-09-01

    We study global optimization problems that arise in macromolecular modeling, and the solution of these problems via continuation and smoothing. Our results unify and extend the theory associated with the use of the Gaussian transform for smoothing. We show that the Gaussian transform can be viewed as a special case of a generalized transform and that these generalized transforms share many of the properties of the Gaussian transform. We also show that the smoothing behavior of the generalized transform can be studied in terms of the Fourier transform, and these results indicate that the Gaussian transform has superior smoothing properties.
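
    As a concrete 1-D illustration of smoothing by the Gaussian transform, ⟨f⟩_λ(x) = (λ√π)⁻¹ ∫ f(y) exp(−(x−y)²/λ²) dy, a direct-quadrature sketch (names and parameter choices are ours) showing that smoothing suppresses spurious local minima:

```python
import numpy as np

def gaussian_transform(f_vals, x, lam):
    """Gaussian transform <f>_lam of f sampled on grid x, by quadrature."""
    dx = x[1] - x[0]
    diff = x[:, None] - x[None, :]
    kernel = np.exp(-(diff / lam) ** 2) / (lam * np.sqrt(np.pi))
    return kernel @ f_vals * dx

def count_local_minima(v):
    """Number of strict interior local minima of a sampled function."""
    return int(np.sum((v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])))
```

    For an oscillatory objective such as sin(5x) + 0.1x², the transform damps the sin component by exp(−k²λ²/4), leaving essentially the single minimum of the parabola, which is exactly the continuation behavior the paper analyzes.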

  5. Geometry Transition in the Cocos Plate, from Flat-Steep to Constant Dip: Smooth or Abrupt?

    Science.gov (United States)

    Perez-Campos, X.; Clayton, R. W.; Brudzinski, M. R.; Valdés-González, C. M.; Cabral-Cano, E.; Arciniega-Ceballos, A.; Córdoba-Montiel, F.

    2013-05-01

    Subduction of the Cocos Plate beneath North America has a variable and complex behavior along the Middle-American Trench. Initially, its geometry was delineated from regional seismicity. In the last 10 years, seismic experiments have illuminated some details of the geometry. They have reported, from NW to SE, an abrupt dip transition from 50° to 26° as the result of a tear that splits Cocos North from Cocos South; then there is a smooth transition to a horizontal geometry under central Mexico. Further southeast, under the Isthmus of Tehuantepec, the Cocos plate shows a constant ~26° subduction dip. This last transition has been assumed to be smooth based on the sparse seismicity in the region. A first glimpse of the slab geometry under Oaxaca shows that the slab continues to be flat at least until 97.5°W longitude, where it suddenly changes to a ~55° dip to the northeast. This occurs at a distance of ~75 km from the Pico de Orizaba volcano, similar to the distance of the active Popocatepetl volcano from the place where the slab dives into the mantle along the Meso-American Subduction Experiment line in central Mexico. East of this region, receiver function images show an abrupt change in the geometry and length of the slab.

  6. REGULATION OF SEISMIC LOADS ON BUILDINGS BY SEISMIC DEVICES

    Directory of Open Access Journals (Sweden)

    Kh. N. Mazhiev

    2013-01-01

    Full Text Available The regulation of seismic loads on structures is considered, using kinematic supports of high-strength concrete with impregnated coarse aggregate together with Belleville-type seismic isolation bearings. Results of experimental studies related to obtaining the new coarse aggregate and to the construction of the seismic isolation bearings are presented. The interaction of forces in the hemispherical supports during vibration is also addressed.

  7. Stutter seismic source

    Energy Technology Data Exchange (ETDEWEB)

    Gumma, W. H.; Hughes, D. R.; Zimmerman, N. S.

    1980-08-12

    An improved seismic prospecting system comprising the use of a closely spaced sequence of source initiations at essentially the same location to provide shorter objective-level wavelets than are obtainable with a single pulse. In a preferred form, three dynamite charges are detonated in the same or three closely spaced shot holes to generate a downward traveling wavelet having increased high frequency content and reduced content at a peak frequency determined by initial testing.

  8. Principle and Program of Evaluating Diffuse Seismicity

    Institute of Scientific and Technical Information of China (English)

    Chang Xiangdong

    2001-01-01

    The concept and origin of the term "diffuse seismicity" are illustrated. Different viewpoints regarding diffuse seismicity, and the ways it influences the determination of the seismic design basis of engineering works, are analyzed. A principle and program for evaluating diffuse seismicity are studied and discussed on the basis of the above.

  9. Establishing seismic design criteria to achieve an acceptable seismic margin

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, R.P. [RPK Structural Mechanics Consulting, Inc., Yorba Linda, CA (United States)

    1997-01-01

    In order to develop risk-based seismic design criteria the following four issues must be addressed: (1) What target annual probability of seismically induced unacceptable performance is acceptable? (2) What minimum seismic margin is acceptable? (3) Given the decisions made under Issues 1 and 2, at what annual frequency of exceedance should the Safe Shutdown Earthquake ground motion be defined? (4) What seismic design criteria should be established to reasonably achieve the seismic margin defined under Issue 2? The first issue is purely a policy decision and is not addressed in this paper. Each of the other three issues is addressed. Issues 2 and 3 are integrally tied together, so that a very large number of possible combinations of responses to these two issues can be used to achieve the target goal defined under Issue 1. Section 2 lays out a combined approach to these two issues and presents three potentially attractive combined resolutions which reasonably achieve the target goal. The remainder of the paper discusses an approach which can be used to develop seismic design criteria aimed at achieving the desired seismic margin defined in the resolution of Issue 2. Suggestions for revising existing seismic design criteria to more consistently achieve the desired seismic margin are presented.

  10. Integrating fault and seismological data into a probabilistic seismic hazard model for Italy.

    Science.gov (United States)

    Valentini, Alessandro; Visini, Francesco; Pace, Bruno

    2017-04-01

    We present the results of a new probabilistic seismic hazard analysis (PSHA) for Italy based on active-fault and seismological data. Combining the seismic hazard from active faults with that from distributed seismic sources (where there are no data on active faults) is the backbone of this work. Far from having identified a best procedure, currently adopted approaches combine active faults and background sources by applying a threshold magnitude, generally between 5.5 and 7, above which seismicity is modelled by faults and below which it is modelled by distributed or area sources. In our PSHA we (i) apply a new method for the treatment of geological data on major active faults and (ii) propose a new approach for combining these data with historical seismicity to evaluate the PSHA for Italy. Assuming that deformation is concentrated along faults, we combine the earthquake occurrences derived from the geometry and slip rates of the active faults with the earthquakes from the spatially smoothed earthquake sources. In the vicinity of an active fault, the smoothed seismic activity is gradually reduced by a fault-size driven factor. Even if the range and gross spatial distribution of expected accelerations obtained in our work are comparable to those obtained through methods applying seismic catalogues and classical zonation models, the main difference lies in the detailed spatial pattern of our PSHA model: our model is characterized by spots of more hazardous area in correspondence with mapped active faults, while the previous models give expected accelerations almost uniformly distributed over large regions. Finally, we investigate the impact of the earthquake rates derived from two magnitude-frequency distribution (MFD) models for faults on the hazard results, and the contribution of faults versus distributed seismic activity.
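
    The combination step described above, kernel-smoothed seismicity reduced near mapped faults by a fault-size-driven factor, can be sketched as follows. The linear taper and every parameter choice here are our illustrative assumptions, not the authors' calibrated model:

```python
import numpy as np

def smoothed_rate(grid_x, grid_y, epicenters, bandwidth):
    """Isotropic Gaussian-kernel smoothed seismicity rate on a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    rate = np.zeros_like(gx)
    for ex, ey in epicenters:
        d2 = (gx - ex) ** 2 + (gy - ey) ** 2
        rate += np.exp(-d2 / (2 * bandwidth ** 2)) / (2 * np.pi * bandwidth ** 2)
    return rate

def taper_near_fault(rate, grid_x, grid_y, fault_xy, fault_length, c=0.5):
    """Reduce the smoothed rate near a fault trace.

    Hypothetical fault-size-driven factor: full reduction on the fault,
    recovering linearly out to a distance c * fault_length.
    """
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    dist = np.full_like(rate, np.inf)
    for fx, fy in fault_xy:                 # distance to nearest fault node
        dist = np.minimum(dist, np.hypot(gx - fx, gy - fy))
    factor = np.clip(dist / (c * fault_length), 0.0, 1.0)
    return rate * factor
```

    The tapered grid would then be added to the fault-derived occurrence rates, so that earthquakes near a fault are not counted twice.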

  11. Seismic basement in Poland

    Science.gov (United States)

    Grad, Marek; Polkowski, Marcin

    2016-06-01

    The area of contact between Precambrian and Phanerozoic Europe in Poland has a complicated structure of sedimentary cover and basement. The thinnest sedimentary cover, in the Mazury-Belarus anteclise, is only 0.3-1 km thick; it increases to 7-8 km along the East European Craton margin, and to 9-12 km in the Trans-European Suture Zone (TESZ). The Variscan domain is characterized by a 1- to 2-km-thick sedimentary cover, while the Carpathians are characterized by very thick sediments, up to c. 20 km. The map of the basement depth is created by combining data from geological boreholes with a set of regional seismic refraction profiles. These data do not constrain the basement depth in the central part of the TESZ or in the Carpathians. Therefore, the data set is supplemented by 32 models from deep seismic sounding profiles and by a map of a high-resistivity (low-conductivity) layer from magnetotelluric soundings, identified as the basement. Together, these data provide knowledge of the basement depth and of the P-wave seismic velocities of the crystalline and consolidated types of basement for the whole area of Poland. Finally, the differentiation of the basement depth and velocity is discussed with respect to geophysical fields and the tectonic division of the area.

  12. Frozen Gaussian approximation for three-dimensional seismic wave propagation

    Science.gov (United States)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2016-09-01

    We present a systematic introduction on applying frozen Gaussian approximation (FGA) to compute synthetic seismograms in three-dimensional earth models. In this method, the seismic wavefield is decomposed into frozen (fixed-width) Gaussian functions, which propagate along ray paths. Rather than the coherent state solution to the wave equation, this method is rigorously derived by asymptotic expansion on the phase plane, with its accuracy determined by the ratio of the short wavelength to the large domain size. Similar to other ray-based beam methods (e.g. Gaussian beam methods), one can use a relatively small number of Gaussians to get accurate approximations of the high-frequency wavefield. The algorithm is embarrassingly parallel, which can drastically speed up the computation on a multicore workstation. We illustrate the accuracy and efficiency of the method by comparing it to the spectral element method for three-dimensional (3D) seismic wave propagation in homogeneous media, where one has the analytical solution as a benchmark. As a further demonstration of the methodology, simulations of high-frequency seismic wave propagation in heterogeneous media are performed for a 3D waveguide model and a smoothed Marmousi model, respectively. The second contribution of this paper is that we incorporate Snell's law into the FGA formulation and asymptotically derive reflection, transmission and free-surface conditions for FGA to compute high-frequency seismic wave propagation in high-contrast media. We numerically test these conditions by computing traveltime kernels of different phases in the 3D crust-over-mantle model.

  13. Frozen Gaussian approximation for 3-D seismic wave propagation

    Science.gov (United States)

    Chai, Lihui; Tong, Ping; Yang, Xu

    2017-01-01

    We present a systematic introduction on applying frozen Gaussian approximation (FGA) to compute synthetic seismograms in 3-D earth models. In this method, the seismic wavefield is decomposed into frozen (fixed-width) Gaussian functions, which propagate along ray paths. Rather than the coherent state solution to the wave equation, this method is rigorously derived by asymptotic expansion on the phase plane, with its accuracy determined by the ratio of the short wavelength to the large domain size. Similar to other ray-based beam methods (e.g. Gaussian beam methods), one can use a relatively small number of Gaussians to get accurate approximations of the high-frequency wavefield. The algorithm is embarrassingly parallel, which can drastically speed up the computation on a multicore workstation. We illustrate the accuracy and efficiency of the method by comparing it to the spectral element method for a 3-D seismic wave propagation in homogeneous media, where one has the analytical solution as a benchmark. As a further demonstration of the methodology, simulations of high-frequency seismic wave propagation in heterogeneous media are performed for a 3-D waveguide model and a smoothed Marmousi model, respectively. The second contribution of this paper is that we incorporate Snell's law into the FGA formulation and asymptotically derive reflection, transmission and free-surface conditions for FGA to compute high-frequency seismic wave propagation in high-contrast media. We numerically test these conditions by computing traveltime kernels of different phases in the 3-D crust-over-mantle model.

  14. Seismic Prediction While Drilling (SPWD): Seismic exploration ahead of the drill bit using phased array sources

    Science.gov (United States)

    Jaksch, Katrin; Giese, Rüdiger; Kopf, Matthias

    2010-05-01

    When drilling for deep reservoirs, prior exploration is indispensable. In recent years the focus has shifted more towards geological structures such as thin layers or hydrothermal fault systems. Beside 2D or 3D seismics from the surface and borehole seismic measurements such as Vertical Seismic Profiling (VSP) or Seismic While Drilling (SWD), these methods cannot always resolve such structures. The resolution worsens the deeper and smaller the sought-after structures are. Thus, potential horizons such as thin layers in oil exploration, or fault zones usable for geothermal energy production, can be missed or left unidentified while drilling. A device to explore the geology with high resolution ahead of the drill bit, in the direction of drilling, would therefore be of great importance. Such a device would allow the drilling path to be adjusted to the actual geology, and would minimize the risk of discovery and hence the drilling costs. Within the SPWD project a device for seismic exploration ahead of the drill bit is being developed. This device should allow seismic prediction of areas about 50 to 100 meters ahead of the drill bit with a resolution of one meter. At the GFZ a first prototype consisting of different units for seismic sources, receivers and data loggers has been designed and manufactured. Four standard magnetostrictive actuators serve as seismic sources and four 3-component geophones as receivers. Every unit, actuator or geophone, can be rotated in steps of 15° around the longitudinal axis of the prototype to test different measurement configurations. The SPWD prototype emits signal frequencies of about 500 up to 5000 Hz, significantly higher than in VSP and SWD. An increased radiation of seismic wave energy in the direction of the borehole axis allows imaging of the areas to be drilled. Therefore, every actuator must be controlled independently of the others with regard to the amplitude and phase of the source signal to

  15. Seismic-hazard maps for the conterminous United States, 2014

    Science.gov (United States)

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter M.; Mueller, Charles S.; Haller, Kathleen M.; Frankel, Arthur D.; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen C.; Boyd, Oliver S.; Field, Edward H.; Chen, Rui; Luco, Nicolas; Wheeler, Russell L.; Williams, Robert A.; Olsen, Anna H.; Rukstales, Kenneth S.

    2015-01-01

    The maps presented here provide an update to the 2008 data contained in U.S. Geological Survey Scientific Investigations Map 3195 (http://pubs.usgs.gov/sim/3195/). Probabilistic seismic-hazard maps were prepared for the conterminous United States for 2014, portraying peak horizontal acceleration and horizontal spectral response acceleration for 0.2- and 1.0-second periods with probabilities of exceedance of 10 percent in 50 years and 2 percent in 50 years. All of the maps were prepared by combining the hazard derived from spatially smoothed historical seismicity with the hazard from fault-specific sources. The acceleration values contoured are for the random horizontal component. The reference site condition is firm rock, defined as having an average shear-wave velocity of 760 m/s in the top 30 meters, corresponding to the boundary between NEHRP (National Earthquake Hazards Reduction Program) site classes B and C.
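
    Under the Poisson occurrence assumption used in such maps, an exceedance probability over an exposure time maps directly to a return period via P = 1 − exp(−T/Tr). A small sketch of that standard conversion (the function name is ours):

```python
import math

def return_period(p_exceed, t_years):
    """Return period (years) for exceedance probability p over t years,
    assuming Poissonian occurrence: p = 1 - exp(-t / Tr)."""
    return -t_years / math.log(1.0 - p_exceed)

# 10% in 50 years -> ~475-year return period;
# 2% in 50 years  -> ~2475-year return period.
```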

  16. Seismic hazard assessment of Iran

    Directory of Open Access Journals (Sweden)

    M. Ghafory-Ashtiany

    1999-06-01

    Full Text Available The development of the new seismic hazard map of Iran is based on probabilistic seismic hazard computation using historical earthquake data, geology, tectonics, fault activity and seismic source models in Iran. These maps have been prepared to indicate the earthquake hazard of Iran in the form of iso-acceleration contour lines and seismic hazard zoning, using current probabilistic procedures. They display the probabilistic estimates of Peak Ground Acceleration (PGA) for return periods of 75 and 475 years. The maps have been divided into intervals of 0.25 degrees in both the latitudinal and longitudinal directions to calculate the peak ground acceleration values at each grid point and draw the seismic hazard curves. The results presented in this study will provide the basis for the preparation of seismic risk maps, the estimation of earthquake insurance premiums, and the preliminary site evaluation of critical facilities.

  17. Seismic Imager Space Telescope

    Science.gov (United States)

    Sidick, Erkin; Coste, Keith; Cunningham, J.; Sievers,Michael W.; Agnes, Gregory S.; Polanco, Otto R.; Green, Joseph J.; Cameron, Bruce A.; Redding, David C.; Avouac, Jean Philippe; Ampuero, Jean Paul; Leprince, Sebastien; Michel, Remi

    2012-01-01

    A concept has been developed for a geostationary seismic imager (GSI), a space telescope in geostationary orbit above the Pacific coast of the Americas that would provide movies of many large earthquakes occurring in the area from Southern Chile to Southern Alaska. The GSI movies would cover a field of view as long as 300 km, at a spatial resolution of 3 to 15 m and a temporal resolution of 1 to 2 Hz, which is sufficient for accurate measurement of surface displacements and photometric changes induced by seismic waves. Computer processing of the movie images would exploit these dynamic changes to accurately measure the rapidly evolving surface waves and surface ruptures as they happen. These measurements would provide key information to advance the understanding of the mechanisms governing earthquake ruptures, and the propagation and arrest of damaging seismic waves. GSI operational strategy is to react to earthquakes detected by ground seismometers, slewing the satellite to point at the epicenters of earthquakes above a certain magnitude. Some of these earthquakes will be foreshocks of larger earthquakes; these will be observed, as the spacecraft would have been pointed in the right direction. This strategy was tested against the historical record for the Pacific coast of the Americas, from 1973 until the present. Based on the seismicity recorded during this time period, a GSI mission with a lifetime of 10 years could have been in position to observe at least 13 (22 on average) earthquakes of magnitude larger than 6, and at least one (2 on average) earthquake of magnitude larger than 7. A GSI would provide data unprecedented in its extent and temporal and spatial resolution. It would provide this data for some of the world's most seismically active regions, and do so better and at a lower cost than could be done with ground-based instrumentation. A GSI would revolutionize the understanding of earthquake dynamics, perhaps leading ultimately to effective warning

  18. Seismic capacity of a reinforced concrete frame structure without seismic detailing and limited ductility seismic design in moderate seismicity

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. K.; Kim, I. H. [Seoul National Univ., Seoul (Korea, Republic of)

    1999-10-01

    A four-story reinforced concrete frame building model is designed for gravity loads only. Static nonlinear pushover analyses are performed in two orthogonal horizontal directions. The overall capacity curves are converted into ADRS spectra and compared with demand spectra. At several points the deformed shape and the moment and shear distributions are calculated. Based on these results, a limited-ductility seismic design concept is proposed as an alternative seismic design approach in regions of moderate seismicity.

  19. New Codes for Ambient Seismic Noise Analysis

    Science.gov (United States)

    Duret, F.; Mooney, W. D.; Detweiler, S.

    2007-12-01

    In order to determine a velocity model of the crust, scientists generally use earthquakes recorded by seismic stations. However, earthquakes do not occur continuously, and most are too weak to be useful. When no event is recorded, a waveform is generally considered to be noise. This noise, however, is not useless and carries a wealth of information. Thus, ambient seismic noise analysis is an inverse method for investigating the Earth's interior. Until recently, this technique was quite difficult to apply, as it requires significant computing capacity. In early 2007, however, a team led by Gregory Benson and Mike Ritzwoller from UC Boulder published a paper describing a new method for extracting group and phase velocities from those waveforms. The analysis, which recovers Green's functions between pairs of stations, is composed of four steps: 1) single-station data preparation, 2) cross-correlation and stacking, 3) quality control and data selection, and 4) dispersion measurements. At the USGS, we developed a set of ready-to-use computing codes for analyzing waveforms to run the ambient noise analysis of Benson et al. (2007). Our main contribution to the analysis technique was to fully automate the process. The computation codes were written in Fortran 90 and the automation scripts were written in Perl. Furthermore, some operations were run with SAC. Our choices of programming language offer an opportunity to adapt our codes to the major platforms. The codes were developed under Linux but are meant to be adapted to Mac OS X and Windows platforms. The codes have been tested on Southern California data and our results compare well with those from the UC Boulder team. Next, we plan to apply our codes to Indonesian data, so that we might take advantage of newly upgraded seismic stations in that region.
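
    Step 2 of the workflow (cross-correlation and stacking) can be sketched as follows. The per-segment z-score normalization here is a simple stand-in for the spectral whitening and time-domain normalization of the published method, and all names and parameters are illustrative:

```python
import numpy as np

def stacked_xcorr(sta_a, sta_b, seg_len, max_lag):
    """Cross-correlate fixed-length segments of two station records and
    stack them; a coherent peak emerges at the inter-station delay."""
    n_seg = min(len(sta_a), len(sta_b)) // seg_len
    stack = np.zeros(2 * max_lag + 1)
    for k in range(n_seg):
        a = sta_a[k * seg_len:(k + 1) * seg_len]
        b = sta_b[k * seg_len:(k + 1) * seg_len]
        a = (a - a.mean()) / (a.std() + 1e-12)   # crude amplitude normalization
        b = (b - b.mean()) / (b.std() + 1e-12)
        full = np.correlate(a, b, mode="full")
        mid = seg_len - 1                        # index of zero lag
        stack += full[mid - max_lag: mid + max_lag + 1]
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, stack / n_seg
```

    Stacking many segments is what makes the incoherent noise cancel while the Green's-function arrival builds up, which is why long continuous records are needed.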

  20. Smooth horizons and quantum ripples

    CERN Document Server

    Golovnev, Alexey

    2014-01-01

    Black Holes are unique objects which allow for meaningful theoretical studies of strong gravity and even quantum gravity effects. An infalling and a distant observer would have very different views on the structure of the world. However, a careful analysis has shown that it entails no genuine contradictions for physics, and the paradigm of observer complementarity has been coined. Recently this picture was put into doubt. In particular, it was argued that in old Black Holes a firewall must form in order to protect the basic principles of quantum mechanics. This AMPS paradox has already been discussed in a vast number of papers with different attitudes and conclusions. Here we want to argue that a possible source of confusion is the neglect of quantum gravity effects. Contrary to widespread perception, it does not necessarily mean that effective field theory is inapplicable in rather smooth neighbourhoods of large Black Hole horizons. The real offender might be an attempt to consistently use it over the huge di...

  1. Local Transfer Coefficient, Smooth Channel

    Directory of Open Access Journals (Sweden)

    R. T. Kukreja

    1998-01-01

    Full Text Available Naphthalene sublimation technique and the heat/mass transfer analogy are used to determine the detailed local heat/mass transfer distributions on the leading and trailing walls of a two-pass square channel with smooth walls that rotates about a perpendicular axis. Since the variation of density is small in the flow through the channel, buoyancy effect is negligible. Results show that, in both the stationary and rotating channel cases, very large spanwise variations of the mass transfer exist in the turn and in the region immediately downstream of the turn in the second straight pass. In the first straight pass, the rotation-induced Coriolis forces reduce the mass transfer on the leading wall and increase the mass transfer on the trailing wall. In the turn, rotation significantly increases the mass transfer on the leading wall, especially in the upstream half of the turn. Rotation also increases the mass transfer on the trailing wall, more in the downstream half of the turn than in the upstream half of the turn. Immediately downstream of the turn, rotation causes the mass transfer to be much higher on the trailing wall near the downstream corner of the tip of the inner wall than on the opposite leading wall. The mass transfer in the second pass is higher on the leading wall than on the trailing wall. A slower flow causes higher mass transfer enhancement in the turn on both the leading and trailing walls.

  2. Smooth horizons and quantum ripples

    Energy Technology Data Exchange (ETDEWEB)

    Golovnev, Alexey [Saint Petersburg State University, High Energy Physics Department, Saint-Petersburg (Russian Federation)

    2015-05-15

    Black holes are unique objects which allow for meaningful theoretical studies of strong gravity and even quantum gravity effects. An infalling and a distant observer would have very different views on the structure of the world. However, a careful analysis has shown that it entails no genuine contradictions for physics, and the paradigm of observer complementarity has been coined. Recently this picture was put into doubt. In particular, it was argued that in old black holes a firewall must form in order to protect the basic principles of quantum mechanics. This AMPS paradox has already been discussed in a vast number of papers with different attitudes and conclusions. Here we want to argue that a possible source of confusion is the neglect of quantum gravity effects. Contrary to widespread perception, it does not necessarily mean that effective field theory is inapplicable in rather smooth neighbourhoods of large black hole horizons. The real offender might be an attempt to consistently use it over the huge distances from the near-horizon zone of old black holes to the early radiation. We give simple estimates to support this viewpoint and show how the Page time and (somewhat more speculative) scrambling time do appear. (orig.)

  3. Smoothed particle hydrodynamics and magnetohydrodynamics

    Science.gov (United States)

    Price, Daniel J.

    2012-02-01

    This paper presents an overview and introduction to smoothed particle hydrodynamics and magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several 'urban myths' regarding SPH, in particular the idea that one can simply increase the 'neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.
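The density estimate from which the paper derives the equations of motion can be sketched in one dimension with the standard cubic-spline kernel; particle masses, spacing, and the smoothing length below are illustrative:

```python
# 1-D SPH density summation: rho_i = sum_j m_j * W(|x_i - x_j|, h),
# using the cubic-spline kernel with 1-D normalization 2/(3h).

def w_cubic(r, h):
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(x, m, h):
    return [sum(mj * w_cubic(abs(xi - xj), h) for xj, mj in zip(x, m))
            for xi in x]

# Uniformly spaced particles of unit mass and unit spacing: the
# interior density should be close to m/dx = 1; edge particles see
# fewer neighbours and read low.
rho = sph_density([float(i) for i in range(20)], [1.0] * 20, h=1.2)
```

The 'neighbour number' discussed in the abstract is set here by the ratio h/dx: with h = 1.2 dx each particle sums over lags 0, ±1, ±2.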

  4. NDSPMHD Smoothed Particle Magnetohydrodynamics Code

    Science.gov (United States)

    Price, Daniel J.

    2011-01-01

    This paper presents an overview and introduction to Smoothed Particle Hydrodynamics and Magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several 'urban myths' regarding SPH, in particular the idea that one can simply increase the 'neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.

  5. A simple algorithm for sequentially incorporating gravity observations in seismic traveltime tomography

    Science.gov (United States)

    Parsons, T.; Blakely, R.J.; Brocher, T.M.

    2001-01-01

    The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
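The velocity-to-density step uses Gardner's rule, which can be sketched directly; the residual-scaled adjustment shown below is a schematic stand-in for the paper's update (which weights by velocity-depth gradient and ray coverage), and its form and parameters are assumptions for illustration:

```python
def gardner_density(vp):
    """Gardner's rule: bulk density (g/cm^3) from P-wave velocity (m/s)."""
    return 0.31 * vp ** 0.25

def adjust_velocities(vp_cells, gravity_residual, weights):
    """Schematic update step: nudge cell velocities against the sign
    and magnitude of the gravity residual. The per-cell weights are a
    hypothetical parameterization, not the paper's exact weighting."""
    return [v * (1.0 - w * gravity_residual)
            for v, w in zip(vp_cells, weights)]

rho = gardner_density(3000.0)   # ~2.29 g/cm^3 for 3 km/s
```

In the full algorithm the adjusted model is fed back into the next traveltime iteration until both data sets are fit.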

  6. Modelling of NW Himalayan Seismicity

    Science.gov (United States)

    Bansal, A. R.; Dimri, V. P.

    2014-12-01

    The northwest Himalaya is a seismically active region due to the collision of the Indian and Eurasian plates and has experienced many large earthquakes in the past. A systematic analysis of seismicity is useful for seismic hazard estimation of the region. We analyzed the seismicity of the northwestern Himalaya since 1980. The magnitude of completeness of the catalogue was estimated using different methods and found to be 3.0. A large difference in the magnitude of completeness is found between methods, and a reliable value is obtained after testing the distribution of magnitudes with time. The region is prone to large earthquakes, and many studies have shown that seismic activation or quiescence takes place before large earthquakes. We studied such behavior of seismicity based on the Epidemic Type Aftershock Sequence (ETAS) model and found that a stationary ETAS model is more suitable for modelling the seismicity of this region. The earthquake catalogue was de-clustered using a stochastic approach to study the behavior of background and triggered seismicity. The triggered seismicity is found to have shallower depths compared to the background events.
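The ETAS model referenced above writes the earthquake rate as a background term plus Omori-decaying aftershock contributions; a temporal-only sketch, with parameter values that are illustrative rather than fitted to the Himalayan catalogue:

```python
import math

def etas_intensity(t, events, mu=0.2, K=0.05, alpha=1.5,
                   c=0.01, p=1.1, m0=3.0):
    """Temporal ETAS conditional intensity: background rate mu plus
    magnitude-scaled, Omori-law-decaying terms from each past event.
    events: list of (t_i, m_i) with times in days; all parameter
    values here are illustrative."""
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += K * math.exp(alpha * (mi - m0)) * (t - ti + c) ** (-p)
    return rate

# Rate just after a magnitude-5 event greatly exceeds the background:
r = etas_intensity(10.01, [(10.0, 5.0)])
```

Stochastic declustering assigns each event a probability of being background (the mu term) versus triggered (the sum), which is how background and triggered seismicity are separated in the study.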

  7. Flat lens for seismic waves

    CERN Document Server

    Brule, Stephane; Guenneau, Sebastien

    2016-01-01

    A prerequisite for achieving seismic invisibility is to demonstrate the ability of civil engineers to control seismic waves with artificially structured soils. We carry out large-scale field tests with a structured soil made of a grid consisting of cylindrical and vertical holes in the ground and a low frequency artificial source (< 10 Hz). This allows the identification of a distribution of energy inside the grid, which can be interpreted as the consequence of an effective negative refraction index. Such a flat lens reminiscent of what Veselago and Pendry envisioned for light opens avenues in seismic metamaterials to counteract the most devastating components of seismic signals.

  8. Neural networks in seismic discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Dowla, F.U.

    1995-01-01

    Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.

  9. Smoothing a Piecewise-Smooth: An Example from Plankton Population Dynamics

    DEFF Research Database (Denmark)

    Piltz, Sofia Helena

    2016-01-01

    In this work we discuss a piecewise-smooth dynamical system inspired by plankton observations and constructed for one predator switching its diet between two different types of prey. We then discuss two smooth formulations of the piecewise-smooth model obtained by using a hyperbolic tangent...

  10. Thermal smoothing of rough surfaces in vacuo

    Science.gov (United States)

    Wahl, G.

    1986-01-01

    The derivation of equations governing the smoothing of rough surfaces, based on Mullins' (1957, 1960, and 1963) theories of thermal grooving and of capillarity-governed solid surface morphology is presented. As an example, the smoothing of a one-dimensional sine-shaped surface is discussed.
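In Mullins' linearized theory a sinusoidal profile decays exponentially, with a rate proportional to the fourth power of the wavenumber for capillarity-driven surface diffusion (second power for evaporation-condensation); a one-function sketch, where the lumped constant B is illustrative:

```python
import math

def sine_amplitude(a0, w, t, B=1e-3):
    """Amplitude of a sine-shaped surface profile after annealing
    time t under surface diffusion: a(t) = a0 * exp(-B * w**4 * t).
    B lumps Mullins' material constants and is illustrative here."""
    return a0 * math.exp(-B * w ** 4 * t)

# Halving the wavelength (doubling w) multiplies the decay rate by 16:
rate_ratio = (math.log(sine_amplitude(1.0, 2.0, 1.0)) /
              math.log(sine_amplitude(1.0, 1.0, 1.0)))
```

This strong wavelength dependence is why fine roughness smooths out much faster than long-wavelength waviness.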

  11. Smoothing the output from a DAC

    Science.gov (United States)

    Wagner, C.

    1980-01-01

    Circuit smooths stepped waveform from digital-to-analog converter without appreciable phase shift between stepped input signal and smoothed output signal and without any effect from stepping rate. The waveform produced is suitable for driving controls used in manufacturing processes, aerospace systems, and automobiles.

  12. Real-time topological image smoothing on shared memory parallel machines

    Science.gov (United States)

    Mahmoudi, Ramzi; Akil, Mohamed

    2011-03-01

    Smoothing filters are the method of choice for image preprocessing and pattern recognition. We present a new concurrent method for smoothing 2D objects in the binary case. The proposed method provides parallel computation while preserving the topology by using homotopic transformations. We introduce an adapted parallelization strategy called split, distribute and merge (SDM), which allows efficient parallelization of a large class of topological operators including, mainly, smoothing, skeletonization, and watershed algorithms. To achieve a good speedup, we paid particular attention to task scheduling: the work distributed during the smoothing process is done by a variable number of threads. Tests on a 2D binary image (512*512), using a shared-memory parallel machine (SMPM) with 8 CPU cores (2× Xeon E5405 running at 2 GHz), showed a speedup of 5.2; thus a throughput of 32 images per second is achieved.
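The split-distribute-merge idea can be sketched schematically; this Python toy replaces the paper's topology-preserving homotopic operators with a plain 3x3 mean filter and uses thread-pool scheduling only to show the splitting and ordered merging, so it is an illustration, not the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def smooth_row(img, r):
    """3x3 mean filter applied to row r. The filter only reads the
    input image, so rows can be processed concurrently."""
    rows, cols = len(img), len(img[0])
    out = []
    for c in range(cols):
        vals = [img[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))]
        out.append(sum(vals) / len(vals))
    return out

def sdm_smooth(img, workers=4):
    """Split rows across a thread pool, distribute the work, and
    merge the results back in row order (ex.map preserves order)."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda r: smooth_row(img, r), range(len(img))))

flat = sdm_smooth([[5.0] * 8 for _ in range(8)])
```

Note that CPython threads share one interpreter lock, so real speedups like the reported 5.2x require native-code workers; the structure, not the performance, is the point here.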

  13. Convective Heat-Transfer Characteristics of Laminar Flow Through Smooth- and Rough-Wall Microchannels

    Science.gov (United States)

    Natrajan, V. K.; Christensen, K. T.

    2009-11-01

    The convective heat-transfer behavior of laminar flow through smooth- and rough-wall microchannels is investigated by performing non-intrusive measurements of fluid temperature using a microscale adaptation of two-color laser-induced fluorescent thermometry for flow through a heated copper microchannel testbed of hydraulic diameter Dh = 600 μm. These measurements, in concert with pressure-drop measurements, are performed for a smooth-wall case and two different rough-wall cases with roughness that is reminiscent of the surface irregularities one might encounter due to imperfect fabrication methods. Pressure-drop measurements reveal the onset of transition above Recr = 1800 for the smooth-wall case and deviation from laminar behavior at progressively lower Re with increasing surface roughness. The local Nusselt number (Nu) for smooth-wall flow over the range 200…

  14. The influence of backfill on seismicity

    CSIR Research Space (South Africa)

    Hemp, DA

    1990-09-01

    Full Text Available …that the seismicity has been reduced in areas where backfill had been placed. A factor complicating the evaluation of the effect of backfill on seismicity is the effect of geological structures on seismicity…

  15. Smoothed dynamics in the central field problem

    CERN Document Server

    Santoprete, Manuele

    2009-01-01

    Consider the motion of a material point of unit mass in a central field determined by a homogeneous potential of the form $-1/r^{\alpha}$, $\alpha>0$, where $r$ is the distance to the centre of the field. Due to the singularity at $r=0$, in computer-based simulations the potential is usually replaced by a similar potential that is smooth, or at least continuous. In this paper, we compare the global flows given by the smoothed and non-smoothed potentials. It is shown that the two flows are topologically equivalent for $\alpha < 2$, while for $\alpha \geq 2$, smoothing introduces fake orbits. Further, we argue that for $\alpha \geq 2$, smoothing should be applied to the amended potential $c/(2r^2) - 1/r^{\alpha}$, where $c$ denotes the angular momentum constant.
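The potentials involved can be written down directly; the softening form below (replacing r by sqrt(r^2 + eps^2)) is one standard smoothing choice used for illustration, not necessarily the paper's specific regularization:

```python
def raw_potential(r, alpha):
    """Homogeneous central potential -1/r**alpha, singular at r = 0."""
    return -1.0 / r ** alpha

def softened_potential(r, alpha, eps=1e-2):
    """One common smooth replacement: finite at r = 0 and close to
    the raw potential for r >> eps (an illustrative choice)."""
    return -1.0 / (r * r + eps * eps) ** (alpha / 2.0)

def amended_potential(r, alpha, c):
    """c/(2 r^2) - 1/r**alpha, with c the angular momentum constant;
    the abstract argues smoothing should be applied here for alpha >= 2."""
    return c / (2.0 * r * r) - 1.0 / r ** alpha
```

For alpha >= 2 the centrifugal term c/(2 r^2) no longer dominates the attractive term near r = 0, which is why smoothing the raw potential alone can manufacture fake orbits.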

  16. Cursive writing with smooth pursuit eye movements.

    Science.gov (United States)

    Lorenceau, Jean

    2012-08-21

    The eyes never cease to move: ballistic saccades quickly turn the gaze toward peripheral targets, whereas smooth pursuit maintains moving targets on the fovea where visual acuity is best. Despite the oculomotor system being endowed with exquisite motor abilities, any attempt to generate smooth eye movements against a static background results in saccadic eye movements. Although exceptions to this rule have been reported, volitional control over smooth eye movements is at best rudimentary. Here, I introduce a novel, temporally modulated visual display, which, although static, sustains smooth eye movements in arbitrary directions. After brief training, participants gain volitional control over smooth pursuit eye movements and can generate digits, letters, words, or drawings at will. For persons deprived of limb movement, this offers a fast, creative, and personal means of linguistic and emotional expression. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Improved Edge Awareness in Discontinuity Preserving Smoothing

    CERN Document Server

    Heinrich, Stuart B

    2011-01-01

    Discontinuity preserving smoothing is a fundamentally important procedure that is useful in a wide variety of image processing contexts. It is directly useful for noise reduction, and frequently used as an intermediate step in higher level algorithms. For example, it can be particularly useful in edge detection and segmentation. Three well known algorithms for discontinuity preserving smoothing are nonlinear anisotropic diffusion, bilateral filtering, and mean shift filtering. Although slight differences make them each better suited to different tasks, all are designed to preserve discontinuities while smoothing. However, none of them satisfy this goal perfectly: they each have exception cases in which smoothing may occur across hard edges. The principal contribution of this paper is the identification of a property we call edge awareness that should be satisfied by any discontinuity preserving smoothing algorithm. This constraint can be incorporated into existing algorithms to improve quality, and usually ha...

  18. Intelligent Video Traffic Smooth Mechanism for Multimedia Communication

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A principal challenge in supporting real-time video services over ATM is the need to provide synchronous play-out in the face of stochastic end-to-end network delays. In this paper, an intelligent traffic smooth mechanism (ITSM) is proposed to meet the continuity requirement; it is composed of a back-propagation neural network (BPNN) traffic predictor, a play-out buffer, and a fuzzy neural network (FNN) based play-out rate determinator. The BPNN traffic predictor predicts online the mean packet rate of the traffic in the future interval (FI), and the FNN is designed to adaptively determine the play-out time according to the number of packets in the buffer and the predicted traffic character. Simulation results show that, compared to the window mechanism, ITSM achieves high continuity with acceptable delay. Furthermore, ITSM can be adaptively modified to meet the QoS of different kinds of services through FNN parameter training.

  19. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    Science.gov (United States)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
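The envelope-based weighting can be sketched in its simplest form; the 1/(1 + s·|∇env|) expression and the 1-D setting below are assumptions for illustration, not the paper's exact operator:

```python
def smoothness_weights(envelope, s=5.0):
    """Down-weight the smoothness constraint between adjacent model
    cells where the reflection-seismic envelope attribute changes
    rapidly, so that resistivity contrasts may coincide with
    reflectors. 's' plays the role of the global stabilization
    parameter chosen by trial and error; the functional form is a
    hypothetical stand-in."""
    grads = [envelope[i + 1] - envelope[i] for i in range(len(envelope) - 1)]
    return [1.0 / (1.0 + s * abs(g)) for g in grads]

# Uniform envelope: full smoothing everywhere (weights of 1).
w = smoothness_weights([0.3, 0.3, 0.3, 0.3])
```

In the 2-D scheme the horizontal and vertical gradients of the envelope are measured separately, giving direction-dependent weights for the two sets of constraints.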

  20. SMACK - SMOOTHING FOR AIRCRAFT KINEMATICS

    Science.gov (United States)

    Bach, R.

    1994-01-01

    The computer program SMACK (SMoothing for AirCraft Kinematics) is designed to provide flightpath reconstruction of aircraft forces and motions from measurements that are noisy or incomplete. Additionally, SMACK provides a check on instrument accuracy and data consistency. The program can be used to analyze data from flight-test experiments prior to their use in performance, stability and control, or aerodynamic modeling calculations. It can also be used in the analysis of aircraft accidents, where the actual forces and motions may have to be determined from a very limited data set. Application of a state-estimation method for flightpath reconstruction is possible because aircraft forces and motions are related by well-known equations of motion. The task of postflight state estimation is known as a nonlinear, fixed-interval smoothing problem. SMACK utilizes a backward-filter, forward-smoother algorithm to solve the problem. The equations of motion are used to produce estimates that are compared with their corresponding measurement time histories. The procedure is iterative, providing improved state estimates until a minimum squared-error measure is achieved. In the SMACK program, the state and measurement models together represent a finite-difference approximation for the six-degree-of-freedom dynamics of a rigid body. The models are used to generate time histories which are likely to be found in a flight-test measurement set. These include onboard variables such as Euler angles, angular rates, and linear accelerations as well as tracking variables such as slant range, bearing, and elevation. Any bias or scale-factor errors associated with the state or measurement models are appended to the state vector and treated as constant but unknown parameters. The SMACK documentation covers the derivation of the solution algorithm, describes the state and measurement models, and presents several application examples that should help the analyst recognize the potential
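The fixed-interval smoothing problem SMACK solves can be illustrated with a scalar random-walk state. SMACK's backward-filter/forward-smoother is the mirror image of the classic forward-filter/backward-smoother (Rauch-Tung-Striebel) pass sketched here, which is not SMACK's code but shows the same two-pass structure:

```python
def kalman_rts(z, q, r, x0, p0):
    """Scalar fixed-interval smoother for a random-walk state:
    forward Kalman filter followed by an RTS backward pass.
    z: measurements; q, r: process / measurement noise variances."""
    n = len(z)
    xf, pf, xp, pp = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    x, p = x0, p0
    for k in range(n):
        xp[k], pp[k] = x, p + q            # predict
        g = pp[k] / (pp[k] + r)            # Kalman gain
        x = xp[k] + g * (z[k] - xp[k])     # measurement update
        p = (1.0 - g) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf[:]                             # backward smoothing pass
    for k in range(n - 2, -1, -1):
        a = pf[k] / pp[k + 1]
        xs[k] = xf[k] + a * (xs[k + 1] - xp[k + 1])
    return xs

# Noisy measurements of a constant state near 5.0:
xs = kalman_rts([5.5, 4.5] * 10, q=0.01, r=1.0, x0=5.5, p0=1.0)
```

SMACK's state and measurement models are of course the nonlinear six-degree-of-freedom equations of motion rather than a scalar random walk, and the squared-error minimization is iterated.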

  1. Denoising functional MR images : A comparison of wavelet denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, Alle Meije; Roerdink, Jos B.T.M.

    2004-01-01

    We present a general wavelet-based denoising scheme for functional magnetic resonance imaging (fMRI) data and compare it to Gaussian smoothing, the traditional denoising method used in fMRI analysis. One-dimensional WaveLab thresholding routines were adapted to two-dimensional images, and applied to
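The thresholding idea behind wavelet denoising can be sketched with a one-level Haar transform in 1-D; this toy version is ours (the paper adapts WaveLab's 1-D routines to 2-D images), and the binomial filter is a cheap stand-in for Gaussian smoothing:

```python
def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the detail
    coefficients, then invert. Assumes len(x) is even."""
    s2 = 2.0 ** 0.5
    approx = [(x[i] + x[i + 1]) / s2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s2 for i in range(0, len(x), 2)]
    soft = [max(abs(d) - thresh, 0.0) * (1.0 if d >= 0 else -1.0)
            for d in detail]
    out = []
    for a, d in zip(approx, soft):
        out += [(a + d) / s2, (a - d) / s2]
    return out

def binomial_smooth(x):
    """Stand-in for Gaussian smoothing: a [1, 2, 1]/4 kernel with
    edge samples passed through unchanged."""
    mid = [(x[i - 1] + 2.0 * x[i] + x[i + 1]) / 4.0
           for i in range(1, len(x) - 1)]
    return [x[0]] + mid + [x[-1]]
```

The qualitative contrast the paper studies is visible even here: thresholding leaves large (signal-like) differences intact and removes only small ones, whereas the smoothing kernel attenuates all differences uniformly.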

  2. A mechanism for arteriolar remodeling based on maintenance of smooth muscle cell activation

    DEFF Research Database (Denmark)

    Jacobsen, Jens Christian Brings; Mulvany, Michael John; Holstein-Rathlou, N.-H.

    2008-01-01

    Structural adaptation in arterioles is part of normal vascular physiology but is also seen in disease states such as hypertension. Smooth muscle cell (SMC) activation has been shown to be central to microvascular remodeling. We hypothesize that, in a remodeling process driven by SMC activation...

  3. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
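The explicit Gaussian posterior at the core of the method can be illustrated in the scalar linear case d = g·m + e; this one-parameter sketch mirrors the structure of the multivariate expressions (it is not the paper's multivariate implementation):

```python
def gaussian_posterior(prior_mean, prior_var, g, d, noise_var):
    """Conjugate update for a scalar linear-Gaussian model
    d = g * m + e, e ~ N(0, noise_var), m ~ N(prior_mean, prior_var):
    returns the explicit posterior mean and variance, from which
    exact prediction intervals follow."""
    post_var = 1.0 / (1.0 / prior_var + g * g / noise_var)
    post_mean = post_var * (prior_mean / prior_var + g * d / noise_var)
    return post_mean, post_var

# Moderately noisy observation pulls the posterior toward d/g:
m_post, v_post = gaussian_posterior(0.0, 1.0, 2.0, 4.0, 0.25)
```

As the abstract notes, parameters to which the data are insensitive (small effective g, e.g. density) simply retain something close to their prior, which is why "the inversion provided practically no information about the density."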

  4. Seismic failure modes and seismic safety of Hardfill dam

    Institute of Scientific and Technical Information of China (English)

    Kun XIONG; Yong-hong WENG; Yun-long HE

    2013-01-01

    Based on microscopic damage theory and the finite element method, and using the Weibull distribution to characterize the random distribution of the mechanical properties of materials, the seismic response of a typical Hardfill dam was analyzed through numerical simulation during earthquakes with intensities of 8 degrees and greater. The seismic failure modes and failure mechanism of the dam were explored as well. Numerical results show that the Hardfill dam remains at a low stress level and undamaged or only slightly damaged during an earthquake with an intensity of 8 degrees. During overload earthquakes, tensile cracks occur at the dam surfaces and extend into the dam body, and the upstream dam body experiences more serious damage than the downstream dam body. Therefore, under seismic conditions, the failure pattern of the Hardfill dam is tensile fracture of the upstream regions and the dam toe. Compared with traditional gravity dams, Hardfill dams have better seismic performance and greater seismic safety.
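The Weibull characterization of material randomness amounts to assigning each finite element an independently drawn strength; a minimal sketch, with shape and scale values that are illustrative rather than taken from the study:

```python
import random

def weibull_strengths(n_elements, shape=6.0, scale=2.5, seed=1):
    """Draw a spatially random tensile strength (units illustrative)
    for each finite element from a Weibull distribution. A larger
    shape parameter means a more homogeneous material. Note the
    stdlib argument order: weibullvariate(alpha=scale, beta=shape)."""
    rng = random.Random(seed)
    return [rng.weibullvariate(scale, shape) for _ in range(n_elements)]

strengths = weibull_strengths(100)
```

In the damage simulation an element whose computed tensile stress exceeds its drawn strength is flagged as damaged, which is how the heterogeneous crack patterns emerge.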

  5. Assessing the Seismic Potential Hazard of the Makran Subduction Zone

    Science.gov (United States)

    Frohling, E.; Szeliga, W. M.; Melbourne, T. I.; Abolghasem, A.; Lodi, S. H.

    2013-12-01

    Long-quiescent subduction zones like the Makran, Sunda, and Cascadia, which have long recurrence intervals for large (> Mw 8) earthquakes, often have poorly known seismic histories and are particularly vulnerable and often ill-prepared. The Makran subduction zone has not been studied extensively, but the 1945 Mw 8.1 earthquake and subsequent tsunami, as well as more recent mid-magnitude, intermediate-depth (50-100 km) seismicity, demonstrate the active seismic nature of the region. Recent increases in regional GPS and seismic monitoring now permit the modeling of strain accumulation and seismic potential of the Makran subduction zone. Subduction zone seismicity indicates that the eastern half of the Makran is presently more active than the western half. It has been hypothesized that the relative quiescence of the western half is due to aseismic behavior. However, based on GPS evidence, the entire subduction zone generally appears to be coupled and has been accumulating stress that could be released in another > Mw 8.0 earthquake. To assess the degree of coupling, we utilize existing GPS data to create a fault coupling model for the Makran using a preliminary 2-D fault geometry derived from ISC hypocenters. Our 2-D modeling is done using the backslip approach and defines the parameters in our coupling model; we forgo the generation of a 3-D model due to the low spatial density of available GPS data. We compare the use of both NUVEL-1A plate motions and modern Arabian plate motions derived from GPS station velocities in Oman to drive subduction in our fault coupling model. To avoid non-physical inversion results, we impose second-order smoothing to eliminate steep strain gradients. The fit of the modeled inter-seismic deformation vectors is assessed against the observed strain from the GPS data. Initial observations indicate that the entire subduction zone is currently locked and accumulating strain, with no identifiable gaps in the interseismic locking
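The second-order smoothing used to suppress steep strain gradients is the standard second-difference roughness penalty added to the coupling inversion; a sketch of the operator and the penalty it evaluates (dimensions and the discretization are illustrative):

```python
def second_difference(n):
    """Second-order smoothing operator L for n model cells: each row
    applies the [1, -2, 1] stencil, so penalizing ||L m||^2 in the
    inversion objective suppresses steep along-fault gradients in
    the coupling model m (schematic discretization)."""
    L = []
    for i in range(1, n - 1):
        row = [0.0] * n
        row[i - 1], row[i], row[i + 1] = 1.0, -2.0, 1.0
        L.append(row)
    return L

def roughness(L, m):
    """The penalty ||L m||^2 added (times a weight) to the data misfit."""
    return sum(sum(L[i][j] * m[j] for j in range(len(m))) ** 2
               for i in range(len(L)))
```

A linearly varying coupling distribution incurs no penalty; an isolated spike does, which is exactly the non-physical feature the regularization is meant to remove.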

  6. Elastic-Wavefield Seismic Stratigraphy: A New Seismic Imaging Technology

    Energy Technology Data Exchange (ETDEWEB)

    Bob A. Hardage; Milo M. Backus; Michael V. DeAngelo; Sergey Fomel; Khaled Fouad; Robert J. Graebner; Paul E. Murray; Randy Remington; Diana Sava

    2006-07-31

    The purpose of our research has been to develop and demonstrate a seismic technology that will provide the oil and gas industry a better methodology for understanding reservoir and seal architectures and for improving interpretations of hydrocarbon systems. Our research goal was to expand the valuable science of seismic stratigraphy beyond the constraints of compressional (P-P) seismic data by using all modes (P-P, P-SV, SH-SH, SV-SV, SV-P) of a seismic elastic wavefield to define depositional sequences and facies. Our objective was to demonstrate that one or more modes of an elastic wavefield may image stratal surfaces across some stratigraphic intervals that are not seen by companion wave modes and thus provide different, but equally valid, information regarding depositional sequences and sedimentary facies within that interval. We use the term elastic wavefield stratigraphy to describe the methodology we use to integrate seismic sequences and seismic facies from all modes of an elastic wavefield into a seismic interpretation. We interpreted both onshore and marine multicomponent seismic surveys to select the data examples that we use to document the principles of elastic wavefield stratigraphy. We have also used examples from published papers that illustrate some concepts better than did the multicomponent seismic data that were available for our analysis. In each interpretation study, we used rock physics modeling to explain how and why certain geological conditions caused differences in P and S reflectivities that resulted in P-wave seismic sequences and facies being different from depth-equivalent S-wave sequences and facies across the targets we studied.

  7. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of the parameter α of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.
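The set-estimator viewpoint can be illustrated with a deliberately simplified sketch: a piecewise-constant (undecorated) dyadic tree that recursively bisects the unit cube and labels each cell by majority vote. All names are illustrative; the paper's decorated trees and adaptive refinement criteria are more sophisticated than this baseline.

```python
import numpy as np

def fit_dyadic_tree(X, y, lo, hi, depth, max_depth):
    """Recursively bisect the cell [lo, hi] along its longest axis and
    label each leaf by majority vote (a piecewise-constant set estimator)."""
    if depth == max_depth or len(y) <= 1 or len(set(y)) == 1:
        label = int(round(y.mean())) if len(y) else 0
        return ("leaf", label)
    axis = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[axis] + hi[axis])
    left = X[:, axis] <= mid
    lo_r, hi_l = lo.copy(), hi.copy()
    hi_l[axis] = mid
    lo_r[axis] = mid
    return ("split", axis, mid,
            fit_dyadic_tree(X[left], y[left], lo, hi_l, depth + 1, max_depth),
            fit_dyadic_tree(X[~left], y[~left], lo_r, hi, depth + 1, max_depth))

def predict_one(tree, x):
    # Route the point down the tree to its leaf label.
    while tree[0] == "split":
        _, axis, mid, lt, rt = tree
        tree = lt if x[axis] <= mid else rt
    return tree[1]

rng = np.random.default_rng(0)
X = rng.random((500, 2))
y = (X[:, 0] > 0.5).astype(int)          # Bayes set is the half-square x0 > 0.5
tree = fit_dyadic_tree(X, y, np.zeros(2), np.ones(2), 0, max_depth=6)
acc = np.mean([predict_one(tree, xi) == ti for xi, ti in zip(X, y)])
```

Because the Bayes boundary here coincides with the first dyadic split, the partition recovers the Bayes set exactly; the decorated trees of the paper are designed for boundaries that dyadic cells can only approximate.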

  8. Seismic risk perception test

    Science.gov (United States)

    Crescimbene, Massimo; La Longa, Federica; Camassi, Romano; Pino, Nicola Alessandro

    2013-04-01

    The perception of risk involves the process of collecting, selecting and interpreting signals about uncertain impacts of events, activities or technologies. In the natural sciences the term risk seems to be clearly defined: it means the probability distribution of adverse effects, but the everyday use of risk has different connotations (Renn, 2008). The two terms, hazard and risk, are often used interchangeably by the public. Knowledge, experience, values, attitudes and feelings all influence the thinking and judgement of people about the seriousness and acceptability of risks. Within the social sciences, however, the terminology of 'risk perception' has become the conventional standard (Slovic, 1987). The mental models and other psychological mechanisms which people use to judge risks (such as cognitive heuristics and risk images) are internalized through social and cultural learning and constantly moderated (reinforced, modified, amplified or attenuated) by media reports, peer influences and other communication processes (Morgan et al., 2001). Yet, a theory of risk perception that offers an integrative, as well as empirically valid, approach to understanding and explaining risk perception is still missing. To understand the perception of risk it is necessary to consider several areas: social, psychological, cultural, and their interactions. Among the various international research efforts on the perception of natural hazards, the semantic differential method (Osgood, C.E., Suci, G., & Tannenbaum, P., 1957, The measurement of meaning. Urbana, IL: University of Illinois Press) appeared promising. The test on seismic risk perception was therefore constructed with the semantic differential method, using a seven-point Likert scale to compare opposite adjectives or terms. 
The test consists of an informative part and six sections respectively dedicated to: hazard; vulnerability (home and workplace); exposed value (with reference to

  9. Passive seismic experiment.

    Science.gov (United States)

    Latham, G V; Ewing, M; Press, F; Sutton, G; Dorman, J; Nakamura, Y; Toksöz, N; Wiggins, R; Derr, J; Duennebier, F

    1970-01-30

    Seismometer operation for 21 days at Tranquillity Base revealed, among strong signals produced by the Apollo 11 lunar module descent stage, a small proportion of probable natural seismic signals. The latter are long-duration, emergent oscillations which lack the discrete phases and coherence of earthquake signals. From similarity with the impact signal of the Apollo 12 ascent stage, they are thought to be produced by meteoroid impacts or shallow moonquakes. This signal character may imply transmission with high Q and intense wave scattering, conditions which are mutually exclusive on earth. Natural background noise is very much smaller than on earth, and lunar tectonism may be very low.

  10. Seismic risk assessment of Navarre (Northern Spain)

    Science.gov (United States)

    Gaspar-Escribano, J. M.; Rivas-Medina, A.; García Rodríguez, M. J.; Benito, B.; Tsige, M.; Martínez-Díaz, J. J.; Murphy, P.

    2009-04-01

    The RISNA project, financed by the Emergency Agency of Navarre (Northern Spain), aims at assessing the seismic risk of the entire region. The final goal of the project is the definition of emergency plans for future earthquakes. With this purpose, four main topics are covered: seismic hazard characterization, geotechnical classification, vulnerability assessment and damage estimation to structures and exposed population. A geographic information system is used to integrate, analyze and represent all information collected in the different phases of the study. Expected ground motions on rock conditions with a 90% probability of non-exceedance in an exposure time of 50 years are determined following a Probabilistic Seismic Hazard Assessment (PSHA) methodology that includes a logic tree with different ground motion and source zoning models. As the region under study is located on the boundary between Spain and France, an effort is required to collect and homogenise seismological data from different national and regional agencies. A new homogenised seismic catalogue, merging data from Spanish, French, Catalonian and international agencies and establishing correlations between different magnitude scales, is developed. In addition, a new seismic zoning model focused on the study area is proposed. Results show that the highest ground motions on rock conditions are expected in the northeastern part of the region, decreasing southwards. Seismic hazard can be described as low-to-moderate. A geotechnical classification of the entire region is developed based on surface geology, available borehole data and morphotectonic constraints. Frequency-dependent amplification factors, consistent with code values, are proposed. The northern and southern parts of the region are characterized by stiff and soft soils respectively, with the softest soils located along river valleys. 
Seismic hazard maps including soil effects are obtained by applying these factors to the seismic hazard maps

  11. Approximation of Bivariate Functions via Smooth Extensions

    Science.gov (United States)

    Zhang, Zhihua

    2014-01-01

    For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we extend the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series, or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that many wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316
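The claim that smoother extensions yield faster coefficient decay can be checked in a minimal 1-D analogue. The paper's construction is bivariate and uses higher-degree polynomial corrections; this sketch only subtracts a linear polynomial so that the periodic extension of exp(x) on [0, 1) becomes continuous at the seam, which already improves the decay by an order in the frequency index.

```python
import numpy as np

N = 1024
x = np.arange(N) / N
f = np.exp(x)                        # smooth on [0, 1) but not periodic
# Abrupt periodization has a jump of (e - 1) at the seam,
# so Fourier coefficients decay only like 1/k.
c_raw = np.abs(np.fft.rfft(f)) / N

# Subtract a degree-1 polynomial matching the endpoint mismatch: the
# periodic extension of g is continuous, and the coefficients now decay
# like 1/k^2 (matching derivatives as well would accelerate this further).
g = f - (np.e - 1.0) * x
c_smooth = np.abs(np.fft.rfft(g)) / N

ratio = c_raw[200] / c_smooth[200]   # smooth extension decays much faster
```

Higher-degree polynomial corrections, as in the paper, make the extension C^m-periodic and push the decay to 1/k^(m+2).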

  12. Seismic Data Gathering and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-02-01

    Three earthquakes in the last seven years have exceeded their design basis earthquake values (implying that damage to SSCs should have occurred). These seismic events were recorded at North Anna (August 2011, detailed information provided in [Virginia Electric and Power Company Memo]), Fukushima Daiichi and Daini (March 2011 [TEPCO 1]), and Kashiwazaki-Kariwa (2007, [TEPCO 2]). However, seismic walkdowns at some of these plants indicate that very little damage occurred to safety class systems and components due to the seismic motion. This report presents seismic data gathered for two of the three events mentioned above and recommends a path for using that data for two purposes. One purpose is to determine what margins exist in current industry standard seismic soil-structure interaction (SSI) tools. The second purpose is to use the data to validate seismic site response tools and SSI tools. The gathered data represent free-field soil and in-structure acceleration time histories. Gathered data also include elastic and dynamic soil properties and structural drawings. Gathering data and comparing with existing models has the potential to identify areas of uncertainty that should be removed from current seismic analysis and SPRA approaches. Removing uncertainty (to the extent possible) from SPRAs will allow NPP owners to make decisions on where to reduce risk. Once a realistic understanding of seismic response is established for a nuclear power plant (NPP), decisions on needed protective measures, such as SI, can be made.

  13. Procedures for computing site seismicity

    Science.gov (United States)

    Ferritto, John

    1994-02-01

    This report was prepared as part of the Navy's Seismic Hazard Mitigation Program. The Navy has numerous bases located in seismically active regions throughout the world. Safe, effective design of waterfront structures requires determining expected earthquake ground motion. The Navy's problem is further complicated by the presence of soft, saturated marginal soils that can significantly amplify the levels of seismic shaking, as evidenced in the 1989 Loma Prieta earthquake. The Naval Facilities Engineering Command's seismic design manual, NAVFAC P-355.1, requires a probabilistic assessment of ground motion for design of essential structures. This report presents the basis for the Navy's Seismic Hazard Analysis procedure that was developed and is intended to be used with the Seismic Hazard Analysis computer program and user's manual. This report also presents data on geology and seismology to establish the background for the seismic hazard model developed. The procedure uses the historical epicenter database and available geologic data, together with source models, recurrence models, and attenuation relationships, to compute the probability distribution of site acceleration and an appropriate spectrum. This report discusses the developed stochastic model for seismic hazard evaluation and the associated research.
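The last step of such a procedure, converting an annual exceedance rate into a probability over a design exposure time, follows directly from the Poisson occurrence assumption. A minimal sketch (function names are illustrative, not from the Navy program):

```python
import math

def poisson_exceedance_prob(annual_rate, exposure_years):
    """P(at least one exceedance in T years) under a Poisson model."""
    return 1.0 - math.exp(-annual_rate * exposure_years)

def design_rate(prob, exposure_years):
    """Annual rate whose Poisson exceedance probability over T equals prob."""
    return -math.log(1.0 - prob) / exposure_years

# The common design criterion of 10% exceedance probability in 50 years
# corresponds to a mean return period of roughly 475 years.
rate = design_rate(0.10, 50.0)
return_period = 1.0 / rate
```

The hazard curve produced by the full procedure is just this relation evaluated across acceleration levels, with the rate supplied by the source, recurrence, and attenuation models.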

  14. Key aspects governing induced seismicity

    Science.gov (United States)

    Buijze, Loes; Wassing, Brecht; Fokker, Peter

    2013-04-01

    In the past decades numerous examples of earthquakes induced by human-induced changes in subsurface fluid pressures have been reported. This poses a major threat to the future development of some of these operations and calls for an understanding and quantification of the seismicity generated. From geomechanical considerations and insights from laboratory experiments, the factors controlling induced seismicity may be grouped into four categories: the magnitude of the stress disturbance, the pre-existing stress conditions, the reservoir/fault rock properties and the local geometry. We investigated whether the (relative) contributions of these factors and their influence on the magnitudes generated could be recognized by looking at the entire dataset of reported cases of induced seismicity as a whole, and what this might imply for future developments. An extensive database has been built from over 160 known cases of induced seismicity worldwide, incorporating the relevant geological, seismological and fluid-related parameters. The cases studied include hydrocarbon depletion and secondary recovery, waste water injection, (enhanced) geothermal systems and hydraulic fracturing, with observed magnitudes ranging from less than -1.5 to 7. The parameters taken into account were based on the theoretical background of the mechanisms of induced seismicity and include the injection/depletion-related parameters, (spatial) characteristics of seismicity, lithological properties and the local stress situation. Correlations between the seismic response and the geological/geomechanical characteristics of the various sites were investigated. The injected/depleted volumes and the scale of the activities are major controlling factors on the maximum magnitudes generated. Spatial signatures of seismicity such as the depth and lateral spread of the seismicity were observed to be distinct for different activities, which is useful when considering future operations. Where available the local

  15. Advances in Rotational Seismic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Pierson, Robert [Applied Technology Associates, Albuquerque, NM (United States); Laughlin, Darren [Applied Technology Associates, Albuquerque, NM (United States); Brune, Robert [Applied Technology Associates, Albuquerque, NM (United States)

    2016-10-19

    Rotational motion is increasingly understood to be a significant part of seismic wave motion. Rotations can be important in earthquake strong motion and in Induced Seismicity Monitoring. Rotational seismic data can also enable shear selectivity and improve wavefield sampling for vertical geophones in 3D surveys, among other applications. However, sensor technology has been a limiting factor to date. The US Department of Energy (DOE) and Applied Technology Associates (ATA) are funding a multi-year project that is now entering Phase 2 to develop and deploy a new generation of rotational sensors for validation of rotational seismic applications. Initial focus is on induced seismicity monitoring, particularly for Enhanced Geothermal Systems (EGS) with fracturing. The sensors employ Magnetohydrodynamic (MHD) principles with broadband response, improved noise floors, robustness, and repeatability. This paper presents a summary of Phase 1 results and Phase 2 status.

  16. Dual-support Smoothed Particle Hydrodynamics

    CERN Document Server

    Ren, Huilong; Zhuang, Xiaoying; Rabczuk, Timon

    2016-01-01

    In this paper we develop a dual-support smoothed particle hydrodynamics (DS-SPH) that naturally satisfies the conservation of momentum, angular momentum and energy when varying smoothing lengths are utilized. The DS-SPH is based on the concept of dual-support, which is introduced to account for the unbalanced interactions between particles with different smoothing lengths. Our DS-SPH formulation can be implemented in traditional SPH with few changes and improves computational efficiency. Several numerical examples are presented to demonstrate the capability of the method.

  17. Seismic moulin tremor

    Science.gov (United States)

    Roeoesli, Claudia; Walter, Fabian; Ampuero, Jean-Paul; Kissling, Edi

    2016-08-01

    Through glacial moulins, meltwater is routed from the glacier surface to its base. Moulins are a main feature feeding subglacial drainage systems and thus influencing basal motion and ice dynamics, but their geometry remains poorly known. Here we show that analysis of the seismic wavefield generated by water falling into a moulin can help constrain its geometry. We present modeling results of hour-long seismic tremors emitted from a vertical moulin shaft, observed with a seismometer array installed at the surface of the Greenland Ice Sheet. The tremor was triggered when the moulin water level exceeded a certain height, which we associate with the threshold for the waterfall to directly hit the surface of the moulin water column. The amplitude of the tremor signal changed over each tremor episode, in close relation to the amount of inflowing water. The tremor spectrum features multiple prominent peaks, whose characteristic frequencies are distributed like the resonant modes of a semi-open organ pipe and were found to depend on the moulin water level, consistent with a source composed of resonant tube waves (water pressure waves coupled to elastic deformation of the moulin walls) along the water-filled moulin pipe. Analysis of surface particle motions lends further support to this interpretation. The seismic wavefield was modeled as a superposition of sustained wave radiation by pressure sources on the side walls and at the bottom of the moulin. The former was found to dominate the wavefield at close distance and the latter at large distance to the moulin.

  18. An improved PSO algorithm and its application in seismic wavelet extraction

    Directory of Open Access Journals (Sweden)

    Yongshou Dai

    2011-08-01

    Full Text Available Seismic wavelet estimation is ultimately a multi-dimensional, multi-extremum, multi-parameter optimization problem. PSO has simple concepts and fast convergence, but it easily falls into local optima. This paper proposes an improved PSO with adaptive parameters and boundary constraints, ensuring both optimization accuracy and fast convergence. Simulation results show that the method has good applicability and stability for seismic wavelet extraction.
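A minimal PSO sketch showing the two ingredients the abstract names: an adaptive (here simply linearly decreasing) inertia weight and boundary constraints via position clamping. Parameter values and function names are illustrative, not those of the paper:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm minimizer with decaying inertia and
    clamping of positions to the search bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters              # inertia decays from 0.9 to 0.4
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # boundary constraint
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Sanity check on a convex surrogate; a real wavelet-extraction objective
# would measure misfit between the modeled and observed seismic traces.
sphere = lambda z: float(np.sum(z**2))
best, best_val = pso(sphere, (np.full(3, -5.0), np.full(3, 5.0)))
```

The paper's "adaptive parameters" adjust the coefficients from the swarm's state rather than by a fixed schedule; the schedule above is only the simplest instance of the idea.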

  19. Harmonized Probabilistic Seismic Hazard Assessment in Europe: Earthquake Geology Applied

    Science.gov (United States)

    Woessner, J.; Danciu, L.; Giardini, D.; Share Consortium

    2012-04-01

    Probabilistic seismic hazard assessment (PSHA) aims to characterize the best available knowledge on the seismic hazard of a study area, ideally taking into account all sources of uncertainty. Results from PSHAs form the baseline for informed decision-making and provide essential input to every risk assessment application. SHARE is an EC-FP7 funded project to create a testable, time-independent, community-based hazard model for the Euro-Mediterranean region. SHARE scientists are creating a model framework and infrastructure for a harmonized PSHA. The results will serve as reference for the Eurocode 8 application and are envisioned to provide homogeneous input for state-of-the-art seismic safety assessment for critical industry. Harmonizing hazard is pursued at the input data level and in the model building procedure across borders and tectonic features of the European-Mediterranean region. An updated earthquake catalog, a harmonized database of seismogenic sources, and adjusted ground motion prediction equations (GMPEs) form the basis for a borderless assessment. We require transparent and reproducible strategies to estimate parameter values and their uncertainties within the source model assessment and the contributions of the GMPEs. The SHARE model accounts for uncertainties via a logic tree. Epistemic uncertainties within the seismic source model are represented by four source model options, including area sources, fault sources and kernel-smoothing approaches, with aleatory uncertainties for activity rates and maximum magnitudes. Epistemic uncertainties for predicted ground motions are considered by multiple GMPEs as a function of tectonic setting and treated as being correlated. For practical implementation, epistemic uncertainties in the source model (i.e. dip and strike angles) are treated as aleatory, and a mean seismicity model is considered. The final results contain the full distribution of ground motion variability. 
This contribution will feature preliminary

  20. seismic-py: Reading seismic data with Python

    Directory of Open Access Journals (Sweden)

    2008-08-01

    Full Text Available The field of seismic exploration of the Earth has changed
    dramatically over the last half a century. The Society of Exploration
    Geophysicists (SEG) has worked to create standards to store the vast
    amounts of seismic data in a way that will be portable across computer
    architectures. However, it has been impossible to predict the needs of the
    immense range of seismic data acquisition systems. As a result, vendors have
    had to bend the rules to accommodate the needs of new instruments and
    experiment types. For low level access to seismic data, there is need for a
    standard open source library to allow access to a wide range of vendor data
    files that can handle all of the variations. A new seismic software package,
    seismic-py, provides an infrastructure for creating and managing drivers for
    each particular format. Drivers can be derived from one of the known formats
    and altered to handle any slight variations. Alternatively drivers can be
    developed from scratch for formats that are very different from any previously
    defined format. Python has been the key to making driver development easy
    and efficient to implement. The goal of seismic-py is to be the base system
    that will power a wide range of experimentation with seismic data and at the
    same time provide clear documentation for the historical record of seismic
    data formats.

  1. Smooth surfaces from rational bilinear patches

    KAUST Repository

    Shi, Ling

    2014-01-01

    Smooth freeform skins from simple panels constitute a challenging topic arising in contemporary architecture. We contribute to this problem area by showing how to approximate a negatively curved surface by smoothly joined rational bilinear patches. The approximation problem is solved with the help of a new computational approach to the hyperbolic nets of Huhnen-Venedey and Rörig and optimization algorithms based on it. We also discuss its limits, which lie in the topology of the input surface. Finally, freeform deformations based on Darboux transformations are used to generate smooth surfaces from smoothly joined Darboux cyclide patches; in this way we eliminate the restriction to surfaces with negative Gaussian curvature. © 2013 Elsevier B.V.

  2. Fractional Smoothness of Some Stochastic Integrals

    Institute of Scientific and Technical Information of China (English)

    Peng XIE; Xi Cheng ZHANG

    2007-01-01

    We study the fractional smoothness, in the sense of Malliavin calculus, of stochastic integrals of the form ∫_0^1 φ(X_s) dX_s, where X_s is a semimartingale and φ belongs to some fractional Sobolev space over R.

  3. Spectral sequences in smooth generalized cohomology

    CERN Document Server

    Grady, Daniel

    2016-01-01

    We consider spectral sequences in smooth generalized cohomology theories, including differential generalized cohomology theories. The main differential spectral sequences will be of the Atiyah-Hirzebruch (AHSS) type, where we provide a filtration by the Cech resolution of smooth manifolds. This allows for systematic study of torsion in differential cohomology. We apply this in detail to smooth Deligne cohomology, differential topological complex K-theory, and to a smooth extension of integral Morava K-theory that we introduce. In each case we explicitly identify the differentials in the corresponding spectral sequences, which exhibit an interesting and systematic interplay between (refinement of) classical cohomology operations, operations involving differential forms, and operations on cohomology with U(1) coefficients.

  4. Integrated Groups and Smooth Distribution Groups

    Institute of Scientific and Technical Information of China (English)

    Pedro J. MIANA

    2007-01-01

    In this paper, we prove directly that α-times integrated groups define algebra homomorphisms. We also give a theorem of equivalence between smooth distribution groups and α-times integrated groups.

  5. Cardiac, Skeletal, and smooth muscle mitochondrial respiration

    DEFF Research Database (Denmark)

    Park, Song-Young; Gifford, Jayson R; Andtbacka, Robert H I

    2014-01-01

    Unlike cardiac and skeletal muscle, little is known about vascular smooth muscle mitochondrial function. Therefore, this study examined mitochondrial respiratory rates in the smooth muscle of healthy human feed arteries and compared them with those of healthy cardiac and skeletal muscle. Cardiac, skeletal, and smooth muscle was harvested from a total of 22 subjects (53±6 yrs) and mitochondrial respiration assessed in permeabilized fibers. Complex I+II, state 3 respiration, an index of oxidative phosphorylation capacity, fell progressively from cardiac, skeletal, to smooth muscle (54±1; 39±4; 15±1 pmol•s(-1)•mg(-1), p<0.05), as did citrate synthase activity (222±13; 115±2; 48±2 umol•g(-1)•min(-1), p<0.05)

  6. Valiente Kroon's obstructions to smoothness at infinity

    Science.gov (United States)

    Grant, James; Tod, Paul

    2015-03-01

    We conjecture an interpretation in terms of multipole moments of the obstructions to smoothness at infinity found for time-symmetric, conformally-flat initial data by Kroon (Commun Math Phys 244(1):133-156, 2004).

  7. Interval Estimation of Seismic Hazard Parameters

    Science.gov (United States)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2016-11-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Accordingly, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates, which are parameters of a multiparameter function, onto this function.
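The asymptotic-normality route for the Gutenberg-Richter case can be sketched with the classical Aki (1965) maximum-likelihood b-value and its normal-approximation confidence interval; this is only the simplest instance of the interval estimation the paper generalizes, and all names here are illustrative:

```python
import numpy as np

def b_value_ci(mags, m_c, conf_z=1.96):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= m_c, with
    a confidence interval from the asymptotic normality of the MLE."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - m_c)
    se = b / np.sqrt(m.size)               # Aki's asymptotic standard error
    return b, (b - conf_z * se, b + conf_z * se)

rng = np.random.default_rng(42)
# Synthetic Gutenberg-Richter catalog: magnitude excesses above Mc = 2.0
# are exponential with scale log10(e)/b for a true b-value of 1.0.
b_true, m_c = 1.0, 2.0
mags = m_c + rng.exponential(np.log10(np.e) / b_true, size=5000)
b_hat, (lo, hi) = b_value_ci(mags, m_c)
```

The paper's point is that hazard functions depend on both this magnitude-distribution estimate and the mean activity rate, so their uncertainties must be propagated jointly rather than taking the b-value interval alone.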

  8. Mitigation of earthquake hazards using seismic base isolation systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.Y.

    1994-06-01

    This paper deals with mitigation of earthquake hazards using seismic base-isolation systems. A numerical algorithm is described for system response analysis of isolated structures with laminated elastomer bearings. The focus of this paper is on the adaptation of a nonlinear constitutive equation for the isolation bearing, and the treatment of foundation embedment for the soil-structure-interaction analysis. Sample problems are presented to illustrate the mitigating effect of using base-isolation systems.

  9. An Owner's Guide to Smoothed Particle Hydrodynamics

    OpenAIRE

    Martin, T.J.; Pearce, F. R.; Thomas, P. A.

    1993-01-01

    We present a practical guide to Smoothed Particle Hydrodynamics (SPH) and its application to astrophysical problems. Although remarkably robust, SPH must be used with care if the results are to be meaningful, since the accuracy of SPH is sensitive to the arrangement of the particles and the form of the smoothing kernel. In particular, the initial conditions for any SPH simulation must consist of particles in dynamic equilibrium. We describe some of the numerical difficulties that may be...

  10. Rubber friction on (apparently) smooth lubricated surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Mofidi, M; Prakash, B [Division of Machine Elements, Luleaa University of Technology, Luleaa SE-97187 (Sweden); Persson, B N J [IFF, FZ-Juelich, 52425 Juelich (Germany); Albohr, O [Pirelli Deutschland AG, 64733 Hoechst/Odenwald, Postfach 1120 (Germany)

    2008-02-27

    We study rubber sliding friction on hard lubricated surfaces. We show that even if the hard surface appears smooth to the naked eye, it may exhibit short-wavelength roughness, which may make the dominant contribution to rubber friction. That is, the observed sliding friction is mainly due to the viscoelastic deformations of the rubber by the counterface surface asperities. The results presented are of great importance for rubber sealing and other rubber applications involving (apparently) smooth surfaces.

  11. Robust chaos in smooth unimodal maps

    Science.gov (United States)

    Andrecut, M.; Ali, M. K.

    2001-08-01

    Robust chaos is defined by the absence of periodic windows and coexisting attractors in some neighborhood of the parameter space. It has been conjectured that robust chaos cannot occur in smooth systems [E. Barreto, B. Hunt, and C. Grebogi, Phys. Rev. Lett. 78, 4561 (1997); 80, 3049 (1998)]. Contrary to this conjecture, we describe a general procedure for generating robust chaos in smooth unimodal maps.

  12. Observability and Controllability for Smooth Nonlinear Systems

    OpenAIRE

    Schaft, A.J. van der

    1982-01-01

    The definition of a smooth nonlinear system as proposed recently, is elaborated as a natural generalization of the more common definitions of a smooth nonlinear input-output system. Minimality for such systems can be defined in a very direct geometric way, and already implies a usual notion of observability, namely, local weak observability. As an application of this theory, it is shown that observable nonlinear Hamiltonian systems are necessarily controllable, and vice versa.

  13. Rubber friction on (apparently) smooth lubricated surfaces

    Science.gov (United States)

    Mofidi, M.; Prakash, B.; Persson, B. N. J.; Albohr, O.

    2008-02-01

    We study rubber sliding friction on hard lubricated surfaces. We show that even if the hard surface appears smooth to the naked eye, it may exhibit short-wavelength roughness, which may make the dominant contribution to rubber friction. That is, the observed sliding friction is mainly due to the viscoelastic deformations of the rubber by the counterface surface asperities. The results presented are of great importance for rubber sealing and other rubber applications involving (apparently) smooth surfaces.

  14. Doing smooth pursuit paradigms in Windows 7

    DEFF Research Database (Denmark)

    Wilms, Inge Linda

    Smooth pursuit eye movements are interesting to study as they reflect the subject’s ability to predict movement of external targets, keep focus and move the eyes appropriately. The process of smooth pursuit requires collaboration between several systems in the brain and the resulting action may p...... in Windows 7 with live capturing of eye movements using a Tobii TX300 eye tracker. In particular, the poster describes the challenges and limitations created by the hardware and the software...

  15. Efficient Smoothing for Boundary Value Models

    Science.gov (United States)

    1989-12-29

    IEEE Transactions on Automatic Control , vol. 29, pp. 803-821, 1984. [2] A. Bagchi and H. Westdijk, "Smoothing...and likelihood ratio for Gaussian boundary value processes," IEEE Transactions on Automatic Control , vol. 34, pp. 954-962, 1989. [3] R. Nikoukhah et...77-96, 1988. [6] H. L. Weinert and U. B. Desai, "On complementary models and fixed- interval smoothing," IEEE Transactions on Automatic Control ,

  16. Beam-smoothing investigation on Heaven I

    Science.gov (United States)

    Xiang, Yi-huai; Gao, Zhi-xing; Tong, Xiao-hui; Dai, Hui; Tang, Xiu-zhang; Shan, Yu-sheng

    2007-01-01

    Directly driven targets for inertial confinement fusion (ICF) require laser beams with extremely smooth irradiance profiles to prevent hydrodynamic instabilities that destroy the spherical symmetry of the target during implosion. Such instabilities can break up and mix together the target's wall and fuel material, preventing it from reaching the density and temperature required for fusion ignition [1,2]. Measurements in equation-of-state (EOS) experiments require laser beams with flat-topped profiles to generate uniform shockwaves [3]. Some method of beam smoothing is thus needed. A technique called echelon-free induced spatial incoherence (EFISI) is proposed for producing smooth target beam profiles with large KrF lasers. The idea is basically an image-projection technique that projects the desired time-averaged spatial profile onto the target via the laser system, using partially coherent broadband light. Using this technique, we are developing a beam-smoothing investigation on "Heaven I". At the China Institute of Atomic Energy, a new angular-multiplexing system with a beam-smoothing function has been developed: the total energy is 158 J, the energy stability is 4%, the pulse duration is 25 ns, the effective diameter of the focusing spot is 400 um, the nonuniformity is about 1.6%, and the power density on the target is about 3.7×10^12 W/cm^2. At present, the system has provided steady and smooth laser irradiation for EOS experiments.

  17. Updated Colombian Seismic Hazard Map

    Science.gov (United States)

    Eraso, J.; Arcila, M.; Romero, J.; Dimate, C.; Bermúdez, M. L.; Alvarado, C.

    2013-05-01

    The Colombian seismic hazard map used by the National Building Code (NSR-98) in effect until 2009 was developed in 1996. Since then, the National Seismological Network of Colombia has improved in both coverage and technology, providing fifteen years of additional seismic records. These improvements have allowed a better understanding of the regional geology and tectonics, which, together with seismic activity with destructive effects in Colombia, has motivated the interest in and the need for a new seismic hazard assessment in this country. Taking advantage of new instrumental information sources such as new broadband stations of the National Seismological Network, new historical seismicity data, the availability of standardized global databases, and, in general, advances in models and techniques, a new Colombian seismic hazard map was developed. A PSHA model was applied because it incorporates the effects of all seismic sources that may affect a particular site, handling the uncertainties in the parameters and assumptions made in this kind of study. First, the seismic source geometries and a complete, homogeneous seismic catalog were defined, and the occurrence-rate parameters of each seismic source were calculated, establishing a national seismotectonic model. Several attenuation-distance relationships were selected depending on the type of seismicity considered. The seismic hazard was estimated using the CRISIS2007 software created by the Engineering Institute of the Universidad Nacional Autónoma de México (UNAM, National Autonomous University of Mexico). A uniform grid with 0.1° spacing was used to calculate the peak ground acceleration (PGA) and response spectral values at 0.1, 0.2, 0.3, 0.5, 0.75, 1, 1.5, 2, 2.5 and 3.0 seconds, with return periods of 75, 225, 475, 975 and 2475 years. For each site, a uniform hazard spectrum and exceedance rate curves were calculated. With the results, it is
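The return periods quoted above map directly to exceedance probabilities under the Poisson assumption standard in PSHA; for example, a 475-year return period corresponds to roughly a 10% probability of exceedance in a 50-year exposure. A minimal sketch of that conversion (the exposure time is the conventional 50-year design life, an assumption):

```python
import math

def exceedance_probability(return_period_yr, exposure_yr):
    """Poisson probability of at least one exceedance in exposure_yr,
    given a mean return period of return_period_yr."""
    return 1.0 - math.exp(-exposure_yr / return_period_yr)

p475 = exceedance_probability(475.0, 50.0)    # ~0.10 (10% in 50 yr)
p2475 = exceedance_probability(2475.0, 50.0)  # ~0.02 (2% in 50 yr)
```

This is why hazard maps for 10% and 2% probability of exceedance in 50 years quote return periods of about 475 and 2475 years, respectively.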

  18. A Kirchhoff approach to seismic modeling and prestack depth migration

    Science.gov (United States)

    Liu, Zhen-Yue

    1993-05-01

    The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, one that can handle lateral velocity variation and turning waves. With a little extra computation cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with those of other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method, based on the Kirchhoff integral, that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate traveltimes and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, an upwind scheme is used to calculate traveltimes, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function, so I develop a velocity-smoothing technique based on damped least-squares to aid in obtaining a successful migration.
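The damped least-squares idea behind the velocity-smoothing step can be sketched in one dimension as a Tikhonov problem with a second-difference roughness penalty (an illustration, not the author's actual implementation; the damping value and profile are assumptions):

```python
import numpy as np

def smooth_velocity(v_obs, eps=2.0):
    """Damped least-squares (Tikhonov) smoothing of a 1-D velocity profile.
    Minimizes ||v - v_obs||^2 + eps^2 ||D v||^2, where D is the
    second-difference operator; solved via the normal equations
    (I + eps^2 D^T D) v = v_obs."""
    n = len(v_obs)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete second derivative
    A = np.eye(n) + eps ** 2 * (D.T @ D)
    return np.linalg.solve(A, v_obs)

# A blocky profile with a sharp 1500 m/s jump becomes a gentle ramp:
v = np.array([2000.0] * 10 + [3500.0] * 10)
vs = smooth_velocity(v)
```

Larger `eps` spreads the velocity discontinuity over more samples, which is exactly what keeps the traveltime computation stable.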

  19. Midget Seismic in Sandbox Models

    Science.gov (United States)

    Krawczyk, C. M.; Buddensiek, M. L.; Philipp, J.; Kukowski, N.; Oncken, O.

    2008-12-01

    Analog sandbox simulation has been applied to study geological processes to provide qualitative and quantitative insights into specific geological problems. In nature, the structures that are simulated in those sandbox models are often inferred from seismic data. With the study introduced here, we want to combine analog sandbox simulation techniques with seismic physical modeling of those sandbox models. The long-term objectives of this approach are (1) imaging of seismic and seismological events of actively deforming and static 3D analog models, and (2) assessment of the transferability of the model data to field data in order to improve field data acquisition and interpretation according to the addressed geological problem. To achieve this objective, a new midget-seismic facility for laboratory use was designed and developed, comprising a seismic tank, a PC control unit including piezoelectric transducers, and a positioning system. The first experiments are aimed at studying the wave-field properties of the piezo-transducers in order to investigate their feasibility for seismic profiling. The properties investigated are their directionality and the change of waveform due to their size (5-12 mm) relative to the wavelengths. We then plan to investigate material properties and the effects of wave propagation in an-/isotropic media in physical studies, before we finally start using different seismic imaging and processing techniques on static and actively deforming 3D analog models.

  20. Seismic hazard assessment: Issues and alternatives

    Science.gov (United States)

    Wang, Z.

    2011-01-01

    Seismic hazard and risk are two very important concepts in engineering design and other policy considerations. Although seismic hazard and risk have often been used interchangeably, they are fundamentally different, and seismic risk is the more important of the two in engineering design and other policy considerations. Seismic hazard assessment is an effort by earth scientists to quantify seismic hazard and its associated uncertainty in time and space, and to provide seismic hazard estimates for seismic risk assessment and other applications. Although seismic hazard assessment is chiefly a scientific issue, it deserves special attention because of its significant implications for society. Two approaches, probabilistic seismic hazard analysis (PSHA) and deterministic seismic hazard analysis (DSHA), are commonly used for seismic hazard assessment. Although PSHA has been proclaimed as the best approach for seismic hazard assessment, it is scientifically flawed (i.e., the physics and mathematics that PSHA is based on are not valid). Use of PSHA could lead to either unsafe or overly conservative engineering design or public policy, each of which has dire consequences for society. On the other hand, DSHA is a viable approach for seismic hazard assessment even though it has been labeled as unreliable. The biggest drawback of DSHA is that the temporal characteristics (i.e., earthquake frequency of occurrence and the associated uncertainty) are often neglected. An alternative, seismic hazard analysis (SHA), utilizes earthquake science and statistics directly and provides a seismic hazard estimate that can be readily used for seismic risk assessment and other applications. © 2010 Springer Basel AG.

  1. Seismic Fracture Characterization Methodologies for Enhanced Geothermal Systems

    Energy Technology Data Exchange (ETDEWEB)

    Queen, John H. [Hi-Geophysical, Inc., Ponca, OK (United States)

    2016-05-09

    Executive Summary The overall objective of this work was the development of surface and borehole seismic methodologies using both compressional and shear waves for characterizing faults and fractures in Enhanced Geothermal Systems. We used both surface seismic and vertical seismic profile (VSP) methods. We adapted these methods to the unique conditions encountered in Enhanced Geothermal Systems (EGS) creation. These conditions include geological environments with volcanic cover, highly altered rocks, severe structure, extreme near surface velocity contrasts and lack of distinct velocity contrasts at depth. One of the objectives was the development of methods for identifying more appropriate seismic acquisition parameters for overcoming problems associated with these geological factors. Because temperatures up to 300º C are often encountered in these systems, another objective was the testing of VSP borehole tools capable of operating at depths in excess of 1,000 m and at temperatures in excess of 200º C. A final objective was the development of new processing and interpretation techniques based on scattering and time-frequency analysis, as well as the application of modern seismic migration imaging algorithms to seismic data acquired over geothermal areas. The use of surface seismic reflection data at Brady's Hot Springs was found useful in building a geological model, but only when combined with other extensive geological and geophysical data. The use of fine source and geophone spacing was critical in producing useful images. The surface seismic reflection data gave no information about the internal structure (extent, thickness and filling) of faults and fractures, and modeling suggests that they are unlikely to do so. Time-frequency analysis was applied to these data, but was not found to be significantly useful in their interpretation. Modeling does indicate that VSP and other seismic methods with sensors located at depth in wells will be the most

  2. Seismic modeling of carbonate outcrops

    Energy Technology Data Exchange (ETDEWEB)

    Stafleu, J.; Schlager, W.; Campbell, E.; Everts, A.J. (Vrije Universiteit, Amsterdam (Netherlands))

    1993-09-01

    Traditionally, seismic modeling has concentrated on one-dimensional borehole modeling and two-dimensional forward modeling of basic structural-stratigraphic schemes, which are directly compared with real seismic data. Two-dimensional seismic models based on outcrop observations may aid in bridging the gap between the detail of the outcrop and the low resolution of seismic lines. Examples include the Dolomites (north Italy), the High Atlas (Morocco), the Vercors (southeast France) and Last Chance Canyon (New Mexico). The seismic models generally are constructed using the following procedure: (1) construction of a detailed lithological model based on direct outcrop observations; (2) division of the lithological model into lithostratigraphic units, using master bedding planes and important facies transitions as boundaries; (3) assignment of petrophysical properties to these lithostratigraphic units; (4) computation of time sections of reflectivity, using different modeling techniques; and (5) convolution with source wavelets of different frequencies. The lithological detail modeled in the case studies led to some striking results, particularly the discovery of pseudo-unconformities: unconformities in seismics that correspond to rapid changes of dip and facies in outcrop. None of the outcrop geometries studied was correctly portrayed seismically at 25 Hz frequency; however, in some instances the true relationship would emerge gradually at frequencies of 50 to 100 Hz. These results demonstrate that detailed, outcrop-derived seismic models can reveal which stratigraphic relationships and features are likely to be resolved under ideal or less ideal conditions, and what pitfalls may befall the interpreter of real seismic data.
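Steps (4) and (5) of the procedure, computing reflectivity and convolving it with wavelets of different frequencies, can be sketched for a single trace. The Ricker wavelet and all parameter values below are illustrative assumptions, not taken from the study; the point is that two closely spaced interfaces merge at 25 Hz but are resolved at 100 Hz.

```python
import numpy as np

def ricker(f_hz, dt, length=0.128):
    """Zero-phase Ricker wavelet of peak frequency f_hz (assumed source wavelet)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(reflectivity, f_hz, dt):
    """Convolve a reflectivity series with a source wavelet (step 5)."""
    return np.convolve(reflectivity, ricker(f_hz, dt), mode="same")

dt = 0.001                      # 1 ms sampling (assumed)
r = np.zeros(500)               # reflectivity in two-way time
r[200], r[210] = 1.0, -0.8      # two interfaces only 10 ms apart
trace25 = synthetic_trace(r, 25.0, dt)    # the two events blur together
trace100 = synthetic_trace(r, 100.0, dt)  # both events stand out separately
```

Sweeping `f_hz` over 25, 50 and 100 Hz reproduces, in miniature, the frequency dependence of resolvability that the abstract describes.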

  3. Integrated system for seismic evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, J.; Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1989-01-01

    This paper describes the various features of the Seismic Module of the CARES system (Computer Analysis for Rapid Evaluation of Structures). This system was developed by Brookhaven National Laboratory (BNL) for the US Nuclear Regulatory Commission to perform rapid evaluations of the structural behavior and capability of nuclear power plant facilities. CARES is structured in a modular format, with each module performing a specific type of analysis, i.e., static or dynamic, linear or nonlinear, etc. This paper describes the features of the Seismic Module in particular. The development of the Seismic Module of the CARES system is based on an approach which incorporates all major aspects of seismic analysis currently employed by the industry into an integrated system that allows for interactively carrying out computations of structural response to seismic motions. The code operates on a PC and has multi-graphics capabilities. It has been designed with user-friendly features and allows for interactive manipulation of the various analysis phases during the seismic design process. The capabilities of the Seismic Module include (a) generation of artificial time histories compatible with given design ground response spectra, (b) development of Power Spectral Density (PSD) functions associated with the seismic input, (c) deconvolution analysis using vertically propagating shear waves through a given soil profile, and (d) development of in-structure response spectra or corresponding PSDs. These types of analyses can also be performed individually using available computer codes such as FLUSH, SAP, etc.; the uniqueness of CARES, however, lies in its ability to perform all required phases of the seismic analysis in an integrated manner. 5 refs., 6 figs.

  4. Vibration reduction in beam bridge under moving loads using nonlinear smooth and discontinuous oscillator

    Directory of Open Access Journals (Sweden)

    Ruilan Tian

    2016-06-01

    The coupled system of a smooth and discontinuous (SD) absorber and a beam bridge under moving loads is constructed in order to assess the effectiveness of the SD absorber. It is worth pointing out that the coupled system contains an irrational restoring force, which is a barrier for conventional nonlinear techniques. Hence, the harmonic balance method and Fourier expansion are used to obtain approximate solutions of the system, and generalized complete elliptic integrals of the first and second kind are introduced. Furthermore, using a power-flow approach, the performance of the SD absorber in vibration reduction is estimated through the input energy, the dissipated energy, and the damping efficiency. Interestingly, the efficiency of vibration reduction can be optimized by tuning only the smoothness parameter. The SD absorber can therefore adapt itself to effectively reduce the amplitude of vibration of the beam bridge, which provides insight into the applications of the SD oscillator in engineering and into the power-flow characteristics of nonlinear systems.
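In the SD-oscillator literature, the irrational restoring force mentioned above is usually written f(x) = x(1 - 1/sqrt(x^2 + alpha^2)), with alpha the smoothness parameter; alpha = 0 gives the discontinuous limit. The sketch below assumes that standard form (it is not taken from this paper) and checks the non-trivial equilibria at x = ±sqrt(1 - alpha^2):

```python
import math

def sd_restoring_force(x, alpha):
    """Restoring force of the smooth-and-discontinuous (SD) oscillator:
    f(x) = x * (1 - 1/sqrt(x^2 + alpha^2)).
    alpha >= 0 is the smoothness parameter; alpha = 0 is the
    discontinuous limit f(x) = x - sign(x)."""
    return x * (1.0 - 1.0 / math.sqrt(x * x + alpha * alpha))

# Non-trivial equilibria sit at x = ±sqrt(1 - alpha^2) for 0 <= alpha < 1:
alpha = 0.6
x_eq = math.sqrt(1.0 - alpha * alpha)   # 0.8
f_eq = sd_restoring_force(x_eq, alpha)  # vanishes at the equilibrium
```

Varying `alpha` between 0 and 1 morphs the force continuously from a discontinuous snap-through characteristic to a smooth double-well one, which is the tuning knob the abstract refers to.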

  5. Scenario for a Short-Term Probabilistic Seismic Hazard Assessment (PSHA) in Chiayi, Taiwan

    Directory of Open Access Journals (Sweden)

    Chung-Han Chan

    2013-01-01

    Using seismic activity and the Meishan earthquake sequence that occurred from 1904 to 1906, a scenario for short-term probabilistic seismic hazard in the Chiayi region of Taiwan is assessed. The long-term earthquake occurrence rate in Taiwan was evaluated using a smoothing kernel, and the highest seismicity rate was calculated around the Chiayi region. To consider earthquake interactions, the rate-and-state friction model was introduced to estimate the evolution of the seismicity rate due to Coulomb stress changes. The stress changes imparted by the 1904 Touliu earthquake near the 1906 Meishan and Yangshuigang epicenters were larger than the magnitude of tidal triggering. As for the impact of the Meishan earthquake, the region close to the Yangshuigang epicenter experienced a stress increase of +0.75 bar. The results indicate significant interaction among the three damaging events. Considering path and site effects using ground motion prediction equations, the probabilistic seismic hazard was assessed in the form of a hazard evolution and a hazard map, showing a significant elevation in hazard following the three earthquakes of the sequence. The results illustrate a possible scenario for seismic hazards in the Chiayi region that may take place repeatedly in the future. Such a scenario provides essential information for earthquake preparation, devastation estimates, emergency sheltering, utility restoration, and structure reconstruction.
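The rate-and-state seismicity-rate evolution used above is, in the standard Dieterich (1994) formulation, determined by the stress step and the parameter A·sigma. The sketch below assumes that formulation; the +0.75 bar step comes from the abstract, while the A·sigma value and aftershock relaxation time t_a are illustrative assumptions.

```python
import math

def seismicity_rate(t, dtau, a_sigma, t_a, r0=1.0):
    """Dieterich (1994) seismicity rate at time t after a sudden Coulomb
    stress step dtau (same units as a_sigma), relative to background r0:
    R(t) = r0 / (1 + (exp(-dtau / (A*sigma)) - 1) * exp(-t / t_a))."""
    gamma = (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r0 / (1.0 + gamma)

dtau = 0.75      # bar, stress increase near the Yangshuigang epicenter
a_sigma = 0.25   # bar, assumed A*sigma
t_a = 10.0       # yr, assumed aftershock relaxation time
r_just_after = seismicity_rate(0.0, dtau, a_sigma, t_a)   # ~e^3 ≈ 20x background
r_late = seismicity_rate(100.0, dtau, a_sigma, t_a)       # relaxed back to ~1x
```

A positive stress step produces an immediate jump of the rate by exp(dtau / A·sigma), which then decays back to the background level over roughly t_a, matching the "hazard evolution" behaviour described in the abstract.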

  6. Seismic calm predictors of rockburst

    Science.gov (United States)

    Zmushko, Tatjana; Turuntaev, Sergey; Kulikov, Vladimir

    2013-04-01

    The method of "seismic calm" is widely used for forecasting strong natural earthquakes (Sobolev G.A., Ponomarev A.V., 2003). "Seismic calm" means that during some time period before the main earthquake, smaller events (with energies several orders of magnitude below that of the main earthquake) do not occur. In the present paper, the applicability of the seismic-calm method to rockburst forecasting is considered. Three deposits with mining-induced seismicity are analyzed: the Tashtagol iron deposit (Altai, Russia) and the Vorkuta (North Ural, Russia) and Barentsburg (Spitsbergen, Norway) coalmines. Local seismic monitoring networks are installed at each of them. The catalogues of seismic events were processed and the strong events (rockbursts) were studied (Vorkuta M=2.3; Barentsburg M=1.8; Tashtagol M=1.9-2.2). All catalogues cover at least two years (Vorkuta 2008-2011, Barentsburg 2011-2012, Tashtagol 2002-2012). It was found that the number of seismic events with magnitudes M=0.5-1 decreased in the month before the main strong event at the Vorkuta coalmines. This event was not directly related to coal mining; its epicenter was located aside from the area of coal mining. In the Barentsburg mine, the rockburst was not as strong as in Vorkuta; the number of events with magnitude M=0.5 decreased slightly before the rockburst, but not as obviously as in the Vorkuta case. Seismic events with high energies occur often at the Tashtagol iron deposit, where the mining method differs from coal-deposit mining: at the coalmines, the mining combine runs from edge to edge of the wall, cutting off the coal, whereas the iron deposit is developed by block blasting. Not all rockbursts occur immediately after blasting, so the problem of rockburst prediction is important for mining safety. To find rockburst precursors it is necessary to separate the events that occurred due to block blasting from the seismic events due to relocation of stresses in
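The seismic-calm criterion, a drop of the small-event rate before the main event, can be sketched as a simple sliding-window rate comparison. This is an illustration only, not the monitoring networks' actual algorithm; the window lengths and the 50% drop threshold are assumptions.

```python
def rate_drop(event_times, t_now, window=30.0, baseline=365.0):
    """Flag a 'seismic calm': compare the small-event rate in the last
    `window` days with the average rate over the preceding `baseline` days.
    Returns (recent rate, baseline rate, calm flag)."""
    recent = sum(1 for t in event_times if t_now - window < t <= t_now)
    earlier = sum(1 for t in event_times
                  if t_now - baseline - window < t <= t_now - window)
    r_recent = recent / window
    r_base = earlier / baseline
    return r_recent, r_base, (r_base > 0 and r_recent < 0.5 * r_base)

# Steady one event per day for a year, then total quiet for the final month:
events = list(range(365))           # event times in days
r_recent, r_base, calm = rate_drop(events, t_now=395.0)
```

In a real catalogue the same comparison would be restricted to a magnitude band (e.g. M = 0.5-1, as in the Vorkuta observation) and to events not attributable to blasting.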

  7. Linearized inversion of multiple scattering seismic energy

    Science.gov (United States)

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad

    2014-05-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy, because imaging seismic data under the single-scattering assumption does not locate multiple-bounce events in their actual subsurface positions. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resulting image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model, heavily masked by migration artifacts and the source-wavelet fingerprint that needs to be properly deconvolved. Hence, we propose a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-squares image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. We then pose the problem of imaging double-scattering energy as a least-squares minimization problem that requires solving a normal equation of the form GᵀGv = Gᵀd, where G is a linearized forward modeling operator that predicts double-scattered seismic data and Gᵀ is the linearized adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system; we used a quasi-Newton optimization technique to find the least-squares minimizer.
In this approach, an estimate of the Hessian matrix that contains
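Solving a normal equation of the form GᵀGv = Gᵀd with a gradient-based method can be sketched with conjugate gradients on the normal equations, touching G only through forward (G·p) and adjoint (Gᵀ·r) products, just as one would with migration operators. This is a minimal stand-in using a small explicit matrix; the abstract's actual operators and its quasi-Newton choice are not reproduced here.

```python
import numpy as np

def cg_normal_equations(G, d, n_iter=50, tol=1e-10):
    """Solve G^T G v = G^T d by conjugate gradients (CGNR), using G only
    through forward and adjoint matrix-vector products."""
    v = np.zeros(G.shape[1])
    r = G.T @ (d - G @ v)        # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Gp = G @ p
        alpha = rs / (Gp @ Gp)   # p^T (G^T G) p = ||G p||^2
        v += alpha * p
        r -= alpha * (G.T @ Gp)
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

# Small synthetic check: recover v_true from consistent data d = G v_true.
rng = np.random.default_rng(0)
G = rng.standard_normal((30, 10))
v_true = rng.standard_normal(10)
v_est = cg_normal_equations(G, G @ v_true)
```

Because only the `G @ p` and `G.T @ Gp` products are needed, the same loop applies unchanged when G is a matrix-free forward-modeling operator and Gᵀ its adjoint migration.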

  8. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the "Autojuggie" showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  9. Advanced Seismic While Drilling System

    Energy Technology Data Exchange (ETDEWEB)

    Robert Radtke; John Fontenot; David Glowka; Robert Stokes; Jeffery Sutherland; Ron Evans; Jim Musser

    2008-06-30

    A breakthrough has been discovered for controlling seismic sources to generate selectable low frequencies. Conventional seismic sources, including sparkers, rotary mechanical sources, hydraulic sources, air guns, and explosives, by their very nature produce high frequencies, which runs counter to the need for long signal transmission through rock. The patent-pending SeismicPULSER™ methodology has been developed for controlling otherwise high-frequency seismic sources to generate selectable low-frequency peak spectra applicable to many seismic applications. Specifically, we have demonstrated the application of a low-frequency sparker source which can be incorporated into a drill bit for Drill Bit Seismic While Drilling (SWD). To create the methodology of a controllable low-frequency sparker seismic source, it was necessary to learn how to maximize sparker efficiencies to couple to, and transmit through, rock by studying sparker designs and mechanisms for (a) coupling the sparker-generated gas-bubble expansion and contraction to the rock, (b) the effects of fluid properties and dynamics, (c) linear and nonlinear acoustics, and (d) imparted force directionality. After extensive seismic modeling, the design of high-efficiency sparkers, laboratory high-frequency sparker testing, and field tests were performed at the University of Texas Devine seismic test site. The conclusion of the field test was that extremely high power levels would be required to achieve the range required for deep (15,000+ ft), high-temperature, high-pressure (HTHP) wells. Thereafter, more modeling and laboratory testing led to the discovery of a method to control a sparker that could generate the low frequencies required for deep wells. The low-frequency sparker was successfully tested at the Department of Energy Rocky Mountain Oilfield Test Center (DOE RMOTC) field test site in Casper, Wyoming. An 8-in diameter by 26-ft long SeismicPULSER™ drill string tool was designed and manufactured by TII

  10. Model emulates human smooth pursuit system producing zero-latency target tracking.

    Science.gov (United States)

    Bahill, A T; McDonald, J D

    1983-01-01

    Humans can overcome the 150 ms time delay of the smooth pursuit eye-movement system and track smoothly moving visual targets with zero latency. Our target-selective adaptive control model can also overcome an inherent time delay and produce zero-latency tracking. No other model or man-made system can do this. Our model is physically realizable and physiologically realistic. The technique used in our model should be useful for analyzing other time-delay systems, such as man-machine systems and robots.
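The delay-compensation idea, predicting the target ahead by the internal latency, can be sketched as a constant-velocity extrapolation (a toy illustration, not the authors' target-selective adaptive control model; all parameter values are assumptions):

```python
def predict_position(p_delayed, p_prev_delayed, dt, delay):
    """Extrapolate a delayed target measurement across the internal delay
    using a finite-difference velocity estimate."""
    v_est = (p_delayed - p_prev_delayed) / dt
    return p_delayed + v_est * delay

# Constant-velocity target x(t) = 3 t, sampled through a 150 ms delay.
# The extrapolated estimate equals the true current position: zero latency.
dt, delay = 0.01, 0.15
t = 1.0
x = lambda s: 3.0 * s
est = predict_position(x(t - delay), x(t - delay - dt), dt, delay)  # ≈ x(t) = 3.0
```

For constant-velocity motion the prediction is exact; for accelerating targets the residual error is what an adaptive scheme like the authors' would continually correct.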

  11. Research advances on engineering structural seismic safety of nuclear power plants

    Institute of Scientific and Technical Information of China (English)

    孔宪京; 林皋

    2013-01-01

    Nuclear power is one of the energy resources that China will vigorously develop for a long time to come, and guaranteeing nuclear power safety is key to the smooth implementation of nuclear power plant construction and to safe operation. However, China's vast territory, large differences in geological conditions, and complex offshore natural conditions, together with the wide extent, high intensity, and high frequency of seismic activity in China, mean that nuclear power structures based on current standardized design methods face many problems during construction. Moreover, the lessons of the Fukushima nuclear accident caused by the destructive 2011 Japan earthquake raise new questions for the seismic safety of nuclear power engineering. Drawing on more than a decade of engineering practice at the Dalian University of Technology in resolving key problems in the structural seismic safety of Chinese nuclear power projects, this paper reviews progress made in the study of "functional failure mechanisms and seismic safety evaluation of nuclear power plant structures under earthquake action", mainly including research on the seismic suitability of nuclear island foundations and on seismic disaster prevention for safety-related nuclear island structures.

  12. Reevaluation of the Seismicity and seismic hazards of Northeastern Libya

    Science.gov (United States)

    Ben Suleman, abdunnur; Aousetta, Fawzi

    2014-05-01

    Libya, located at the northern margin of the African continent, underwent many episodes of orogenic activity that affected and shaped the geological setting of the country. This study represents a detailed investigation of the seismicity and its implications for earthquake hazards in Northeastern Libya. At the end of 2005, the Libyan National Seismological Network started functioning with 15 stations, and the seismicity of the area under investigation was reevaluated using data recorded by the recently established network. The Al-Maraj earthquake of May 22nd, 2005 was analyzed; it was located in a known seismically active area, the site of the well-known 1963 earthquake that killed over 200 people. Earthquakes were plotted, and the resulting maps were interpreted and discussed. The level of seismic activity is higher in some areas, such as the city of Al-Maraj, and the offshore areas north of Al-Maraj appear to have higher seismic activity. It is highly recommended that the recent earthquake activity be considered in seismic hazard assessments for the northeastern part of Libya.

  13. Reassessment of probabilistic seismic hazard in the Marmara region

    Science.gov (United States)

    Kalkan, E.; Gulkan, Polat; Yilmaz, N.; Celebi, M.

    2009-01-01

    In 1999, the eastern coastline of the Marmara region (Turkey) witnessed increased seismic activity on the North Anatolian fault (NAF) system with two damaging earthquakes (M 7.4 Kocaeli and M 7.2 Düzce) that occurred almost three months apart. These events have reduced stress on the western segment of the NAF where it continues under the Marmara Sea. The undersea fault segments have recently been explored using bathymetric and reflection surveys, and these findings have helped scientists to understand the seismotectonic environment of the Marmara basin, which has remained a perplexing tectonic domain. On the basis of the newly collected data, the seismic hazard of the Marmara region is reassessed using a probabilistic approach. Two different earthquake source models, (1) a smoothed-gridded seismicity model and (2) a fault model with alternate magnitude-frequency relations (Gutenberg-Richter and characteristic), were used with local and imported ground-motion prediction equations. Regional exposure is computed and quantified on a set of hazard maps that provide peak horizontal ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 sec for a uniform firm-rock site condition (760 m/sec average shear-wave velocity in the upper 30 m). These acceleration levels were computed for ground motions having 2% and 10% probabilities of exceedance in 50 yr, corresponding to return periods of about 2475 and 475 yr, respectively. The maximum PGA computed (at rock sites) is 1.5g along the fault segments of the NAF zone extending into the Marmara Sea. The new maps generally show a 10% to 15% increase in PGA and in 0.2- and 1.0-sec spectral acceleration values across much of Marmara compared with previous regional hazard maps. Hazard curves and smooth design spectra for three site conditions, rock, soil, and soft soil, are provided for the Istanbul metropolitan area as possible tools in future risk estimates.
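The Gutenberg-Richter alternative mentioned above links a magnitude-frequency law to the Poisson exceedance probabilities behind the 475- and 2475-year maps. A minimal sketch, with the a- and b-values chosen purely for illustration (they are not the study's regional values):

```python
import math

def gr_annual_rate(m, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= m from a Gutenberg-Richter
    relation log10 N(>=m) = a - b*m (a and b are assumed values)."""
    return 10.0 ** (a - b * m)

def poisson_prob(m, t_yr, a=4.0, b=1.0):
    """Probability of at least one event with magnitude >= m in t_yr years,
    assuming Poisson occurrence at the Gutenberg-Richter rate."""
    return 1.0 - math.exp(-gr_annual_rate(m, a, b) * t_yr)

rate_m7 = gr_annual_rate(7.0)    # 0.001 events/yr, i.e. a 1000-yr recurrence
p50 = poisson_prob(7.0, 50.0)    # ~4.9% chance of an M>=7 in 50 years
```

A characteristic-earthquake model would instead concentrate the rate near a preferred magnitude for each fault segment, which is why the two relations give different hazard on the NAF segments.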

  14. Earthquake Activity - SEISMIC_DATA_IN: Seismic Refraction Data for Indiana (Indiana Geological Survey, Point Shapefile)

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — SEISMIC_DATA_IN is a point shapefile created from a shapefile named SEISMIC_DATA, which was derived from a Microsoft Excel spreadsheet named SEISMIC_DECODED. The...

  15. Seismic Creep Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Seismic creep is the constant or periodic movement on a fault as contrasted with the sudden rupture associated with an earthquake. It is a usually slow deformation...

  16. Static behaviour of induced seismicity

    CERN Document Server

    Mignan, Arnaud

    2015-01-01

    The standard paradigm to describe seismicity induced by fluid injection is to apply nonlinear diffusion dynamics in a poroelastic medium. I show that the spatiotemporal behaviour and rate evolution of induced seismicity can, instead, be expressed by geometric operations on a static stress field produced by volume change at depth. I obtain laws similar in form to the ones derived from poroelasticity while requiring a lower description length. Although fluid flow is known to occur in the ground, it is not pertinent to the behaviour of induced seismicity. The proposed model is equivalent to the static stress model for tectonic foreshocks generated by the Non-Critical Precursory Accelerating Seismicity Theory. This study hence verifies the explanatory power of this theory outside of its original scope.

  17. Worldwide Marine Seismic Reflection Profiles

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a large volume of both Analog and Digital seismic reflection data. Currently only a limited number of lines are available online. Digital data include...

  18. Visualization of volumetric seismic data

    Science.gov (United States)

    Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk

    2015-04-01

    Mostly driven by demands of high quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods proved to be valuable for the analysis of 3D seismic data cubes - especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing a lot of diffractions. Without further preprocessing these geological structures are often hidden behind the noise in the data. In this PICO presentation we will present a workflow consisting of data processing steps, which enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo, which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable an easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created which support the spatial understanding of the data.

  19. Seismic properties of polyphase rocks

    Science.gov (United States)

    Wang, Qin

    2005-11-01

    Knowledge about the seismic properties of polyphase rocks is fundamental for interpreting seismic refraction and reflection data and for establishing lithospheric structure and composition models. This study aims to obtain more precise relationships between seismic properties of rocks and controlling factors (e.g., pressure, temperature, mineralogical and chemical compositions, microstructure of rocks), particularly for those rocks imprinted by ultrahigh-pressure (UHP) metamorphism. These relationships will be very helpful to extrapolate calculated and measured seismic properties of rocks to depths of interest and to engender interpretations relevant to petrological composition and tectonic process. An Internet Database of Rock Seismic Properties (DRSP) was set up and a Handbook of Seismic Properties of Minerals, Rocks and Ores was published. They comprise almost all data available in the literature during the past 4 decades and can serve as a convenient, comprehensive and concise information source on physical properties of rocks to the earth sciences and geotechnical communities. Statistical results of the DRSP reveal the dependence of seismic properties on density, porosity, humidity, and mineralogical and chemical compositions. Using 16 different averaging methods, we calculated P-wave velocities of 696 dry samples according to the volume fraction and elastic constants of each constituent mineral. Although only 22 common minerals were taken into account in the computation, the calculated P-wave velocities agree well with laboratory values measured at about 300 MPa, where most microcracks are closed and the mean Vp of a polymineralic rock is exclusively controlled by its modal composition. However, none of these mixture rules can simultaneously fit measured P-wave velocities for all lithologies or at all pressures. Therefore, more prudence is required in selecting an appropriate mixture rule for calculation of seismic velocities of different rock types.
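Among the 16 averaging methods mentioned above, the classical Voigt and Reuss bounds and their Hill (arithmetic mean) average are the most common; a sketch of the calculation for a two-phase aggregate, where the mineral moduli and densities are rough illustrative values and not data from the DRSP:

```python
def vrh_vp(fractions, moduli, densities):
    """P-wave velocity (km/s) of a polymineralic aggregate from volume
    fractions, P-wave moduli M = K + 4G/3 (GPa) and densities (g/cm^3)
    of each phase, using the Voigt-Reuss-Hill average."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    m_voigt = sum(f * m for f, m in zip(fractions, moduli))        # upper bound
    m_reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))  # lower bound
    m_hill = 0.5 * (m_voigt + m_reuss)                             # Hill average
    rho = sum(f * d for f, d in zip(fractions, densities))         # mixture density
    # Vp = sqrt(M / rho); GPa / (g/cm^3) conveniently yields (km/s)^2
    return (m_hill / rho) ** 0.5

# Hypothetical 60/40 two-mineral mix with textbook-style moduli and densities
print(round(vrh_vp([0.6, 0.4], [97.0, 110.0], [2.65, 2.69]), 2))  # ~6.2 km/s
```

The Reuss (iso-stress) and Voigt (iso-strain) values bracket all physically admissible mixtures, which is why no single rule fits every lithology, as the abstract notes.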

  20. Seismic and electrical work at rivers and lakes of Siberia

    Science.gov (United States)

    Seleznev, V. S.; Soloviev, V. M.; Liseikin, A. V.; Sigonin, P.

    2013-05-01

    In West and East Siberia there are a great many rivers and large lakes, and for oil and gas exploration these areas hold much promise. It is very difficult to carry out seismic work in these regions, where temperatures fall below minus 40 degrees centigrade: ways must be paved for technical equipment and, in some cases, shooting operations organized, which harms the ecology of the investigated regions. It is well known that at seas and large reservoirs seismic work is carried out using air guns as sources and floating or ground cables as receivers. There is particular interest in the joint processing and interpretation of seismic survey and electrical data, so such surveys had to be mastered on rivers, and a special combined technology for river seismic and electrical work was developed. The Geophysical Survey SB RAS has carried out seismic and electrical work at rivers and reservoirs of Siberia for more than 20 years. We have had to work in conditions where the depth of a reservoir was more than 10 meters or less than 1 meter. It was necessary to develop or adapt floating equipment, to create air guns operating at shallow depths ("Malysh", "Sibiryak"), and to create new recording equipment (seismic and electrical variants of the "Baikal" system) for carrying out work in such conditions. Results of the seismic research carried out at Lake Baikal and Lake Teletskoe are presented. For the first time it was determined that the depth of the sedimentary cover under Lake Baikal exceeds 14 km. At the request of government and private companies we carried out river work with the common-depth-point method on rivers such as the Ob, Volga, Enisey, Vakh, Lena, Kirenga and Nizhnya Tunguska. Comparison of results obtained on river profiles with those of land profiles crossing the rivers showed that in difficult surface conditions (the central part of the River Lena, the Nizhnya Tunguska) river seismic sections are better than land sections. This is connected with the fact that

  1. Smoothing methods in biometry: a historic review

    Directory of Open Access Journals (Sweden)

    Schimek, Michael G.

    2005-06-01

    Full Text Available In Germany, nonparametric smoothing methods found their way into statistics around 25 years ago, and with some delay also into biometry. In the early 1980s there was what one might call a boom in theoretical and, soon after, also in computational statistics. The focus was on univariate nonparametric methods for density and curve estimation. For biometry, however, smoothing methods became really interesting in their multivariate version. This 'change of dimensionality' is still raising open methodological questions. No wonder that the simplifying paradigm of additive regression, realized in generalized additive models (GAMs), initiated the success story of smoothing techniques starting in the early 1990s. In parallel there have been new algorithms and important software developments, primarily in the statistical programming languages S and R. Recent developments of smoothing techniques can be found in survival analysis, longitudinal analysis, mixed models and functional data analysis, partly integrating Bayesian concepts. Entirely new are smoothing-related statistical methods in bioinformatics. In this article we aim not only at a general historical overview but also try to sketch activities in the German-speaking world. Moreover, the current situation is critically examined. Finally, a large number of relevant references are given.

  2. Newberry Seismic Deployment Fieldwork Report

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J; Templeton, D C

    2012-03-21

    This report summarizes the seismic deployment of Lawrence Livermore National Laboratory (LLNL) Geotech GS-13 short-period seismometers at the Newberry Enhanced Geothermal System (EGS) Demonstration site located in Central Oregon. This Department of Energy (DOE) demonstration project is managed by AltaRock Energy Inc. AltaRock Energy had previously deployed Geospace GS-11D geophones at the Newberry EGS Demonstration site; however, the quality of the seismic data was somewhat low. The purpose of the LLNL deployment was to install more sensitive sensors which would record higher quality seismic data for use in future seismic studies, such as ambient noise correlation, matched field processing earthquake detection studies, and general EGS microearthquake studies. For the LLNL deployment, seven three-component seismic stations were installed around the proposed AltaRock Energy stimulation well. The LLNL seismic sensors were connected to AltaRock Energy Güralp CMG-DM24 digitizers, which are powered by AltaRock Energy solar panels and batteries. The deployment took four days in two phases. In phase I, the sites were identified, a cavity approximately 3 feet deep was dug and a flat concrete pad oriented to true North was made for each site. In phase II, we installed three single-component GS-13 seismometers at each site, quality controlled the data to ensure that each station was recording data properly, and filled in each cavity with native soil.

  3. Seismicity and seismotectonics of Libya

    Science.gov (United States)

    Ben Suleman, Abdunnur

    2015-04-01

    Libya, located at the central Mediterranean margin of the African shield, underwent many episodes of orogenic activity that shaped its geological setting. The present-day deformation of Libya is the result of the Eurasia-Africa continental collision. The tectonic evolution of Libya has yielded a complex crustal structure composed of a series of basins and uplifts. This study aims to explain in detail the seismicity and seismotectonics of Libya using new data recorded by the recently established Libyan National Seismograph Network (LNSN), incorporating other available geophysical and geological information. Detailed investigations indicate that Libya has experienced earthquakes of varying magnitudes. The seismicity shows dominant trends, with most of the activity concentrated along the northern coastal areas; four major clusters of seismicity are quite noticeable. Fault plane solutions were estimated for 20 earthquakes recorded by the LNSN in northwestern and northeastern Libya. The results suggest that normal faulting was dominant in the westernmost part of Libya and strike-slip faulting in the northern-central part, while in the northeastern part of the country dip-slip faulting was more prevalent.

  4. Seismic stratigraphy of the Bahamas

    Energy Technology Data Exchange (ETDEWEB)

    Ladd, J.W.; Sheridan, R.E.

    1987-06-01

    Seismic reflection profiles from the Straits of Florida, Northwest Providence Channel, Tongue of the Ocean, and Exuma Sound reveal a seismic stratigraphy characterized by a series of prograding Upper Cretaceous and Tertiary seismic sequences with seismic velocities generally less than 4 km/sec overlying a Lower Cretaceous section of low-amplitude reflections which are more nearly horizontal than the overlying prograding clinoforms and have seismic velocities greater than 5 km/sec. The prograding units are detrital shallow-water carbonates shed from nearby carbonate banks into deep intrabank basins that were established in the Late Cretaceous. The Lower Cretaceous units are probably shallow-water carbonate banks that were drowned in the middle Cretaceous but which, during the Early Cretaceous, extended from Florida throughout the Bahamas region. The seismic reflection profiles reveal a sharp angular unconformity at 5-sec two-way traveltime in northwest Tongue of the Ocean, suggesting a rift-drift unconformity and deposition on thinned continental crust. No such unconformity is seen in central and southeast Tongue of the Ocean or in Exuma Sound, suggesting that these areas are built on oceanic crust.

  5. Seismic risk perception in Italy

    Science.gov (United States)

    Crescimbene, Massimo; La Longa, Federica; Camassi, Romano; Pino, Nicola Alessandro; Peruzza, Laura

    2014-05-01

    Risk perception is a fundamental element in the definition and adoption of preventive countermeasures. In order to develop effective information and risk communication strategies, the perception of risks and its influencing factors should be known. This paper presents results of a survey on seismic risk perception in Italy conducted from January 2013 to the present. The research design combines a psychometric and a cultural-theoretic approach. More than 7,000 online tests have been completed. The data collected show that seismic risk perception in Italy is strongly underestimated; 86 out of 100 Italian citizens living in the most dangerous zone (namely Zone 1) do not have a correct perception of seismic hazard. From these observations we deem that extremely urgent measures are required in Italy to communicate seismic risk effectively. Finally, the research presents a comparison on seismic risk perception between a group involved in campaigns of information and education on seismic risk and a control group.

  6. Seismicity of the Jalisco Block

    Science.gov (United States)

    Nunez-Cornu, F. J.; Rutz, M.; Camarena-Garcia, M.; Trejo-Gomez, E.; Reyes-Davila, G.; Suarez-Plascencia, C.

    2002-12-01

    In April 2002, the stations of the first phase of the Jalisco Telemetric Network, located in the northwest of the Jalisco Block and in the area of Volcan de Fuego (Colima Volcano), began to transmit; in June, four additional MarsLite portable stations were deployed in the Bahia de Banderas area, and by the end of August one more portable station at Ceboruco Volcano. The data from these stations, jointly with the data from RESCO (Colima Telemetric Network), give us the minimum seismic station coverage to begin studying, in a systematic and permanent way, the seismicity of this very complex tectonic region. A preliminary analysis of seismicity based on the events registered by the networks using a shutter algorithm confirms several important features proposed by microseismicity studies carried out between 1996 and 1998. A high level of seismicity inside and below the Rivera plate is observed; this fact suggests a very complex stress pattern acting on this plate. Shallow seismicity south and east of Bahia de Banderas also suggests a complex stress pattern in this region of the Jalisco Block. Events at more than 30 km depth are located under the mouth of the bay and in front of it; a feature denominated the Banderas Boundary marks the change of the seismic regime north of this latitude (20.75°N), although some shallow events were located in the Nayarit region.

  7. 5 years of continuous seismic monitoring of snowmelt cycles in a Pyrenean valley

    Science.gov (United States)

    Diaz, Jordi; Sánchez-Pastor, Pilar; Gallart, Josep

    2016-04-01

    In recent years the analysis of background seismic noise variations in the proximity of river channels has proved to be a useful tool to monitor river flow, even for modest discharges. We focus here on the application of this methodology to study the snowmelt cycle in a Pyrenean valley during the last 5 years, using data from the seismic station located inside the Canfranc Underground Laboratory (Central Pyrenees). Diaz et al. (2014) first identified in the seismic data the signature of river flow increases associated with snowmelt episodes in the catchment area of the Aragon River, based on the marked correlation between the seismic energy variations in the 2-8 Hz frequency band and the estimated variations in water resources from snowfall. The analysis of seismic data during the snowmelt periods reveals a clear 24 h cycle, with energy increasing from about 14:00 GMT, remaining at a relatively high level for 12 hours and then smoothly vanishing. The spectrogram reveals richer information, as clear variations in the frequency content can be detected during the time intervals in which the amplitude of the seismic signal remains constant. The data available so far allow comparison of the evolution of snowmelt over five seasons with very different hydrological behavior. The 2011 and 2012 seasons were dry, with snow volumes 30-50% below the average values, while the 2013, 2014 and in particular the 2015 seasons were largely above the mean. Those variations are reflected in the seismic data, which make it possible to monitor the timing of the main snowmelt stages for each season and to estimate the intensity of the different snowmelt episodes. Therefore, seismic data can be useful for long-term monitoring of snowmelt in Alpine-style mountains.
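The band-limited energy measure described above can be illustrated with a minimal sketch: the fraction of spectral power falling in the 2-8 Hz band, computed here with a plain DFT on a synthetic signal (real monitoring would use spectrogram estimates on continuous station data):

```python
import cmath, math

def band_power(signal, fs, f_lo, f_hi):
    """Fraction of total one-sided spectral power in [f_lo, f_hi] Hz,
    via a plain DFT (illustrative; O(n^2), no windowing)."""
    n = len(signal)
    total = band = 0.0
    for k in range(1, n // 2):  # skip DC; one-sided spectrum
        f = k * fs / n
        x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p = abs(x) ** 2
        total += p
        if f_lo <= f <= f_hi:
            band += p
    return band / total

# Synthetic "river tremor": a 5 Hz tone (inside the 2-8 Hz band)
# plus a weaker 20 Hz tone outside it
fs, n = 50.0, 200
sig = [math.sin(2*math.pi*5*t/fs) + 0.3*math.sin(2*math.pi*20*t/fs) for t in range(n)]
print(band_power(sig, fs, 2.0, 8.0))  # ~0.92: the in-band tone dominates
```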

  8. Smooth muscle actin and myosin expression in cultured airway smooth muscle cells.

    Science.gov (United States)

    Wong, J Z; Woodcock-Mitchell, J; Mitchell, J; Rippetoe, P; White, S; Absher, M; Baldor, L; Evans, J; McHugh, K M; Low, R B

    1998-05-01

    In this study, the expression of smooth muscle actin and myosin was examined in cultures of rat tracheal smooth muscle cells. Protein and mRNA analyses demonstrated that these cells express alpha- and gamma-smooth muscle actin and smooth muscle myosin and nonmuscle myosin-B heavy chains. The expression of the smooth muscle specific actin and myosin isoforms was regulated in the same direction when growth conditions were changed. Thus, at confluency in 1 or 10% serum-containing medium as well as for low-density cells (50-60% confluent) deprived of serum, the expression of the smooth muscle forms of actin and myosin was relatively high. Conversely, in rapidly proliferating cultures at low density in 10% serum, smooth muscle contractile protein expression was low. The expression of nonmuscle myosin-B mRNA and protein was more stable and was upregulated only to a small degree in growing cells. Our results provide new insight into the molecular basis of differentiation and contractile function in airway smooth muscle cells.

  9. Directional bilateral filters for smoothing fluorescence microscopy images

    Directory of Open Access Journals (Sweden)

    Manasij Venkatesh

    2015-08-01

    Full Text Available Images obtained through fluorescence microscopy at low numerical aperture (NA are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information and it is important that an image smoothing or filtering technique preserve the directionality. F-actin filaments are widely studied in pathology because the abnormalities in actin dynamics play a key role in diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR. We also show quantitative improvements in low NA images of F-actin filaments.
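For reference, the plain (non-directional) bilateral filter that this work extends can be sketched on a 1-D signal as below; the directional variant additionally orients the domain kernel with a structure tensor, which is omitted here:

```python
import math

def bilateral_1d(x, sigma_d=2.0, sigma_r=0.5, radius=4):
    """Plain bilateral filter on a 1-D signal: each sample becomes a
    weighted mean of its neighbours, with weights falling off both with
    spatial distance (sigma_d) and with intensity difference (sigma_r),
    so sharp intensity steps are preserved."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(x), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_d ** 2)
                         - ((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

# A step edge survives: the large intensity difference across the step
# drives the range weight to ~0, so the two sides do not mix
step = [0.0] * 8 + [10.0] * 8
print(bilateral_1d(step))
```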

  10. Seismic pattern treatment method through calculation of seismic density at grid nodes

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on analysis of seismic data and seismicity characteristics in China, we present a method for treating seismic patterns by calculating density at grid nodes. The number of earthquakes and the epicenter distribution are considered comprehensively in this method, and the effect of data accuracy on parameter determination is emphasized. Seismic patterns from this method are stable and reflect seismicity characteristics reliably. These seismic patterns are the basis of quantitative analysis of seismicity, and the method can be applied in seismic tendency analysis, medium- to long-term earthquake prediction, earthquake countermeasures and risk mitigation.
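A simple version of calculating density at grid nodes can be sketched as follows; the Gaussian kernel, the bandwidth, and the flat-earth kilometre coordinates are illustrative assumptions, not the parameters of the paper:

```python
import math

def seismic_density(epicenters, nodes, bandwidth_km):
    """Gaussian-kernel density of epicenters evaluated at each grid node
    (flat-earth sketch; coordinates in km). Each earthquake contributes
    a weight that decays with its distance to the node."""
    dens = []
    for nx, ny in nodes:
        s = sum(math.exp(-((ex - nx) ** 2 + (ey - ny) ** 2)
                         / (2 * bandwidth_km ** 2))
                for ex, ey in epicenters)
        # normalize to a 2-D density estimate
        dens.append(s / (2 * math.pi * bandwidth_km ** 2 * max(len(epicenters), 1)))
    return dens

# A cluster near (0, 0) yields higher density at the nearby node
quakes = [(0.0, 1.0), (1.0, 0.0), (-1.0, -1.0), (50.0, 50.0)]
d = seismic_density(quakes, [(0.0, 0.0), (50.0, 50.0)], bandwidth_km=10.0)
print(d[0] > d[1])  # True
```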

  11. Feature-preserving surface mesh smoothing via suboptimal Delaunay triangulation.

    Science.gov (United States)

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2013-01-01

    A method of triangular surface mesh smoothing is presented to improve angle quality by extending the original optimal Delaunay triangulation (ODT) to surface meshes. The mesh quality is improved by solving a quadratic optimization problem that minimizes the approximated interpolation error between a parabolic function and its piecewise linear interpolation defined on the mesh. A suboptimal problem is derived to guarantee a unique, analytic solution that is significantly faster with little loss in accuracy as compared to the optimal one. In addition to the quality-improving capability, the proposed method has been adapted to remove noise while faithfully preserving sharp features such as edges and corners of a mesh. Numerous experiments are included to demonstrate the performance of the method.

  12. Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation.

    Science.gov (United States)

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2012-12-01

    Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing "bad" triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method.

  13. Distribution Estimation with Smoothed Auxiliary Information

    Institute of Scientific and Technical Information of China (English)

    Xu Liu; Ahmad Ishfaq

    2011-01-01

    Distribution estimation is very important for making statistical inference about parameters, or functions of them, based on the distribution. In this work we propose an estimator of the distribution of a variable that exploits non-smooth auxiliary information, for example the symmetry of the distribution of this variable. A smoothing technique is employed to handle the non-differentiable function, so that the distribution can be estimated based on smoothed auxiliary information. Asymptotic properties of the distribution estimator are derived and analyzed. The estimators based on our method are found to be significantly more efficient than the corresponding estimators that ignore this auxiliary information. Simulation studies are conducted to illustrate the finite-sample performance of the proposed estimators.
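One common way to smooth a non-differentiable distribution estimator is to replace the indicator function in the empirical CDF with an integrated kernel; a generic sketch of that idea (not the specific estimator of the paper):

```python
import math

def smoothed_ecdf(data, x, h):
    """Kernel-smoothed empirical distribution function: the indicator
    1{X_i <= x} of the ordinary ECDF is replaced by the Gaussian CDF
    Phi((x - X_i)/h), yielding a differentiable estimate; h is the
    smoothing bandwidth."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(phi((x - xi) / h) for xi in data) / len(data)

sample = [-1.2, -0.4, 0.1, 0.3, 0.8, 1.5]
print(smoothed_ecdf(sample, 0.0, 0.3))  # a value in (0, 1), increasing in x
```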

  14. Archetypal oscillator for smooth and discontinuous dynamics.

    Science.gov (United States)

    Cao, Qingjie; Wiercigroch, Marian; Pavlovskaia, Ekaterina E; Grebogi, Celso; Thompson, J Michael T

    2006-10-01

    We propose an archetypal system to investigate transitions from smooth to discontinuous dynamics. In the smooth regime, the system bears significant similarities to the Duffing oscillator, exhibiting the standard dynamics governed by the hyperbolic structure associated with the stationary state of the double well. At the discontinuous limit, however, there is a substantial departure in the dynamics from the standard one. In particular, the velocity flow suffers a jump in crossing from one well to another, caused by the loss of local hyperbolicity due to the collapse of the stable and unstable manifolds of the stationary state. In the presence of damping and external excitation, the system has coexisting attractors and also a chaotic saddle which becomes a chaotic attractor when a smoothness parameter drops to zero. This attractor can bifurcate to a high-period periodic attractor or a chaotic sea with islands of quasiperiodic attractors depending on the strength of damping.
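The archetypal (SD) oscillator referred to here is governed, in the unforced, undamped case, by x'' + x(1 - 1/sqrt(x^2 + alpha^2)) = 0, where alpha is the smoothness parameter (alpha = 0 is the discontinuous limit); a small integration sketch of that conservative case only:

```python
import math

def sd_force(x, alpha):
    """Restoring force of the SD oscillator x'' + x(1 - 1/sqrt(x^2 + a^2)) = 0.
    For 0 < alpha < 1 the system is a double well with minima at
    x = +/- sqrt(1 - alpha^2)."""
    return -x * (1.0 - 1.0 / math.sqrt(x * x + alpha * alpha))

def rk4_step(x, v, alpha, dt):
    """One classical Runge-Kutta step for the unforced, undamped oscillator."""
    acc = lambda x: sd_force(x, alpha)
    k1x, k1v = v, acc(x)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
    return (x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6)

alpha = 0.5
x_eq = math.sqrt(1 - alpha**2)              # bottom of one well
print(abs(sd_force(x_eq, alpha)) < 1e-12)   # True: stationary state

# A small perturbation oscillates and stays bounded near the well
x, v = x_eq + 0.05, 0.0
for _ in range(2000):
    x, v = rk4_step(x, v, alpha, 0.01)
print(abs(x - x_eq) < 0.2)                  # True
```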

  15. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
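The first technique listed above, locally weighted regression (LOESS), can be sketched in a few lines for a single evaluation point; this is a generic textbook version with tricube weights and a local linear fit, not the procedure of the report:

```python
def loess(xs, ys, x0, frac=0.5):
    """LOESS at one point x0: fit a weighted least-squares line using the
    nearest frac of the data, with tricube weights, and evaluate at x0."""
    n = len(xs)
    k = max(2, int(frac * n))
    d = sorted(abs(x - x0) for x in xs)
    h = d[k - 1] or 1e-12                       # bandwidth = k-th nearest distance
    w = [(1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3 for x in xs]  # tricube
    # weighted least squares for y = a + b*x
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    denom = sw * swxx - swx * swx
    b = (sw * swxy - swx * swy) / denom
    a = (swy - b * swx) / sw
    return a + b * x0

xs = [i / 10 for i in range(21)]
ys = [3 * x + 1 for x in xs]      # exactly linear data
print(loess(xs, ys, 1.0))         # the local linear fit recovers 3*1 + 1 = 4
```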

  16. High-resolution seismic profiling: development of acquisition, processing, and interpretation for the practical implementation of the method in shallow sub-surface exploration and engineering

    NARCIS (Netherlands)

    Brouwer, J.

    1988-01-01

    In the last few years there has been a general increase in the activities in the field of high-resolution seismic profiling. A growing interest in shallow sub-surface exploration probably underlies this development. Major attention is paid to the adaptation of high-resolution seismic profiling for en

  18. An effective quadrilateral mesh adaptation

    Institute of Scientific and Technical Information of China (English)

    KHATTRI Sanjay Kumar

    2006-01-01

    Accuracy of a simulation strongly depends on the grid quality. Here, quality means orthogonality at the boundaries and quasi-orthogonality within the critical regions, smoothness, bounded aspect ratios and solution adaptive behaviour. It is not recommended to refine the parts of the domain where the solution shows little variation. It is desired to concentrate grid points and cells in the part of the domain where the solution shows strong gradients or variations. We present a simple, effective and computationally efficient approach for quadrilateral mesh adaptation. Several numerical examples are presented for supporting our claim.
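The solution-adaptive behaviour described above, concentrating cells where the solution varies strongly, can be sketched with a simple refinement flag; the variation criterion and threshold here are illustrative assumptions, not the authors' method:

```python
def mark_for_refinement(u, threshold_frac=0.5):
    """Flag cells of a 2-D grid of cell values u for refinement where the
    local solution variation (max difference to the 4 face neighbours)
    exceeds threshold_frac of the largest variation in the grid."""
    ny, nx = len(u), len(u[0])
    def variation(i, j):
        nbrs = [u[i + di][j + dj] for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < ny and 0 <= j + dj < nx]
        return max(abs(u[i][j] - v) for v in nbrs)
    var = [[variation(i, j) for j in range(nx)] for i in range(ny)]
    vmax = max(max(row) for row in var) or 1.0   # guard an all-flat field
    return [[v > threshold_frac * vmax for v in row] for row in var]

# A sharp front between columns 2 and 3 is flagged; the flat regions are not
u = [[0.0] * 3 + [1.0] * 3 for _ in range(4)]
flags = mark_for_refinement(u)
print(flags[0])  # [False, False, True, True, False, False]
```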

  19. Romanian Educational Seismic Network Project

    Science.gov (United States)

    Tataru, Dragos; Ionescu, Constantin; Zaharia, Bogdan; Grecu, Bogdan; Tibu, Speranta; Popa, Mihaela; Borleanu, Felix; Toma, Dragos; Brisan, Nicoleta; Georgescu, Emil-Sever; Dobre, Daniela; Dragomir, Claudiu-Sorin

    2013-04-01

    Romania is one of the most active seismic countries in Europe, with more than 500 earthquakes occurring every year. The seismic hazard of Romania is relatively high and thus understanding the earthquake phenomena and their effects at the earth surface represents an important step toward the education of population in earthquake affected regions of the country and aims to raise the awareness about the earthquake risk and possible mitigation actions. In this direction, the first national educational project in the field of seismology has recently started in Romania: the ROmanian EDUcational SEISmic NETwork (ROEDUSEIS-NET) project. It involves four partners: the National Institute for Earth Physics as coordinator, the National Institute for Research and Development in Construction, Urban Planning and Sustainable Spatial Development " URBAN - INCERC" Bucharest, the Babeş-Bolyai University (Faculty of Environmental Sciences and Engineering) and the software firm "BETA Software". The project has many educational, scientific and social goals. The main educational objectives are: training students and teachers in the analysis and interpretation of seismological data, preparing of several comprehensive educational materials, designing and testing didactic activities using informatics and web-oriented tools. The scientific objective is to introduce into schools the use of advanced instruments and experimental methods that are usually restricted to research laboratories, with the main product being the creation of an earthquake waveform archive. Thus a large amount of such data will be used by students and teachers for educational purposes. For the social objectives, the project represents an effective instrument for informing and creating an awareness of the seismic risk, for experimentation into the efficacy of scientific communication, and for an increase in the direct involvement of schools and the general public. 
A network of nine seismic stations with SEP seismometers

  20. Several methods of smoothing motion capture data

    Science.gov (United States)

    Qi, Jingjing; Miao, Zhenjiang; Wang, Zhifei; Zhang, Shujun

    2011-06-01

    Human motion capture and editing technologies are widely used in computer animation production. Original motion data can be acquired with a human motion capture system and then processed with a motion editing system. However, noise may be introduced into the original motion data by target extraction, the three-dimensional reconstruction process, the optimization algorithm, and the capture devices themselves. The motion data must be corrected before it is used to make videos; otherwise the animated figures will be jerky and their behavior unnatural. Motion smoothing is therefore essential. In this paper, we compare and summarize three methods of smoothing original motion capture data.
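The abstract does not name the three smoothing methods it compares, so as a hedged illustration of the simplest member of this family, the sketch below applies a centered moving-average filter to a hypothetical noisy joint-angle channel and checks that it moves the trajectory closer to the clean signal. The trajectory, noise level, and window size are all invented for the example.

```python
import math
import random

def moving_average(samples, window=7):
    """Centered moving-average filter; the window shrinks near the endpoints."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(0)
# Hypothetical joint-angle channel: a slow movement plus capture noise.
clean = [math.sin(0.1 * t) for t in range(100)]
noisy = [c + random.gauss(0.0, 0.05) for c in clean]
smoothed = moving_average(noisy)
```

Real pipelines often prefer filters with better phase/derivative behavior (e.g. Savitzky-Golay or Kalman smoothing), which is presumably what a comparison paper like this one evaluates.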

  1. Smooth models for the Coulomb potential

    CERN Document Server

    González-Espinoza, Cristina E; Karwowski, Jacek; Savin, Andreas

    2016-01-01

    Smooth model potentials with parameters selected to reproduce the spectrum of one-electron atoms are used to approximate the singular Coulomb potential. Even when the potentials do not mimic the Coulomb singularity, much of the spectrum is reproduced within chemical accuracy. For the hydrogen atom, the smooth approximations to the Coulomb potential are more accurate for higher angular momentum states. The transferability of the model potentials from an attractive interaction (the hydrogen atom) to a repulsive one (harmonium and the uniform electron gas) is discussed.
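The abstract does not specify the functional forms of the model potentials, so the sketch below uses two textbook smooth replacements for the bare Coulomb interaction, in atomic units: the soft-core form and an error-function screening. Both the forms and the smoothing parameter `a` are assumptions for illustration, not the paper's potentials; the point is that a smooth model stays finite at the origin while recovering the -1/r tail.

```python
import math

def bare_coulomb(r):
    """Attractive Coulomb potential -1/r (atomic units); singular at r = 0."""
    return -1.0 / r

def soft_coulomb(r, a=0.5):
    """Soft-core model: finite everywhere, approaches -1/r at large r."""
    return -1.0 / math.sqrt(r * r + a * a)

def erf_coulomb(r, a=0.5):
    """erf-screened model: finite limit -2/(a*sqrt(pi)) as r -> 0."""
    return -math.erf(r / a) / r

# Far from the nucleus both smooth models recover the Coulomb tail;
# near the origin they remain bounded instead of diverging.
tail_gap = abs(soft_coulomb(10.0) - bare_coulomb(10.0))
origin_value = erf_coulomb(1e-8)
```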

  2. Production of super-smooth articles

    Energy Technology Data Exchange (ETDEWEB)

    Duchane, D.V.

    1981-05-29

    Super-smooth rounded or formed articles made of thermoplastic materials including various poly(methyl methacrylate) or acrylonitrile-butadiene-styrene copolymers are produced by immersing the articles into a bath, the composition of which is slowly changed with time. The starting composition of the bath is made up of at least one solvent for the polymer and a diluent made up of at least one nonsolvent for the polymer and optional materials which are soluble in the bath. The resulting extremely smooth articles are useful as mandrels for laser fusion and should be useful for a wide variety of other purposes, for example lenses.

  3. Nonlinear edge-preserving smoothing by PDEs

    Science.gov (United States)

    Ha, Yan; Liu, Jiejing

    2008-12-01

    This work introduces a new algorithm for image smoothing. Nonlinear partial differential equations (PDEs) are employed to smooth the image while preserving its edges and corners. Compared with other filters such as the average filter and the median filter, the new algorithm is found to denoise images more effectively. The experimental results show that this method not only removes the noise but also preserves the edges and corners. Owing to its simplicity and efficiency, the algorithm is highly attractive.
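The abstract does not state which nonlinear PDE is used; the classic representative of edge-preserving PDE smoothing is Perona-Malik anisotropic diffusion, sketched below under that assumption. Diffusion is strong where local differences are small (noise) and nearly stops across large differences (edges). The image, iteration count, and parameters `k`, `lam` are toy choices.

```python
import random

def perona_malik(img, iters=20, k=0.1, lam=0.2):
    """Explicit Perona-Malik diffusion on a 2D list-of-lists image.
    g(d) -> 1 for small differences (smooth), g(d) -> 0 across edges."""
    h, w = len(img), len(img[0])
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)   # edge-stopping function
    for _ in range(iters):
        nxt = [row[:] for row in img]
        for i in range(h):
            for j in range(w):
                flux = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        d = img[ni][nj] - img[i][j]
                        flux += g(abs(d)) * d
                nxt[i][j] = img[i][j] + lam * flux   # lam <= 0.25 for stability
        img = nxt
    return img

random.seed(1)
# Toy image: a vertical step edge (0 -> 1) plus additive noise.
h, w = 16, 16
noisy = [[(1.0 if j >= w // 2 else 0.0) + random.gauss(0.0, 0.03)
          for j in range(w)] for i in range(h)]
denoised = perona_malik(noisy)
```

After diffusion the flat regions are smoothed while the step contrast survives, which is exactly the behavior the abstract claims over linear averaging.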

  4. Bernstein-type approximations of smooth functions

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    Full Text Available The Bernstein-type approximation for smooth functions is proposed and studied. We propose the Bernstein-type approximation with definitions that directly apply the binomial distribution and the multivariate binomial distribution. The Bernstein-type approximations generalize the corresponding Bernstein polynomials, by considering definitions that depend on a convenient approximation coefficient in linear kernels. In the Bernstein-type approximations, we study the uniform convergence and the degree of approximation. The Bernstein-type estimators of smooth functions of population means are also proposed and studied.
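The paper's generalization introduces an approximation coefficient not reproduced in the abstract, so as a baseline, the sketch below implements the classical Bernstein polynomial it builds on: B_n(f; x) is the expectation of f(k/n) under a Binomial(n, x) distribution, and for smooth f the error shrinks like 1/n.

```python
from math import comb

def bernstein(f, n, x):
    """Classical Bernstein polynomial B_n(f; x) on [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda t: t * t
# Known closed form for f(t) = t^2:  B_n(f; x) = x^2 + x(1 - x)/n,
# so at x = 0.5 the error is exactly 0.25/n.
approx_10 = bernstein(f, 10, 0.5)
approx_100 = bernstein(f, 100, 0.5)
```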

  5. Resolution enhancement of non-stationary seismic data using amplitude-frequency partition

    Science.gov (United States)

    Xie, Yujiang; Liu, Gao

    2015-02-01

    Because the Earth is inhomogeneous and viscoelastic, the seismic signal attenuation we are trying to mitigate is a long-standing problem for high-resolution techniques. To address it in the time-frequency domain, Gabor-transform methods such as the atom-window method (AWM) and the molecular-window method (MWM) have recently been reported. We observed, however, that these methods could be improved if the non-stationary seismic data were first partitioned into adaptive stationary segments based on the amplitude and frequency content of the seismic signal. In this study, we present a new method, called amplitude-frequency partition (AFP), to implement this process in the time-frequency domain. Tests on synthetic and field seismic data indicate that the AFP method can partition non-stationary seismic data into approximately stationary segments and, significantly, that a high-resolution result is achieved by combining the AFP method with a conventional spectral-whitening method; this combination can be considered superior to previous resolution-enhancement methods such as the time-variant spectral-whitening method, the AWM, and the MWM. The AFP method presented in this study should be an effective resolution-enhancement tool for non-stationary seismic data within adaptive time-frequency transforms.
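The AFP partitioning itself is not specified in the abstract, but the conventional spectral-whitening step it is combined with can be sketched: flatten the amplitude spectrum of one (assumed stationary) segment while keeping its phase. A naive O(n^2) DFT is used for clarity; the segment is synthetic.

```python
import cmath
import math
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n).real
            for t in range(n)]

def whiten(segment, eps=1e-6):
    """Zero-phase spectral whitening: divide each spectral coefficient by
    its own magnitude (stabilized by eps), flattening the amplitude
    spectrum while preserving the phase spectrum."""
    return idft([c / (abs(c) + eps) for c in dft(segment)])

random.seed(4)
# One (assumed stationary) segment of a seismic trace.
segment = [random.gauss(0.0, 1.0) for _ in range(32)]
whitened = whiten(segment)
spectrum = [abs(c) for c in dft(whitened)]   # ~flat after whitening
```

Applying this per stationary segment, rather than to the whole non-stationary trace, is the motivation the abstract gives for partitioning first.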

  6. A high-resolution ambient seismic noise model for Europe

    Science.gov (United States)

    Kraft, Toni

    2014-05-01

    measurement precision (i.e. earthquake location), while considering this extremely complex boundary condition. To solve this problem I have developed a high-resolution ambient seismic noise model for Europe. The model is based on land-use data derived from satellite imagery by the EU project CORINE at a resolution of 100x100 m. The CORINE data consist of several land-use classes, including industrial areas, mines, urban fabric, agricultural areas, permanent crops, forests and open spaces. Additionally, open GIS data for highways, major and minor roads, and railway lines were included from the OpenStreetMap project (www.openstreetmap.org). These data were divided into three classes representing good, intermediate and poor ambient noise conditions for the corresponding land-use class, based on expert judgment. To account for noise propagation away from its source, a smoothing operator was applied to the individual land-use noise fields. Finally, the noise fields were stacked to obtain a European map of ambient noise conditions. A calibration of this map with data from existing seismic stations in Europe allowed me to estimate the expected noise level, in actual ground-motion units, for the three ambient noise condition classes of the map. The result is a high-resolution ambient seismic noise map that allows the network designer to make educated predictions of the expected noise level for arbitrary locations in Europe. The ambient noise model was successfully tested in several network optimization projects in Switzerland and surrounding countries and will hopefully be a valuable contribution to improving the data quality of microseismic monitoring networks in Europe.

  7. A Preliminary Feasibility Study On Seismic Monitoring Of Polymer Flooding

    Science.gov (United States)

    Nguyen, P. K.; Park, C.; Lim, B.; Nam, M.

    2012-12-01

    Polymer flooding, in which water-soluble polymers are added to the injected water, is an enhanced oil recovery technique that aims to maximize sweep efficiency by minimizing fingering effects and thereby creating a smooth flood front; polymer flooding decreases flow rates within high-permeability zones while enhancing those in lower-permeability zones. Understanding fluid fronts and saturations is critical both to optimizing polymer flooding and to monitoring its efficiency. Polymer flooding can be monitored at the single-well scale with high-resolution wireline logging, at the inter-well scale with tomography, and at the reservoir scale with surface surveys. For reservoir-scale monitoring, this study presents a preliminary feasibility analysis based on constructing rock physics models (RPMs), which bridge variations in reservoir parameters to changes in seismic responses. To construct the RPMs, we vary reservoir parameters in accordance with polymer flooding of a reservoir. Time-lapse seismic data for the corresponding RPMs are simulated using time-domain staggered-grid finite-difference modeling with a conventional perfectly matched layer boundary condition. Analysis of the time-lapse seismic data with respect to changes in fluid front and saturation gives insight into the feasibility of surface seismic surveys for monitoring polymer flooding. Acknowledgements: This work was supported by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). Myung Jin Nam was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2011-0014684).

  8. Multi-scale probabilistic seismic imaging with the USArray

    Science.gov (United States)

    Olugboji, T. M.; Lekic, V.; Burdick, S.; Gao, C.

    2016-12-01

    Seismological imaging of the structure of Earth's interior is essential to our understanding of the dynamics and evolution of our planet. Although some fundamental challenges in this imaging problem remain, e.g. the lack of stations in the oceans and the uneven distribution of earthquakes, other challenges can now be addressed with the emergence of high-performance computing capabilities. These include the assumptions made a priori about the parameterization and explicit regularization - damping and smoothing - of the Earth model, the inadequate accounting for observational and modeling uncertainty, and the subjectivity often imposed when deciding how to combine seismic data with varying sensitivity to different properties of the earth model. In this talk, we present extensions of traditional seismic imaging techniques for crustal and upper mantle structure using a probabilistic (Bayesian) approach. We illustrate the benefits of this approach by analyzing Love and Rayleigh phase velocity and P-wave travel time measurements made using the USArray. We show that the probabilistic approach can: (1) aid geophysical inference by assessing parameter uncertainty and trade-offs in seismic images (tomograms); (2) recover multiple scales of heterogeneity by avoiding explicit regularization, even when data coverage is uniform; (3) yield multi-modal distributions on velocities in regions of rapid velocity variation; and (4) quantify improvements in seismic images attributable to new data or new acquisition methods and techniques. We emphasize the central role of high-performance computing and of the philosophy of open software development and exchange in the success of these techniques, which explore large parameter spaces and generate large ensembles of solutions. Finally, we describe novel approaches to exploring, presenting and understanding the large quantity of information contained in the ensemble solutions.
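As a toy illustration of the Bayesian idea (not the authors' USArray workflow), the sketch below infers a single velocity from noisy travel times with a Metropolis random walk. The ensemble of samples approximates the posterior, so it delivers an uncertainty (posterior spread) alongside the estimate, which a single regularized inversion does not. Every number (distance, velocity, noise level, prior bounds) is hypothetical.

```python
import math
import random

random.seed(1)
d = 1000.0                        # m, source-receiver distance (assumed)
v_true = 2000.0                   # m/s, "unknown" velocity to recover
sigma = 0.01                      # s, observational noise level
obs = [d / v_true + random.gauss(0.0, sigma) for _ in range(20)]

def log_posterior(v):
    if not 500.0 < v < 5000.0:    # uniform prior on v
        return -math.inf
    return -sum((t - d / v) ** 2 for t in obs) / (2.0 * sigma ** 2)

# Metropolis sampler: accept uphill moves always, downhill with prob. ratio.
samples, v = [], 1000.0
lp_v = log_posterior(v)
for step in range(20000):
    proposal = v + random.gauss(0.0, 50.0)
    lp_p = log_posterior(proposal)
    if math.log(random.random()) < lp_p - lp_v:
        v, lp_v = proposal, lp_p
    if step >= 2000:              # discard burn-in
        samples.append(v)

post_mean = sum(samples) / len(samples)
post_std = math.sqrt(sum((s - post_mean) ** 2 for s in samples) / len(samples))
```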

  9. Probabilistic Seismic Hazard Analysis of Injection-Induced Seismicity Utilizing Physics-Based Simulation

    Science.gov (United States)

    Johnson, S.; Foxall, W.; Savy, J. B.; Hutchings, L. J.

    2012-12-01

    Risk associated with induced seismicity is a significant factor in the design, permitting and operation of enhanced geothermal, geological CO2 sequestration, wastewater disposal, and other fluid injection projects. The conventional probabilistic seismic hazard analysis (PSHA) approach provides a framework for estimating induced seismicity hazard, but requires adaptation to address the particular occurrence characteristics of induced earthquakes and the estimation of the ground motions they generate. The assumption often made in conventional PSHA of Poissonian earthquake occurrence in both space and time is clearly violated by seismicity induced by an evolving pore pressure field. Our project focuses on analyzing hazard at the pre-injection design and permitting stage, before an induced earthquake catalog can be recorded. To accommodate the commensurate lack of pre-existing data, we have adopted a numerical physics-based approach to synthesizing and estimating earthquake frequency-magnitude distributions. Induced earthquake sequences are generated using the program RSQSIM (Dieterich and Richards-Dinger, PAGEOPH, 2010), augmented to simulate pressure-induced shear failure on faults and fractures embedded in a 3D geological structure under steady-state tectonic shear loading. The model uses available site-specific data on rock properties and in-situ stress, and generic values of frictional properties appropriate to the shallow reservoir depths at which induced events usually occur. The space- and time-evolving pore pressure field is coupled into the simulation from a multi-phase flow model. In addition to potentially damaging ground motions, induced seismicity poses a risk of perceived nuisance in nearby communities caused by relatively frequent, low-magnitude earthquakes. Including these shallow local earthquakes in the hazard analysis requires extending the magnitude range considered to as low as M2 and the frequency band to include the short
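A standard step downstream of a simulator like RSQSIM is fitting the frequency-magnitude (Gutenberg-Richter) distribution of the synthetic catalog. As a hedged sketch (not part of the authors' workflow), the code below draws a synthetic G-R catalog, where magnitudes above completeness are exponentially distributed, and recovers the b-value with Aki's (1965) maximum-likelihood estimator.

```python
import math
import random

random.seed(2)
b_true, m_min = 1.0, 2.0               # assumed b-value and completeness magnitude
beta = b_true * math.log(10.0)
# Synthetic catalog: G-R magnitudes above m_min are Exp(beta)-distributed.
catalog = [m_min + random.expovariate(beta) for _ in range(5000)]

def aki_b_value(mags, m_min):
    """Aki (1965) maximum-likelihood b-value: log10(e) / (mean(M) - Mc)."""
    mean_mag = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_mag - m_min)

b_hat = aki_b_value(catalog, m_min)
```

The standard error of this estimator scales like b/sqrt(n), so 5000 events pin the b-value to a few percent.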

  10. Controlled-Source Seismic Tomography with Wavelets: Inversion Algorithm and its Application to Vesuvius Volcano

    Science.gov (United States)

    Tikhotsky, S.; Achauer, U.; Fokin, I.

    2009-04-01

    A self-adaptive, automated parameterization approach is suggested for the inversion of controlled-source seismic tomography data. The velocities and interfaces are parameterized by their Haar wavelet expansion coefficients. Only those coefficients that are well constrained by the data, as measured by the number of rays that cross the corresponding wavelet function's support area (hit counts) and their angular coverage, are inverted for; the others are set to zero. The adequacy of the suggested empirical resolution measures is investigated on 2D and 3D synthetic examples by comparison with the corresponding diagonal elements of the resolution matrices. A rule for the optimal selection of the algorithm parameters has been constructed. We show with a series of synthetic tests that our approach leads to a reasonable distribution of resolution throughout the model, even in cases of irregular ray coverage, and helps to overcome the trade-off between different types of model parameters. The developed algorithm has been used to construct a velocity model of the Vesuvius volcano area based on the TOMOVES experiment data. The algorithm yields a multi-resolution model that provides fine-structure information in well-sampled areas and a smooth, generalized pattern in other parts of the model. Layer-stripping as well as whole-model approaches were applied to the same data set in order to test the stability of the inversion results. Key features of the model (a high-velocity body at depths of -1.2 to 1.0 km under the volcano edifice and a low-velocity volcano root in the carbonate basement, low-velocity basins at the volcano flanks, and the general position of the carbonate basement top at 1-2 km depth) remain stable regardless of the inversion approach used. 
Our model agrees well with previous studies, particularly in the structure of the upper volcano-sedimentary layer, but provides finer detail and reveals additional structures at greater depths.
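The multi-resolution idea can be sketched in 1D: transform a model profile into Haar coefficients, keep only the well-constrained ones, and invert back. Here "well constrained" is stood in for by coefficient magnitude (the paper uses ray hit counts and angular coverage instead); the piecewise-constant profile is invented to show that a sparse set of Haar coefficients suffices.

```python
def haar(x):
    """1D Haar transform; len(x) must be a power of two.
    Each pass stores pairwise averages followed by pairwise differences."""
    out, n = list(x), len(x)
    while n > 1:
        half = n // 2
        avg = [(out[2 * i] + out[2 * i + 1]) / 2.0 for i in range(half)]
        dif = [(out[2 * i] - out[2 * i + 1]) / 2.0 for i in range(half)]
        out[:n] = avg + dif
        n = half
    return out

def ihaar(c):
    """Inverse of haar()."""
    c, n = list(c), 1
    while n < len(c):
        avg, dif = c[:n], c[n:2 * n]
        c[:2 * n] = [v for a, d in zip(avg, dif) for v in (a + d, a - d)]
        n *= 2
    return c

# A piecewise-constant "velocity profile": its Haar expansion is sparse,
# so zeroing the small (proxy for poorly constrained) coefficients is lossless.
profile = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]
coeffs = haar(profile)
kept = [c if abs(c) > 0.5 else 0.0 for c in coeffs]
recovered = ihaar(kept)
```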

  11. Seismic failure modes and seismic safety of Hardfill dam

    Directory of Open Access Journals (Sweden)

    Kun XIONG

    2013-04-01

    Full Text Available Based on microscopic damage theory and the finite element method, and using the Weibull distribution to characterize the random distribution of the mechanical properties of materials, the seismic response of a typical Hardfill dam under earthquakes of intensity 8 and greater was analyzed through numerical simulation. The seismic failure modes and failure mechanism of the dam were explored as well. Numerical results show that the Hardfill dam remains at a low stress level, and is undamaged or only slightly damaged, during an earthquake of intensity 8. During overload earthquakes, tensile cracks appear at the dam surfaces and extend into the dam body, and the upstream dam body suffers more serious damage than the downstream dam body. Under seismic conditions, therefore, the failure pattern of the Hardfill dam is tensile fracture of the upstream regions and the dam toe. Compared with traditional gravity dams, Hardfill dams have better seismic performance and greater seismic safety.

  12. National Seismic Network of Georgia

    Science.gov (United States)

    Tumanova, N.; Kakhoberashvili, S.; Omarashvili, V.; Tserodze, M.; Akubardia, D.

    2016-12-01

    Georgia, as a part of the Southern Caucasus, is a tectonically active and structurally complex region. It is one of the most active segments of the Alpine-Himalayan collision belt. The deformation and the associated seismicity are due to the continent-continent collision between the Arabian and Eurasian plates. Seismic monitoring of the country and the quality of seismic data are the major tools for a rapid response policy, population safety, basic scientific research and, in the end, the sustainable development of the country. The National Seismic Network of Georgia has been developing since the end of the 19th century. The digital era of the network started in 2003. Currently, continuous data streams from 25 stations are acquired and analyzed in real time. The data are combined to calculate a rapid location and magnitude for each earthquake. Information on larger events (Ml>=3.5) is simultaneously transferred to the website of the monitoring center and to the relevant governmental agencies. To improve rapid earthquake location and magnitude estimation, the seismic network was enhanced by installing 7 additional new stations. Each new station is equipped with coupled broadband and strong-motion seismometers as well as a permanent GPS system. To select the optimal sites for the 7 new base stations, we used standard network optimization techniques, taking into account the geometry of the existing seismic network and the topographic conditions of each site. For each site we studied the local geology (Vs30 was mandatory for each site), the local noise level and the seismic vault construction parameters. Because of the country's elevation, some stations were installed in high mountains that are not accessible in winter due to heavy snow conditions. To secure online data transmission we used satellite data transmission as well as cellular data network coverage from different local companies. As a result, we already have improved earthquake locations and event magnitudes. We

  13. Full Waveform Inversion Using Nonlinearly Smoothed Wavefields

    KAUST Repository

    Li, Y.

    2017-05-26

    The lack of low-frequency information in the acquired data makes full waveform inversion (FWI) only conditionally convergent to the accurate solution: an initial velocity model that produces data whose events lie within a half cycle of their locations in the observed data is required for convergence. The multiplication of wavefields with slightly different frequencies generates artificial low-frequency components. This can be exploited by multiplying the wavefield with itself, a nonlinear operation, followed by a smoothing operator to extract the artificially produced low-frequency information. We construct the objective function using the nonlinearly smoothed wavefields with a global-correlation norm to properly handle the energy imbalance in the nonlinearly smoothed wavefield. Similar to the multi-scale strategy, we progressively reduce the smoothing width applied to the multiplied wavefield to admit higher resolution. We calculate the gradient of the objective function using the adjoint-state technique, which is similar to conventional FWI except for the adjoint source. Examples on the Marmousi 2 model demonstrate the feasibility of the proposed FWI method in mitigating the cycle-skipping problem when low-frequency information is lacking.
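The core mechanism (squaring a wavefield creates low-frequency energy, which a smoothing operator then isolates) can be demonstrated on a single synthetic trace. Squaring a cosine of frequency f produces components at 0 Hz and 2f; a moving average suppresses the 2f part and keeps the envelope. All signal parameters below are invented; this is an illustration of the principle, not the authors' implementation.

```python
import cmath
import math

n, dt = 256, 0.002                     # 256 samples at 500 Hz (hypothetical)
f_high = 40.0                          # Hz, dominant frequency of the trace
# Band-limited trace with no energy near 0 Hz: a Gaussian-windowed cosine.
trace = [math.exp(-((k * dt - 0.25) ** 2) / 0.004) *
         math.cos(2.0 * math.pi * f_high * k * dt) for k in range(n)]

# Nonlinear step: multiplying the wavefield by itself moves energy
# to 0 Hz and to 2*f_high.
squared = [s * s for s in trace]

def moving_average(x, w):
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

low = moving_average(squared, 25)      # 0.05 s window kills the 2*f_high part

def low_freq_fraction(x, f_max):
    """Fraction of spectral energy below f_max (naive one-sided DFT)."""
    nn = len(x)
    mags = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / nn)
                    for t in range(nn))) ** 2 for k in range(nn // 2)]
    cut = int(f_max * nn * dt)         # bin index corresponding to f_max
    return sum(mags[:cut]) / sum(mags)
```

The original trace has essentially no energy below 10 Hz, while the squared-and-smoothed trace is almost entirely low frequency, which is the information the modified objective function feeds back into FWI.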

  14. Hydrodynamics of nearly smooth granular gases.

    Science.gov (United States)

    Goldhirsch, I; Noskowicz, S H; Bar-Lev, O

    2005-11-17

    Hydrodynamic equations of motion for a monodisperse collection of nearly smooth homogeneous spheres have been derived from the corresponding Boltzmann equation, using a Chapman-Enskog expansion around the elastic smooth spheres limit. Because in the smooth limit the rotational degrees of freedom are uncoupled from the translational ones, it turns out that the required hydrodynamic fields include (in addition to the standard density, velocity, and translational granular temperature fields) the (infinite) set of number densities, n(s,r, t), corresponding to the continuum of values of the angular velocities. The Chapman-Enskog expansion was carried out to high (up to 10th) order in a Sonine polynomial expansion by using a novel computer-aided method. One of the consequences of these equations is that the asymptotic spin distribution in the homogeneous cooling state for nearly smooth, nearly elastic spheres, is highly non-Maxwellian. The simple sheared flow possesses a highly non-Maxwellian distribution as well. In the case of wall-bounded shear, it is shown that the angular velocity injected at the boundaries has a finite penetration length.

  15. Smooth structures on Eschenburg spaces: numerical computations

    CERN Document Server

    Butler, Leo T

    2009-01-01

    This paper numerically computes the topological and smooth invariants of Eschenburg spaces with small fourth cohomology group, following Kruggel's determination of the Kreck-Stolz invariants of Eschenburg spaces that satisfy condition C. The GNU GMP arbitrary-precision library is utilised.

  16. Quantitative analysis of arm movement smoothness

    Science.gov (United States)

    Szczesna, Agnieszka; Błaszczyszyn, Monika

    2017-07-01

    The paper deals with the problem of quantitative smoothness analysis of motion data. We investigated values of movement unit, fluidity and jerk for the healthy and the paralyzed arm of patients with hemiparesis after stroke. The patients performed a drinking task. To validate the approach, the movements of 24 patients were captured using an optical motion capture system.
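Of the three measures named, jerk (the third time derivative of position) is the easiest to sketch: a smooth reach has low mean squared jerk, while the same reach with a small tremor scores far higher. The trajectories and sampling rate below are invented; the paper's exact movement-unit and fluidity definitions are not reproduced.

```python
import math

def mean_squared_jerk(pos, dt):
    """Jerk via a third-order finite difference of position samples."""
    jerk = [(pos[i + 3] - 3.0 * pos[i + 2] + 3.0 * pos[i + 1] - pos[i]) / dt ** 3
            for i in range(len(pos) - 3)]
    return sum(j * j for j in jerk) / len(jerk)

dt = 0.01                              # 100 Hz capture rate (assumed)
t = [k * dt for k in range(200)]
# Smooth, reach-like trajectory vs. the same movement with a small tremor:
# the tremor is tiny in amplitude but dominates the third derivative.
smooth_reach = [math.sin(math.pi * x) for x in t]
tremor_reach = [s + 0.001 * math.sin(40.0 * math.pi * x)
                for s, x in zip(smooth_reach, t)]
```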

  17. Autophagic regulation of smooth muscle cell biology

    Science.gov (United States)

    Salabei, Joshua K.; Hill, Bradford G.

    2014-01-01

    Autophagy regulates the metabolism, survival, and function of numerous cell types, including those comprising the cardiovascular system. In the vasculature, changes in autophagy have been documented in atherosclerotic and restenotic lesions and in hypertensive vessels. The biology of vascular smooth muscle cells appears particularly sensitive to changes in the autophagic program. Recent evidence indicates that stimuli or stressors evoked during the course of vascular disease can regulate autophagic activity, resulting in modulation of VSMC phenotype and viability. In particular, certain growth factors and cytokines, oxygen tension, and pharmacological drugs have been shown to trigger autophagy in smooth muscle cells. Importantly, each of these stimuli has a redox component, typically associated with changes in the abundance of reactive oxygen, nitrogen, or lipid species. Collective findings support the hypothesis that autophagy plays a critical role in vascular remodeling by regulating smooth muscle cell phenotype transitions and by influencing the cellular response to stress. In this graphical review, we summarize current knowledge on the role of autophagy in the biology of the smooth muscle cell in (patho)physiology. PMID:25544597

  18. Optimality conditions in smooth nonlinear programming

    NARCIS (Netherlands)

    Still, G.; Streng, M.

    1996-01-01

    This survey is concerned with necessary and sufficient optimality conditions for smooth nonlinear programming problems with inequality and equality constraints. These conditions deal with strict local minimizers of order one and two and with isolated minimizers. In most results, no constraint qualif

  19. Topics in particle filtering and smoothing

    NARCIS (Netherlands)

    Saha, Saikat

    2009-01-01

    Particle filtering/smoothing is a relatively new promising class of algorithms to deal with the estimation problems in nonlinear and/or non- Gaussian systems. Currently, this is a very active area of research and there are many issues that are not either properly addressed or are still open. One of
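A minimal member of the algorithm class the abstract describes is the bootstrap particle filter, sketched below on a toy linear-Gaussian problem (where a Kalman filter would be optimal, but which keeps the example verifiable): a hidden random walk observed through noise. Propagate, weight by the observation likelihood, estimate, resample. All model parameters are assumptions for the example.

```python
import math
import random

random.seed(3)
N = 1000                         # number of particles
q, r = 0.1, 0.5                  # process / observation noise std (assumed)

# Simulate a hidden random-walk state and noisy observations of it.
x, truth, obs = 0.0, [], []
for _ in range(50):
    x += random.gauss(0.0, q)
    truth.append(x)
    obs.append(x + random.gauss(0.0, r))

# Bootstrap particle filter: propagate, weight, estimate, resample.
particles = [0.0] * N
estimates = []
for y in obs:
    particles = [p + random.gauss(0.0, q) for p in particles]
    weights = [math.exp(-(y - p) ** 2 / (2.0 * r * r)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    particles = random.choices(particles, weights=weights, k=N)

def rmse(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))
```

The filtered estimate tracks the hidden state more closely than the raw observations do; nonlinear or non-Gaussian models only change the propagation and weighting steps.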

  20. Autonomic Modification of Intestinal Smooth Muscle Contractility

    Science.gov (United States)

    Montgomery, Laura E. A.; Tansey, Etain A.; Johnson, Chris D.; Roe, Sean M.; Quinn, Joe G.

    2016-01-01

    Intestinal smooth muscle contracts rhythmically in the absence of nerve and hormonal stimulation because of the activity of pacemaker cells between and within the muscle layers. This means that the autonomic nervous system modifies rather than initiates intestinal contractions. The practical described here gives students an opportunity to observe…

  1. Recursive Filtering And Smoothing In Robot Dynamics

    Science.gov (United States)

    Rodriguez, Guillermo

    1992-01-01

    Techniques developed originally for electronic systems also useful for multibody mechanical systems. Report summarizes methods developed to solve nonlinear forward-dynamics problem for robot of multiple-link arms connected by joints. Primary objective to show equivalence between recursive methods of dynamical analysis and some filtering and smoothing techniques from state-estimation theory.

  2. Autophagic regulation of smooth muscle cell biology

    Directory of Open Access Journals (Sweden)

    Joshua K. Salabei

    2015-04-01

    Full Text Available Autophagy regulates the metabolism, survival, and function of numerous cell types, including those comprising the cardiovascular system. In the vasculature, changes in autophagy have been documented in atherosclerotic and restenotic lesions and in hypertensive vessels. The biology of vascular smooth muscle cells appears particularly sensitive to changes in the autophagic program. Recent evidence indicates that stimuli or stressors evoked during the course of vascular disease can regulate autophagic activity, resulting in modulation of VSMC phenotype and viability. In particular, certain growth factors and cytokines, oxygen tension, and pharmacological drugs have been shown to trigger autophagy in smooth muscle cells. Importantly, each of these stimuli has a redox component, typically associated with changes in the abundance of reactive oxygen, nitrogen, or lipid species. Collective findings support the hypothesis that autophagy plays a critical role in vascular remodeling by regulating smooth muscle cell phenotype transitions and by influencing the cellular response to stress. In this graphical review, we summarize current knowledge on the role of autophagy in the biology of the smooth muscle cell in (patho)physiology.

  3. Artistic edge and corner enhancing smoothing

    NARCIS (Netherlands)

    Papari, Giuseppe; Petkov, Nicolai; Campisi, Patrizio

    2007-01-01

    Two important visual properties of paintings and painting-like images are the absence of texture details and the increased sharpness of edges as compared to photographic images. Painting-like artistic effects can be achieved from photographic images by filters that smooth out texture details, while

  4. A study on seismicity and seismic hazard for Karnataka State

    Indian Academy of Sciences (India)

    T G Sitharam; Naveen James; K S Vipin; K Ganesha Raj

    2012-04-01

    This paper presents a detailed study on the seismic pattern of the state of Karnataka and also quantifies the seismic hazard for the entire state. In the present work, historical and instrumental seismicity data for Karnataka (within 300 km from the Karnataka political boundary) were compiled and hazard analysis was done based on these data. Geographically, Karnataka forms a part of peninsular India, which is tectonically identified as an intraplate region of the Indian plate. Due to the convergent movement of the Indian plate with the Eurasian plate, movements are occurring along major intraplate faults, resulting in seismic activity of the region; hence the hazard assessment of this region is very important. Apart from referring to the seismotectonic atlas for identifying faults and fractures, major lineaments in the study area were also mapped using satellite data. The earthquake events reported by various national and international agencies were collected until 2009. Declustering of earthquake events was done to remove foreshocks and aftershocks. Seismic hazard analysis was done for the state of Karnataka using both deterministic and probabilistic approaches incorporating logic tree methodology. The peak ground acceleration (PGA) at rock level was evaluated for the entire state considering a grid size of 0.05° × 0.05°. The attenuation relations proposed for stable continental shield regions were used in evaluating the seismic hazard with appropriate weightage factors. Response spectra at rock level for important Tier II cities and Bangalore were evaluated. The contour maps showing the spatial variation of PGA values at bedrock are presented in this work.

  5. Adaptive passive equivalence of uncertain Lü system

    Institute of Scientific and Technical Information of China (English)

    Qi Dong-Lian

    2006-01-01

    An adaptive passive strategy for controlling the uncertain Lü system is proposed. Since the uncertain Lü system is minimum phase and its uncertain parameters come from a bounded compact set, the essential conditions under which the uncertain Lü system can be made equivalent to a passive system are studied, and the adaptive control law is given. Using passivity theory, the uncertain Lü system can be globally asymptotically stabilized at different equilibria by smooth state feedback.

  6. Seismic behaviour of geotechnical structures

    Directory of Open Access Journals (Sweden)

    F. Vinale

    2002-06-01

    Full Text Available This paper deals with some fundamental considerations regarding the behaviour of geotechnical structures under seismic loading. First a complete definition of the earthquake disaster risk is provided, followed by the importance of performing site-specific hazard analysis. Then some suggestions are provided in regard to adequate assessment of soil parameters, a crucial point to properly analyze the seismic behaviour of geotechnical structures. The core of the paper is centered on a critical review of the analysis methods available for studying geotechnical structures under seismic loadings. All of the available methods can be classified into three main classes, including the pseudo-static, pseudo-dynamic and dynamic approaches, each of which is reviewed for applicability. A more advanced analysis procedure, suitable for a so-called performance-based design approach, is also described in the paper. Finally, the seismic behaviour of the El Infiernillo Dam was investigated. It was shown that coupled elastoplastic dynamic analyses disclose some of the important features of dam behaviour under seismic loading, confirmed by comparing analytical computation and experimental measurements on the dam body during and after a past earthquake.

  7. Role of Smooth Muscle in Intestinal Inflammation

    Directory of Open Access Journals (Sweden)

    Stephen M Collins

    1996-01-01

    Full Text Available The notion that smooth muscle function is altered in inflammation is prompted by clinical observations of altered motility in patients with inflammatory bowel disease (IBD. While altered motility may reflect inflammation-induced changes in intrinsic or extrinsic nerves to the gut, in gut hormone release and in muscle function, recent studies have provided in vitro evidence of altered muscle contractility in muscle resected from patients with ulcerative colitis or Crohn’s disease. In addition, the observation that smooth muscle cells are more numerous and prominent in the strictured bowel of IBD patients compared with controls suggests that inflammation may alter the growth of intestinal smooth muscle. Thus, inflammation is associated with changes in smooth muscle growth and contractility that, in turn, contribute to important symptoms of IBD, including diarrhea (from altered motility and pain (via either altered motility or stricture formation. The involvement of smooth muscle in this context may be as an innocent bystander, where cells and products of the inflammatory process induce alterations in muscle contractility and growth. However, it is likely that intestinal muscle cells play a more active role in the inflammatory process via the elaboration of mediators and trophic factors, including cytokines, and via the production of collagen. The view of muscle cells as active participants in the intestinal inflammatory process is a new concept under intense study. This report summarizes current knowledge of these two aspects of altered muscle function (growth and contractility in the inflamed intestine, and focuses on the mechanisms underlying these changes, based on data obtained from animal models of intestinal inflammation.

  8. Seismic Data Analysis to the Converted Wave Acquisition: A Case Study in Offshore Malaysia

    Science.gov (United States)

    Latiff, A. H. Abdul; Osman, S. A. A.; Jamaludin, S. N. F.

    2016-07-01

    Many fields in offshore Malaysia suffer from the presence of shallow gas clouds, one of the major issues in the basin. Seismic images underneath a gas cloud often show poor resolution, which makes geophysical and geological interpretation difficult. The effect can be seen in amplitude dimming, loss of high-frequency energy, and phase distortion. In this work, the subsurface is analyzed through geophysical interpretation of converted P-S data. The P-S converted dataset was obtained through an ocean bottom cable (OBC) survey conducted at a shallow-gas-affected field in the Malaysian Basin. The geophysical interpretation process begins by picking the clear fault systems and horizons, followed by a thorough post-stack seismic data processing procedure. Finally, attribute analyses were applied to the seismic section in order to image the unseen fault systems. The interpreted seismic sections show significant improvement in image quality, particularly through the median filter process. Moreover, the combination of structural smoothing and variance attributes contributed to accurate fault location interpretation.

  9. Stochastic seismic tomography by interacting Markov chains

    Science.gov (United States)

    Bottero, Alexis; Gesret, Alexandrine; Romary, Thomas; Noble, Mark; Maisons, Christophe

    2016-10-01

    Markov chain Monte Carlo sampling methods are widely used for non-linear Bayesian inversion where no analytical expression for the forward relation between data and model parameters is available. Contrary to linear(ized) approaches, they naturally allow the uncertainties of the resulting model to be evaluated. Nevertheless, their use is problematic in high-dimensional model spaces, especially when the computational cost of the forward problem is significant and/or the a posteriori distribution is multimodal. In this case, a chain can get stuck in one of the modes and hence fail to sample the distribution of interest exhaustively. We present here a still relatively little-known algorithm that allows interaction between several Markov chains at different temperatures. These interactions (based on importance resampling) ensure a robust sampling of any posterior distribution and thus provide a way to efficiently tackle complex, fully non-linear inverse problems. The algorithm is easy to implement and is well suited to parallel supercomputers. In this paper, the algorithm is first introduced and applied to a synthetic multimodal distribution in order to demonstrate its robustness and efficiency compared to a simulated annealing method. It is then applied to first-arrival traveltime seismic tomography on real data recorded in the context of hydraulic fracturing. For this study, a wavelet-based adaptive model parametrization was used, which makes it possible to integrate the a priori information provided by sonic logs and to optimally reduce the dimension of the problem.
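
    The interaction mechanism can be illustrated with a deliberately simplified sketch; this captures the spirit of the method, not the exact algorithm of the paper. Several Metropolis chains sample tempered versions of a bimodal toy target, and the coldest chain is periodically refreshed by importance resampling from the hotter chains, with weights pi(x)^(1 - 1/T) correcting for the tempering. The target, temperatures and refresh period are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    """Bimodal toy posterior: two Gaussian modes at -5 and +5 (sigma = 0.5)."""
    return np.logaddexp(-0.5 * ((x + 5) / 0.5) ** 2,
                        -0.5 * ((x - 5) / 0.5) ** 2)

temps = np.array([1.0, 4.0, 16.0])   # chain 0 targets the posterior itself
x = np.zeros(len(temps))             # current state of each tempered chain
cold_samples = []

for it in range(20000):
    # One Metropolis step per tempered chain (wider proposals when hotter).
    for k, T in enumerate(temps):
        prop = x[k] + rng.normal(0.0, 1.0 + T)
        if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
            x[k] = prop
    # Interaction step: occasionally refresh the cold chain by importance
    # resampling from the hot chains; the weights pi(x)^(1 - 1/T) correct
    # for the fact that the hot chains sample tempered densities.
    if it % 50 == 0:
        logw = (1.0 - 1.0 / temps[1:]) * log_target(x[1:])
        w = np.exp(logw - logw.max())
        x[0] = x[1 + rng.choice(len(w), p=w / w.sum())]
    cold_samples.append(x[0])

cold_samples = np.array(cold_samples)
```

    With a single chain at T = 1 the sampler would typically remain trapped in one mode; the interaction lets the cold chain inherit the mode jumps made at high temperature.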

  11. TECHNICAL NOTES SEISMIC SOIL-STRUCTURE INTERACTION ...

    African Journals Online (AJOL)

    dell

    SEISMIC SOIL-STRUCTURE INTERACTION AS A POTENTIAL TOOL FOR. ECONOMICAL SEISMIC ... inherent in the system as in any other material like the superstructure itself. ..... [9] Gazetas, G., “Analysis of Machine. Foundation Vibration: ...

  12. SEG Advances in Rotational Seismic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Pierson, Robert; Laughlin, Darren; Brune, Bob

    2016-10-17

    Significant advancements in the development of sensors to enable rotational seismic measurements have been achieved. Prototypes are available now to support experiments that help validate the utility of rotational seismic measurements.

  13. Seismic scanning tunneling macroscope - Theory

    KAUST Repository

    Schuster, Gerard T.

    2012-09-01

    We propose a seismic scanning tunneling macroscope (SSTM) that can detect the presence of sub-wavelength scatterers in the near-field of either the source or the receivers. Analytic formulas for the time reverse mirror (TRM) profile associated with a single scatterer model show the spatial resolution limit to be, unlike the Abbe limit of λ/2, independent of wavelength and linearly proportional to the source-scatterer separation as long as the point scatterer is in the near-field region; if the sub-wavelength scatterer is a spherical impedance discontinuity, then the resolution will also be limited by the radius of the sphere. Therefore, superresolution imaging can be achieved as the scatterer approaches the source. This is analogous to an optical scanning tunneling microscope, which has sub-wavelength resolution. Scaled to seismic frequencies, it is theoretically possible to extract 100 Hz information from 20 Hz data by imaging of near-field seismic energy.

  14. The Apollo passive seismic experiment

    Science.gov (United States)

    Latham, G. V.; Dorman, H. J.; Horvath, P.; Ibrahim, A. K.; Koyama, J.; Nakamura, Y.

    1979-01-01

    The completed data set obtained from the 4-station Apollo seismic network includes signals from approximately 11,800 events of various types. Four data sets for use by other investigators, through the NSSDC, are in preparation. Some refinement of the lunar model based on seismic data can be expected, but its gross features remain as presented two years ago. The existence of a small, molten core remains dependent upon the analysis of signals from a single, far-side impact. Analysis of secondary arrivals from other sources, as well as continued refinement of the magnetic field measurements, may eventually resolve this issue. Evidence of considerable lateral heterogeneity within the moon continues to build. The mystery of the discrepancy between the meteoroid flux estimate derived from lunar seismic measurements and earth-based estimates remains, although significant correlations between terrestrial and lunar observations are beginning to emerge.

  15. Surface wave tomography of Europe from ambient seismic noise

    Science.gov (United States)

    Lu, Yang; Stehly, Laurent; Paul, Anne

    2017-04-01

    We present a European-scale high-resolution 3-D shear wave velocity model derived from ambient seismic noise tomography. In this study, we collected 4 years of continuous seismic recordings from 1293 stations across much of the European region (10˚W-35˚E, 30˚N-75˚N), yielding more than 0.8 million virtual station pairs. This data set compiles records from 67 seismic networks, both permanent and temporary, from the EIDA (European Integrated Data Archive). Rayleigh-wave group velocities are measured for each station pair using the multiple-filter analysis technique. Group velocity maps are estimated through a linearized tomographic inversion algorithm at periods from 5 s to 100 s, with adaptive parameterization used to accommodate heterogeneity in data coverage. We then apply a two-step data-driven inversion method to obtain the shear wave velocity model: a Monte Carlo inversion to build the starting model, followed by a linearized inversion for further improvement. Finally, Moho depth, and its uncertainty, are determined over most of the study region by identifying sharp velocity discontinuities and analysing their sharpness. The resulting velocity model shows good agreement with the main geological features and previous geophysical studies, and Moho depth coincides well with that obtained from active seismic experiments. A focus on the Greater Alpine region (covered by the AlpArray seismic network) displays a clear crustal thinning that follows the arcuate shape of the Alps from the southern French Massif Central to southern Germany.
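
    The multiple-filter analysis step can be sketched as follows; this is a generic textbook version (narrow Gaussian band-pass filters plus envelope picking) applied to a synthetic dispersed waveform, with all parameters invented, not the authors' actual measurement code.

```python
import numpy as np
from scipy.signal import hilbert

def mfa_group_velocity(trace, dt, dist_km, freqs, alpha=20.0):
    """Multiple-filter analysis: apply a narrow Gaussian band-pass around each
    centre frequency, then pick the envelope maximum as the group arrival."""
    n = len(trace)
    spec = np.fft.rfft(trace)
    f = np.fft.rfftfreq(n, dt)
    times = np.arange(n) * dt
    u = []
    for fc in freqs:
        g = np.exp(-alpha * ((f - fc) / fc) ** 2)   # zero-phase Gaussian filter
        narrow = np.fft.irfft(spec * g, n)
        env = np.abs(hilbert(narrow))               # envelope of the filtered trace
        u.append(dist_km / times[np.argmax(env)])
    return np.array(u)

# Synthetic dispersed arrival over a 100 km path: 0.1 Hz energy travels at
# 3.5 km/s, 0.2 Hz energy at 3.0 km/s.
dt, dist = 0.5, 100.0
t = np.arange(0.0, 200.0, dt)
trace = np.zeros_like(t)
for fc, uvel in [(0.1, 3.5), (0.2, 3.0)]:
    t0 = dist / uvel
    trace += np.cos(2 * np.pi * fc * (t - t0)) * np.exp(-((t - t0) / 8.0) ** 2)

u = mfa_group_velocity(trace, dt, dist, freqs=[0.1, 0.2])
```

    Because the Gaussian filter is zero-phase, the envelope peak time is not biased by the filtering, and each recovered velocity matches the dispersion built into the synthetic.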

  16. Time-Dependent Seismic Tomography

    Science.gov (United States)

    Julian, B. R.

    2008-12-01

    Temporal changes in seismic wave speeds in the Earth's crust have been measured at several locations, notably The Geysers geothermal area in California, in studies that used three-dimensional seismic tomography. These studies have used conventional tomography methods to invert multiple seismic-wave arrival time data sets independently and assumed that any differences in the derived structures reflect real temporal variations. Such an assumption is dangerous because the results of repeated tomography experiments would differ even if the structure did not change, simply because of variation in the seismic ray distribution caused by the natural variation in earthquake locations. This problem can be severe when changes in the seismicity distribution are systematic, as, for example, at the onset of an aftershock sequence. The sudden change in the ray distribution can produce artifacts that mimic changes in the seismic wave speeds at the time of a large earthquake. Even if the source locations did not change (if only explosion data were used, for example), derived structures would inevitably differ because of observational errors. A better approach to determining what temporal changes are truly required by the data is to invert multiple data sets simultaneously, imposing constraints to minimize differences between the models for different epochs. This problem is similar to that of seeking models similar to some a priori initial assumption, and a method similar to "damped least squares" can solve it. The order of the system of normal equations for inverting data from two epochs is twice as large as that for a single epoch, and solving it by standard methods requires eight times the computational labor. We present an algorithm for reducing this factor to two, so that inverting multiple epochs simultaneously is comparable in difficulty to inverting them independently, and illustrate its performance using synthetic arrival times and observed data from several areas in
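
    The simultaneous multi-epoch inversion described above can be written as one damped least-squares system in which the difference between the epoch models is penalized. The sketch below uses the straightforward stacked formulation on a tiny invented linear problem; it does not reproduce the fast algorithm the abstract refers to, only the constraint it solves.

```python
import numpy as np

def joint_invert(G1, d1, G2, d2, lam):
    """Invert two epochs simultaneously, damping the model *difference*
    m1 - m2 so temporal changes appear only where the data demand them."""
    n = G1.shape[1]
    A = np.block([[G1, np.zeros((G1.shape[0], n))],
                  [np.zeros((G2.shape[0], n)), G2],
                  [lam * np.eye(n), -lam * np.eye(n)]])
    b = np.concatenate([d1, d2, np.zeros(n)])
    m = np.linalg.lstsq(A, b, rcond=None)[0]
    return m[:n], m[n:]

rng = np.random.default_rng(2)
n, nobs = 10, 60
G1 = rng.standard_normal((nobs, n))      # ray-path matrices for each epoch
G2 = rng.standard_normal((nobs, n))      # (different event distributions)
m_true1 = np.ones(n)
m_true2 = m_true1.copy()
m_true2[3] += 0.5                        # the only real temporal change
d1 = G1 @ m_true1 + 0.01 * rng.standard_normal(nobs)
d2 = G2 @ m_true2 + 0.01 * rng.standard_normal(nobs)

m1, m2 = joint_invert(G1, d1, G2, d2, lam=1.0)
diff = m2 - m1
```

    Without the difference damping, ray-geometry and noise differences between the epochs would leak into `diff` everywhere; with it, the change concentrates in the one parameter the data actually require to change.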

  17. Historical Seismicity of Central Panama

    Science.gov (United States)

    Camacho, E.

    2013-05-01

    Central Panama lies in the Panama microplate, neighboring the seismically active regions of Costa Rica and Colombia. This region, crossed by the Panama Canal, concentrates most of the population and economic activity of the Republic of Panama. Instrumental observation of earthquakes in Panama began in 1882 with the Compagnie Universelle du Canal Interocéanique de Panama and continued from 1904 to 1977 under the Panama Canal Company. From October 1997 to March 1998 the USGS deployed a temporary digital seismic network. Since 2003 this region has been monitored by a digital seismic network operated by the Panama Canal Authority, complemented by the broadband stations of the University of Panama seismic network. The seismicity in this region is very diffuse, and the few events that are recorded have magnitudes less than 3.0. Historical archives and antique newspapers from Spain, Colombia, Panama and the United States have been searched for historical earthquake information that could provide a better estimate of the seismicity of this region. We find that Panama City has been shaken by two destructive earthquakes in historical times: one on a local fault (the Pedro Miguel fault) on May 2, 1621 (I = VIII MM), and a subduction event from the North Panama Deformed Belt (NPDB) on September 7, 1882 (I = VII MM). To test these findings, two earthquake scenarios were generated with SELENA for Panama City's Old Quarter. Panama City was rebuilt on January 21, 1673, on a rocky point facing the Pacific Ocean after the sack by the pirate Morgan on January 28, 1671. The pattern of damage to calicanto (unreinforced colonial masonry) and wood structures for a local crustal event is more severe than that for an event from the NPDB, and seems to confirm that the city has not been shaken by a major local event since May 2, 1621, nor by a subduction event since September 7, 1882.

  18. A Review of Seismicity in 2008

    Institute of Scientific and Technical Information of China (English)

    Li Gang; Liu Jie; Yu Surong

    2009-01-01

    1. SURVEY OF GLOBAL SEISMICITY IN 2008. A total of 19 strong earthquakes with Ms ≥ 7.0 occurred in the world in 2008 according to the Chinese Seismic Station Network (Table 1). The strongest was the Ms 8.0 Wenchuan earthquake of May 12, 2008 (Fig. 1). Earthquake frequency was apparently lower and the energy release remarkably attenuated in 2008 compared with seismicity in 2007. The characteristics of seismicity are as follows:

  19. Seismic detection of meteorite impacts on Mars

    OpenAIRE

    Teanby, N.A.; Wookey, J.

    2011-01-01

    Meteorite impacts provide a potentially important seismic source for probing Mars' interior. It has recently been shown that new craters can be detected from orbit using high-resolution imaging, which means the location of any impact-related seismic event could be accurately determined, thus improving the constraints that could be placed on internal structure using a single seismic station. This is not true of other seismic sources on Mars such as sub-surface faulting, which...

  20. Time-lapse seismic within reservoir engineering

    OpenAIRE

    Oldenziel, T.

    2003-01-01

    Time-lapse 3D seismic is a fairly new technology allowing dynamic reservoir characterisation in a true volumetric sense. By investigating the differences between multiple seismic surveys, valuable information about changes in the oil/gas reservoir state can be captured. Its interpretation involves different disciplines, of which the main three are: reservoir management, rock physics, and seismics. The main challenge is expressed as "How to optimally benefit from time-lapse seismic". The chall...

  1. Robustness of timber structures in seismic areas

    OpenAIRE

    Neves, Luís A.C.; Branco, Jorge M.

    2011-01-01

    Major similarities between robustness assessment and seismic design exist, and significant information can be brought from seismic design to robustness design. As will be discussed, although some methods and limitations considered in seismic design can improve robustness, the capacity of the structure to sustain limited damage without disproportionate effects is significantly more complex. In fact, seismic design can either improve or reduce the resistance of structures to unfo...

  2. Adaptive Lighting

    DEFF Research Database (Denmark)

    Petersen, Kjell Yngve; Søndergaard, Karin; Kongshaug, Jesper

    2015-01-01

    Adaptive Lighting Adaptive lighting is based on a partial automation of the possibilities to adjust the colour tone and brightness levels of light in order to adapt to people’s needs and desires. IT support is key to the technical developments that afford adaptive control systems. The possibilities...... offered by adaptive lighting control are created by the ways that the system components, the network and data flow can be coordinated through software so that the dynamic variations are controlled in ways that meaningfully adapt according to people’s situations and design intentions. This book discusses...... differently into an architectural body. We also examine what might occur when light is dynamic and able to change colour, intensity and direction, and when it is adaptive and can be brought into interaction with its surroundings. In short, what happens to an architectural space when artificial lighting ceases...

  3. Development of seismic analysis model of LMFBR and seismic time history response analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koo, K. H.; Lee, J. H.; Yoo, B. [KAERI, Taejon (Korea, Republic of)

    2001-05-01

    The main objective of this paper is to develop a seismic analysis model of the KALIMER reactor structures, including the primary sodium coolant, and to evaluate the seismic responses (maximum peak acceleration and relative displacements) by time-history seismic response analysis. The time-history analyses were carried out for both the seismically isolated design and the non-isolated one in order to verify the seismic isolation performance. The results obtained with the developed model clearly verify that the seismic isolation design yields significantly reduced seismic responses compared with the non-isolated design. All design criteria for the relative displacement response were satisfied for the KALIMER reactor structures.
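
    The kind of comparison reported here can be reproduced in miniature with a single-degree-of-freedom oscillator integrated by the average-acceleration Newmark method. The frequencies, damping ratios and ground motion below are invented for illustration and are not KALIMER design values.

```python
import numpy as np

def newmark_sdof(freq_hz, zeta, ag, dt):
    """Average-acceleration Newmark integration of a unit-mass SDOF oscillator
    driven by ground acceleration ag(t). Returns relative displacement and
    total (absolute) acceleration histories."""
    w = 2.0 * np.pi * freq_hz
    m, c, k = 1.0, 2.0 * zeta * w, w * w
    keff = k + 4.0 * m / dt**2 + 2.0 * c / dt
    u, v, a = 0.0, 0.0, -ag[0]
    us, atot = [0.0], [a + ag[0]]
    for agi in ag[1:]:
        # Effective load for the average-acceleration (gamma=1/2, beta=1/4) scheme.
        p = -m * agi + m * (4 * u / dt**2 + 4 * v / dt + a) + c * (2 * u / dt + v)
        unew = p / keff
        vnew = 2.0 * (unew - u) / dt - v
        anew = 4.0 * (unew - u) / dt**2 - 4.0 * v / dt - a
        u, v, a = unew, vnew, anew
        us.append(u)
        atot.append(a + agi)
    return np.array(us), np.array(atot)

# 0.3 g, 2 Hz sinusoidal ground motion for 10 s.
dt = 0.005
t = np.arange(0.0, 10.0, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t)

u_fix, a_fix = newmark_sdof(5.0, 0.05, ag, dt)   # stiff, non-isolated (5 Hz)
u_iso, a_iso = newmark_sdof(0.5, 0.15, ag, dt)   # base-isolated (0.5 Hz)
```

    Under the 2 Hz excitation, the soft isolated system transmits far less absolute acceleration than the stiff one, at the price of a larger relative displacement, which is why displacement criteria must still be checked for an isolated design.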

  4. Uncertainty treatment and sensitivity analysis of the European Probabilistic Seismic Hazard Assessment

    Science.gov (United States)

    Woessner, J.; Danciu, L.; Giardini, D.

    2013-12-01

    Probabilistic seismic hazard assessment (PSHA) aims to characterize the best available knowledge of the seismic hazard of a study area, ideally taking into account all sources of uncertainty. The EC-FP7-funded project Seismic Hazard Harmonization in Europe (SHARE) generated a time-independent community-based hazard model for the European region for ground motion parameters spanning spectral ordinates from PGA to 10 s, and annual exceedance probabilities from one-in-ten to one-in-ten-thousand years. The results will serve as a reference for engineering applications within Eurocode 8 and provide homogeneous input for state-of-the-art seismic safety assessment of critical infrastructure. The SHARE model accounts for uncertainties, whether aleatory or epistemic, via a logic tree. Epistemic uncertainties within the seismic source model are represented by three source models: a traditional area-source model, a model that characterizes fault sources, and an approach that uses kernel smoothing for seismicity and fault-source moment release. Activity rates and maximum magnitudes in the source models are treated as aleatory uncertainties. For practical implementation and computational purposes, some of the epistemic uncertainties in the source model (i.e. dip and strike angles) are treated as aleatory, and a mean seismicity model is considered. Epistemic uncertainties in ground motions are considered through multiple Ground Motion Prediction Equations as a function of tectonic setting, and are treated as correlated. The final results contain the full distribution of ground motion variability. We show how we used the logic-tree approach to consider the alternative models and how, based on the degree of belief in the models, we defined the weights of the individual branches. This contribution features results and sensitivity analyses of the entire European hazard model and selected sites.
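
    The logic-tree combination itself is simple to state: each branch produces a hazard curve, and the branch weights (degrees of belief, summing to one) give a weighted mean curve from which design ground motions can be interpolated. All curves and weights below are invented toy numbers, not SHARE results.

```python
import numpy as np

# Annual exceedance-probability curves for one site from three alternative
# branches, on a common PGA grid (in g).
pga = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
curves = np.array([
    [2.0e-2, 8.0e-3, 2.0e-3, 4.0e-4, 5.0e-5],   # area-source branch
    [3.0e-2, 1.0e-2, 3.0e-3, 6.0e-4, 8.0e-5],   # fault-source branch
    [2.5e-2, 9.0e-3, 2.5e-3, 5.0e-4, 6.0e-5],   # kernel-smoothed branch
])
weights = np.array([0.5, 0.2, 0.3])   # degrees of belief, must sum to 1

mean_curve = weights @ curves         # weighted mean hazard curve

def pga_at_return_period(curve, pga, years):
    """Interpolate (in log-log space) the ground motion exceeded on average
    once in `years` years."""
    target = 1.0 / years
    return float(np.exp(np.interp(np.log(target),
                                  np.log(curve[::-1]), np.log(pga[::-1]))))

pga_475 = pga_at_return_period(mean_curve, pga, 475)   # ~10% in 50 years
```

    The same interpolation applied branch by branch, instead of to the mean curve, yields the hazard fractiles that quantify the epistemic spread.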

  5. Assessing Induced Seismicity Models for Use in Deep Geothermal Energy Projects

    Science.gov (United States)

    Király, E.; Zechar, J. D.; Gischig, V.; Karvounis, D.; Wiemer, S.

    2014-12-01

    The decision to phase out nuclear power in Switzerland by 2034 has accelerated research on deep geothermal energy, which has the potential to contribute to long-term energy resources. Induced seismicity is a necessary part of creating an enhanced geothermal system; however, the associated seismic hazard poses a major challenge to the widespread implementation of this technology. Monitoring and controlling induced seismicity with warning systems requires models that are updated as new data arrive and that are cast in probabilistic terms. Our main question is: is it possible to forecast the seismic response of a geothermal site during and after stimulation with models based on observed seismicity and hydraulic data? To answer this question, we explore the predictive performance of various stochastic and hybrid models, with the goal of finding the most suitable model, or model combination, for forecasting induced microseismicity and unexpected events in geothermal reservoirs. In this study, we consider the Basel 2006 dataset and generate forecasts of the number and spatial distribution of seismicity in the next six hours. We explore two models: (1) a hydro-geomechanical stochastic seed model based on pore-pressure diffusion with irreversible permeability enhancement; and (2) four variants of a 3D "Shapiro" model, which combine estimates of the seismogenic index with a spatial forecast based on kernel-smoothed seismicity and temporal weighting. For both models, hydraulic and seismic parameters are calibrated every six hours against data from a learning period starting at the beginning of injection. We assess the models using metrics developed by the Collaboratory for the Study of Earthquake Predictability: we check the overall consistency of the forecasts with the observations by comparing the number, magnitudes and spatial distribution of forecast events with the observed induced earthquakes, and we compare the models with each other in terms of information gain, allowing pairwise ranking.
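
    The kernel-smoothed seismicity ingredient mentioned above (and central to the adaptively smoothed forecasts this collection covers) can be sketched as follows: each past epicentre contributes a Gaussian kernel whose bandwidth adapts to the local density of events, in the spirit of Helmstetter-style smoothed-seismicity models. The catalogue, bandwidth rule and grid below are invented for illustration.

```python
import numpy as np

def adaptive_smoothed_rate(epicenters, grid_x, grid_y, k=3, d_min=0.5):
    """Smoothed-seismicity rate density: each epicentre contributes a 2-D
    Gaussian whose bandwidth is its distance to the k-th nearest neighbour,
    so kernels shrink where seismicity is dense and widen where it is sparse."""
    pts = np.asarray(epicenters, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)                          # column 0 is the self-distance (0)
    bw = np.maximum(d[:, k], d_min)         # adaptive bandwidth, with a floor
    gx, gy = np.meshgrid(grid_x, grid_y)
    dens = np.zeros_like(gx, dtype=float)
    for (x0, y0), h in zip(pts, bw):
        r2 = (gx - x0) ** 2 + (gy - y0) ** 2
        dens += np.exp(-r2 / (2.0 * h * h)) / (2.0 * np.pi * h * h)
    return dens

rng = np.random.default_rng(3)
# Dense cluster near (0, 0), plus a few scattered background events.
cluster = rng.normal(0.0, 1.0, size=(40, 2))
background = rng.uniform(-10.0, 10.0, size=(5, 2))
events = np.vstack([cluster, background])

x = np.linspace(-10.0, 10.0, 81)
rate = adaptive_smoothed_rate(events, x, x)
```

    Narrow kernels preserve the sharp rate peak inside the cluster, while the sparse background events are spread over broad kernels, which is exactly the trade-off the adaptive bandwidth is meant to strike.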

  6. Pre-failure behaviour of an unstable limestone cliff from displacement and seismic data

    Directory of Open Access Journals (Sweden)

    J.-L. Got

    2010-04-01

    Full Text Available We monitored the displacement and seismic activity of an unstable vertical rock slice in a natural limestone cliff of the southeast Vercors massif, southeast France, during the months preceding its collapse. Displacement measurements showed an average acceleration of the movement of its top, with clear increases in the displacement velocity and in the discrete seismic event production rate during periods of falling temperature, with more activity when rainfall or frost occurs. Crises of discrete seismic events produce high amplitudes in periodograms but do not change the high-frequency base noise level rate. We infer that these crises express critical crack growth induced by water weakening of the rock strength (from water vapor condensation or rain) rather than by a rapid change in applied stresses. Seismic noise analysis showed a steady increase in the high-frequency base noise level and the emergence of spectral modes in the signal recorded by the sensor installed on the unstable rock slice during the weeks preceding the collapse. The high-frequency seismic noise base level seems to represent subcritical crack growth. It is a smooth and robust parameter whose variations are related to generalized changes in the rupture process. A drop in the seismic noise amplitude was concomitant with the emergence of spectral modes, compatible with high-order eigenmodes of the unstable rock slice, during the later stages of its instability. Seismic noise analysis, especially high-frequency base noise level analysis, may complement inverse displacement velocity in early-warning approaches when strong displacement fluctuations occur.
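
    The inverse-displacement-velocity approach mentioned at the end can be sketched in a few lines: under the common assumption that 1/v decreases roughly linearly as failure approaches (the Fukuzono relation), a straight-line fit extrapolated to zero predicts the failure time. The synthetic displacement history below is invented so that this assumption holds exactly.

```python
import numpy as np

def inverse_velocity_failure_time(t, displacement):
    """Fukuzono-style forecast: for many accelerating slopes the inverse of
    the displacement rate decays roughly linearly with time, reaching zero
    at failure; a straight-line fit to 1/v therefore predicts that time."""
    v = np.gradient(displacement, t)         # numerical displacement rate
    slope, intercept = np.polyfit(t, 1.0 / v, 1)
    return -intercept / slope                # zero crossing of the fitted line

# Synthetic accelerating movement with true failure at t_f = 100 days:
# v = 1/(100 - t), i.e. u(t) = -ln(100 - t) up to a constant.
t = np.linspace(0.0, 90.0, 200)
u = -np.log(100.0 - t)
t_pred = inverse_velocity_failure_time(t, u)
```

    On real monitoring data the fit would be applied to a recent window and updated continuously, and, as the abstract notes, it degrades when displacement fluctuates strongly, which is where a noise-based indicator can help.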

  7. Advanced Seismic While Drilling System

    Energy Technology Data Exchange (ETDEWEB)

    Robert Radtke; John Fontenot; David Glowka; Robert Stokes; Jeffery Sutherland; Ron Evans; Jim Musser

    2008-06-30

    A breakthrough has been discovered for controlling seismic sources to generate selectable low frequencies. Conventional seismic sources, including sparkers, rotary mechanical sources, hydraulic sources, air guns, and explosives, by their very nature produce high frequencies. This runs counter to the need for long signal transmission through rock. The patent-pending SeismicPULSER{trademark} methodology has been developed for controlling otherwise high-frequency seismic sources to generate selectable low-frequency peak spectra applicable to many seismic applications. Specifically, we have demonstrated the application of a low-frequency sparker source which can be incorporated into a drill bit for Drill Bit Seismic While Drilling (SWD). To create the methodology of a controllable low-frequency sparker seismic source, it was necessary to learn how to maximize sparker efficiencies to couple to, and transmit through, rock, with the study of sparker designs and mechanisms for (a) coupling the sparker-generated gas bubble expansion and contraction to the rock, (b) the effects of fluid properties and dynamics, (c) linear and non-linear acoustics, and (d) imparted force directionality. After extensive seismic modeling, the design of high-efficiency sparkers, laboratory high-frequency sparker testing, and field tests were performed at the University of Texas Devine seismic test site. The conclusion of the field test was that extremely high power levels would be required to achieve the range required for deep, 15,000+ ft, high-temperature, high-pressure (HTHP) wells. Thereafter, more modeling and laboratory testing led to the discovery of a method to control a sparker so that it could generate the low frequencies required for deep wells. The low-frequency sparker was successfully tested at the Department of Energy Rocky Mountain Oilfield Test Center (DOE RMOTC) field test site in Casper, Wyoming. An 8-in diameter by 26-ft long SeismicPULSER{trademark} drill string tool was designed and manufactured by TII

  8. Advancing New 3D Seismic Interpretation Methods for Exploration and Development of Fractured Tight Gas Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    James Reeves

    2005-01-31

    In a study funded by the U.S. Department of Energy and GeoSpectrum, Inc., new P-wave 3D seismic interpretation methods to characterize fractured gas reservoirs were developed. A data-driven exploratory approach is used to determine empirical relationships for reservoir properties. Fractures are predicted using seismic lineament mapping through a series of horizon and time slices in the reservoir zone. A seismic lineament is a linear feature seen in a slice through the seismic volume that has negligible vertical offset. We interpret that in regions of high seismic lineament density there is a greater likelihood of fractured reservoir. Seismic AVO attributes are developed to map brittle reservoir rock (low clay content) and gas content. Brittle rocks are interpreted to be more fractured when seismic lineaments are present. The most important attribute developed in this study is the gas-sensitive phase gradient (a new AVO attribute), as reservoir fractures may provide a plumbing system for both water and gas. Success is measured by economic gas and oil discoveries. In a gas field previously plagued by poor drilling results, four new wells were spotted using the new methodology and recently drilled. The wells have estimated best-12-months production indicators of 2106, 1652, 941, and 227 MCFGPD. The last well was drilled in a region of swarming seismic lineaments but has poor gas-sensitive phase gradient (AVO) and clay volume attributes; GeoSpectrum advised the unit operators that this location did not appear to have significant Lower Dakota gas before the well was drilled. The other three wells are considered good wells in this part of the basin and are among the best wells in the area. These new drilling results have nearly doubled the gas production and the value of the field. The interpretation method is ready for commercialization and for gas exploration and development. The new technology is adaptable to conventional lower-cost 3D seismic surveys.

  9. Seismic monitoring of soft-rock landslides: New case study at Pechgraben mudslide - Upper Austria

    Science.gov (United States)

    Vouillamoz, Naomi; Santoyo, Juan Carlos; Ottowitz, David; Jochum, Birgit; Pfeiler, Stefan; Supper, Robert; Joswig, Manfred

    2016-04-01

    Creeping soft-rock landslides trigger various seismic signals that relate to key dynamics of the slope instability. A new seismic monitoring study is being carried out at Pechgraben, Upper Austria, where a clay-shale-rich mudslide was reactivated in summer 2013 after heavy rainfall. The extensive geophysical instrumentation of the Pechgraben mudslide by the Geological Survey of Austria (the LAMOND network, including permanent ERT, GPS, piezometers, soil temperature/humidity sensors and photomonitoring) is expected to provide a sound basis for the joint interpretation of seismic source processes. Seismic data are acquired by small-aperture (< 30 m) sparse seismic arrays. Potential events are recognized by their frequency-time signatures in sonograms, where sonograms are spectrograms featuring a frequency-dependent noise adaptation that enhances the display of weak signal energy down to the noise threshold. Further signal evaluation follows an interactive scheme in which a semi-automated beamforming method enables approximate source location. Three seismic arrays were deployed at Pechgraben in October 2015 for an eight-day feasibility study. About 200 seismic signals potentially triggered by the landslide were manually picked from night-time measurements. The target signals occur in tremor-like sequences and have durations of 1-8 s. Local magnitudes are calibrated down to ML -1.5 (Wood-Anderson amplitude ≈ 0.1 μm at 100 m distance). The observed waveforms display a high degree of similarity with seismic signals catalogued at other soft-rock landslides, suggesting that a general typology of seismic source processes could be established for creeping soft-rock instabilities, with potential implications for landslide mitigation and forecasting.
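
    One simple way to realize a frequency-dependent noise adaptation, in the spirit of the sonograms described above (though not the authors' exact scaling), is to normalize each frequency row of a spectrogram by its own median noise level. The trace below is synthetic, with an invented weak transient buried in coloured noise.

```python
import numpy as np
from scipy.signal import stft

def sonogram(trace, fs, nperseg=256):
    """Spectrogram with a simple frequency-dependent noise adaptation: each
    frequency row is scaled by its own median noise level, so weak transients
    stand out against coloured background noise."""
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    noise_floor = np.median(power, axis=1, keepdims=True)
    return f, t, np.log10(power / (noise_floor + 1e-30) + 1e-12)

rng = np.random.default_rng(4)
fs = 200.0
n = int(60 * fs)                      # one minute of data
# Coloured background: a random-walk (strong low-frequency) component plus
# white noise ...
noise = np.cumsum(rng.standard_normal(n)) * 0.02 + rng.standard_normal(n)
# ... and a 4 s, 31.25 Hz transient at t = 30 s, inconspicuous in the raw trace.
sig = np.zeros(n)
i0 = int(30 * fs)
win = np.hanning(int(4 * fs))
sig[i0:i0 + len(win)] = 2.0 * np.sin(2 * np.pi * 31.25
                                     * np.arange(len(win)) / fs) * win
f, t, S = sonogram(noise + sig, fs)
```

    Because every frequency row is judged against its own noise floor, the strong low-frequency background does not mask the weak high-frequency transient, which is the property that makes such displays useful for picking landslide-induced events.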

  10. Community Seismic Network (CSN)

    Science.gov (United States)

    Clayton, R. W.; Heaton, T. H.; Kohler, M. D.; Cheng, M.; Guy, R.; Chandy, M.; Krause, A.; Bunn, J.; Olson, M.; Faulkner, M.; Liu, A.; Strand, L.

    2012-12-01

    We report on developments in sensor connectivity, architecture, and data fusion algorithms executed in Cloud computing systems in the Community Seismic Network (CSN), a network of low-cost sensors housed in homes and offices by volunteers in the Pasadena, CA area. The network has over 200 sensors continuously reporting anomalies in local acceleration through the Internet to a Cloud computing service (the Google App Engine) that continually fuses sensor data to rapidly detect shaking from earthquakes. The Cloud computing system consists of data centers geographically distributed across the continent and is likely to be resilient even during earthquakes and other local disasters. The region of Southern California is partitioned in a multi-grid style into sets of telescoping cells called geocells. Data streams from sensors within a geocell are fused to detect anomalous shaking across the geocell. Temporal-spatial patterns across geocells are used to detect anomalies across regions. The challenge is to detect earthquakes rapidly with an extremely low false positive rate. We report on two data fusion algorithms: one that tessellates the surface so as to fuse data from a large region around Pasadena, and another that uses a standard tessellation of equal-sized cells. Since September 2011, the network has successfully detected earthquakes of magnitude 2.5 or higher within 40 km of Pasadena. In addition to the standard USB device, which connects to the host's computer, we have developed a stand-alone sensor that connects directly to the Internet via Ethernet or Wi-Fi. This bypasses security concerns that some companies have with the USB-connected devices, and allows for 24/7 monitoring at sites that would otherwise shut down their computers after working hours. In buildings we use the sensors to model the behavior of the structures during weak events in order to understand how they will perform during strong events.
Visualization models of instrumented buildings ranging

  11. Seismic Wave Propagation on the Tablet Computer

    Science.gov (United States)

    Emoto, K.

    2015-12-01

    Tablet computers have become widely used in recent years, and their performance is improving year by year. Some have performance comparable to a personal computer of a few years ago with respect to calculation speed and memory size. Convenience and intuitive operation are the advantages of the tablet computer over the desktop PC. I developed an iPad application for numerical simulation of seismic wave propagation. The numerical simulation is based on the 2D finite-difference method with a staggered-grid scheme. The number of grid points is 512 x 384 = 196,608. The grid spacing is 200 m in both the horizontal and vertical directions; that is, the calculation area is 102 km x 77 km. The time step is 0.01 s. In order to reduce the user's waiting time, the image of the wavefield is drawn simultaneously with the calculation rather than playing a movie after the whole calculation. P- and S-wave energies are plotted on the screen every 20 steps (0.2 s). There is a trade-off between smooth simulation and the resolution of the wavefield image. In the current setting, it takes about 30 s to calculate 10 s of wave propagation (50 image updates). The seismogram at the receiver is displayed below the wavefield and updated in real time. The default medium structure consists of 3 layers. The layer boundary is defined by 10 movable points with linear interpolation, so users can intuitively change the boundary to an arbitrary shape by moving the points. Users can also easily change the source and receiver positions, and a favorite structure can be saved and loaded. For more advanced simulations, users can introduce random velocity fluctuations whose spectrum can be changed to an arbitrary shape. With this application, anyone can simulate seismic wave propagation without special knowledge of the elastic wave equation. So far, the Japanese version of the application is released on the App Store. Now I am preparing the
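    The scheme described above can be sketched in a few lines. The following is a minimal 2D second-order scalar (acoustic) wave update using the grid parameters quoted in the abstract (512 x 384 points, 200 m spacing, 0.01 s time step); the uniform 3 km/s medium and the impulsive point source are assumptions for illustration, not the app's actual elastic staggered-grid code.

```python
import numpy as np

# Grid parameters from the abstract; the medium and source are assumed.
nx, nz = 512, 384
dx, dt = 200.0, 0.01           # grid spacing [m], time step [s]
v = np.full((nz, nx), 3000.0)  # uniform 3 km/s medium (assumption)

p_prev = np.zeros((nz, nx))
p_curr = np.zeros((nz, nx))
p_curr[nz // 2, nx // 2] = 1.0  # impulsive point source at the center

def step(p_prev, p_curr):
    """One explicit leapfrog step of the 2nd-order scalar wave equation."""
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
                       p_curr[1:-1, 2:] + p_curr[1:-1, :-2] -
                       4.0 * p_curr[1:-1, 1:-1]) / dx**2
    return p_curr, 2.0 * p_curr - p_prev + (v * dt)**2 * lap

for _ in range(20):  # e.g. redraw the wavefield every 20 steps (0.2 s)
    p_prev, p_curr = step(p_prev, p_curr)
```

With these parameters the Courant number is v·dt/dx = 0.15, comfortably inside the stability limit, which is why the app can afford to redraw the wavefield while stepping.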

  12. Adaptive management theory development in the field of education.

    Directory of Open Access Journals (Sweden)

    Borova T.A.

    2011-07-01

    Full Text Available The article studies the development of the adaptive management theory in education systems. The methodology of the adaptive management theory is discussed, and the essential characteristics of the theory that have changed in the context of higher education are highlighted. A comparative analysis is carried out of the adaptive management theory components developed by the Ukrainian and Russian schools of adaptive management in the field of education. Applying the theoretical grounds of adaptive management allows a smooth transition to teaching according to humanistic principles.

  13. Seismic Physical Modeling Technology and Its Applications

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper introduces the seismic physical modeling technology in the CNPC Key Lab of Geophysical Exploration. It includes the seismic physical model positioning system, the data acquisition system, sources, transducers, model materials, model building techniques, precision measurements of model geometry, the basic principles of the seismic physical modeling and experimental methods, and two physical model examples.

  14. Seismic processing in the inverse data space

    NARCIS (Netherlands)

    Berkhout, A.J.

    2006-01-01

    Until now, seismic processing has been carried out by applying inverse filters in the forward data space. Because the acquired data of a seismic survey is always discrete, seismic measurements in the forward data space can be arranged conveniently in a data matrix (P). Each column in the data matrix

  15. Simplified seismic performance assessment and implications for seismic design

    Science.gov (United States)

    Sullivan, Timothy J.; Welch, David P.; Calvi, Gian Michele

    2014-08-01

    The last decade or so has seen the development of refined performance-based earthquake engineering (PBEE) approaches that now provide a framework for estimation of a range of important decision variables, such as repair costs, repair time and number of casualties. This paper reviews current tools for PBEE, including the PACT software, and examines the possibility of extending the innovative displacement-based assessment approach as a simplified structural analysis option for performance assessment. Details of the displacement-based seismic assessment method are reviewed and a simple means of quickly assessing multiple hazard levels is proposed. Furthermore, proposals for a simple definition of collapse fragility and relations between equivalent single-degree-of-freedom characteristics and multi-degree-of-freedom story drift and floor acceleration demands are discussed, highlighting needs for future research. To illustrate the potential of the methodology, performance measures obtained from the simplified method are compared with those computed using the results of incremental dynamic analyses within the PEER performance-based earthquake engineering framework, applied to a benchmark building. The comparison illustrates that the simplified method could be a very effective conceptual seismic design tool. The advantages and disadvantages of the simplified approach are discussed and potential implications of advanced seismic performance assessments for conceptual seismic design are highlighted through examination of different case study scenarios including different structural configurations.

  16. Expanding Conventional Seismic Stratigraphy into the Multicomponent Seismic Domain

    Energy Technology Data Exchange (ETDEWEB)

    Innocent Aluka

    2008-08-31

    Multicomponent seismic data are composed of three independent vector-based seismic wave modes: the compressional mode (P) and the shear modes SV and SH. The three modes are generated using three orthogonal source-displacement vectors and then recorded using three orthogonal vector sensors. The components travel through the earth at differing velocities and directions. The velocities of SH and SV as they travel through the subsurface differ by only a few percent, but the shear velocities (Vs) are appreciably lower than the P-wave velocity (Vp). The velocity ratio Vp/Vs varies by an order of magnitude in the earth, from 15 to 1.5, depending on the degree of sedimentary lithification. The data used in this study were acquired by nine-component (9C) vertical seismic profile (VSP), using three orthogonal vector sources. The 9C vertical seismic profile is capable of generating the P-wave mode and the fundamental S-wave modes (SH-SH and SV-SV) directly at the source station, and permits the basic components of the elastic wavefield (P, SH-SH and SV-SV) to be separated from one another for the purposes of imaging. Analysis and interpretation of data from the study area show that an incident full-elastic seismic wavefield is capable of reflecting four different wave modes, P, SH, SV and C, which can be utilized to fully understand the architecture and heterogeneities of geologic sequences. Conventional seismic stratigraphy utilizes only reflected P-wave modes. In this notation, the SH mode is the same as SH-SH; the SV mode means SV-SV; and the C mode, a converted shear wave, is a special SV mode equivalent to P-SV. These four wave modes image unique geologic stratigraphy and facies and at the same time reflect independent stratal surfaces because of the unique orientations of their particle-displacement vectors. As a result of the distinct orientation of each mode's particle-displacement vector, one mode may react to a critical subsurface sequence

  17. Induced seismicity and carbon storage: Risk assessment and mitigation strategies

    Energy Technology Data Exchange (ETDEWEB)

    White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Foxall, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bachmann, Corinne [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chiaramonte, Laura [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Daley, Thomas M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-28

    assessment and mitigation approach. A phased approach to risk management is then introduced. The basic goal of the phased approach is to constantly adapt site operations to current conditions and available characterization data. The remainder of the report then focuses in detail on different components of the monitoring, risk assessment, and mitigation strategies. Issues in current seismic risk assessment methods that must be modified to address induced seismicity are highlighted. The report then concludes with several specific recommendations for operators and regulatory authorities to consider when selecting, permitting, and operating a storage project.

  18. Seismicity and fluid injections: numerical modelling of fault activation

    Science.gov (United States)

    Murphy, S.; O'Brien, G.; Bean, C.; McCloskey, J.; Nalbant, S.

    2012-04-01

    Injection of fluid into the subsurface is a common technique, used to optimise returns from hydrocarbon plays (e.g. enhanced oil recovery, hydrofracturing of shales) and geothermal sites, as well as for sequestering carbon dioxide. While it is well understood that stress perturbations caused by fluid injections can induce or trigger earthquakes, the modelling of this hazard is still in its infancy. By combining fluid flow and seismicity simulations we have created a numerical model for investigating induced seismicity over large time periods, so that we might examine the role of operational and geological factors in seismogenesis around a sub-surface fluid injection. In our model, fluid injection is simulated as pore fluid movement through a permeable layer from a high-pressure point source using a lattice Boltzmann scheme. We can accommodate complicated geological structures in our simulations. Seismicity is modelled using a quasi-dynamic relationship between stress and slip coupled with a rate-and-state friction law. By spatially varying the frictional parameters, the model can reproduce both seismic and aseismic slip. Static stress perturbations (due either to fault cells slipping or to fluid injection) are calculated using analytical solutions for slip dislocations/pressure changes in an elastic half space. An adaptive time step is used to increase computational efficiency and thus allow us to model hundreds of years of seismicity. As a case study, we investigate the role that the relative fault-injection location plays in seismic activity. To do this we created three synthetic catalogues, with only the relative location of the fault from the point of injection varying between the models. In our control model there is no injection, meaning it contains only tectonically triggered events. In the other two catalogues, the injection site is placed below and adjacent to the fault, respectively.
The injection itself is into a permeable thin planar layer
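    The rate-and-state friction law mentioned above can be illustrated with a minimal sketch. Below is the commonly used Dieterich-type formulation with the aging law for state evolution; all parameter values are illustrative assumptions, not those of the authors' model.

```python
import numpy as np

# Rate-and-state friction (Dieterich-type, aging law). All values assumed.
mu0, a, b = 0.6, 0.010, 0.015  # reference friction, direct and state effects
V0, Dc = 1e-6, 1e-4            # reference slip rate [m/s], critical slip distance [m]

def friction(V, theta):
    """Friction coefficient mu(V, theta)."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def theta_dot(V, theta):
    """Aging-law state evolution: d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - V * theta / Dc

# At steady state theta = Dc/V, so mu_ss = mu0 + (a - b) ln(V/V0).
# With b > a the fault is velocity-weakening: faster slip lowers friction,
# which is what permits stick-slip (seismic) behaviour in such models.
V = 1e-5
theta_ss = Dc / V
mu_ss = friction(V, theta_ss)
```

Spatially varying a, b (e.g. patches with b > a versus b < a) is one standard way such a model reproduces both seismic and aseismic slip.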

  19. An Algorithm for Evaluating Fresnel-Zone Textural Roughness for Seismic Facies Interpretation

    Science.gov (United States)

    Di, H.; Gao, D.

    2014-12-01

    In reflection seismic interpretation, a 1-D convolutional model is commonly used to interpret amplitude variations based on geometric ray theory, which assumes the seismic wave reflects at a single reflection point; however, the propagation of seismic waves actually occurs in a finite zone around the geometric ray path, and reflection takes place over a zone known as the Fresnel zone. The signal collected at the surface is thus the superposition of reflections from within the Fresnel zone, which is a function of texture. Generally, for a rough texture such as sandstone, the dominant reflection is from the zone margin, while for a smooth texture such as marine shale, the dominant reflection is from the zone center. Based on this concept, Fresnel-zone texture directly affects amplitude variation with offset (AVO), azimuth (AVAZ), and frequency (AVF). Here we develop a computer algorithm for evaluating Fresnel-zone textural roughness. The algorithm starts by dividing the Fresnel zone into a set of micro-zones. It then builds an initial texture model to be convolved with an extracted wavelet. By comparing the synthetic signal from a Fresnel zone to the real seismic signal within an analysis window at a target location, the model is adjusted and updated until the synthetic and real signals best match. The roughness is evaluated as the correlation coefficient between the generated texture model within the Fresnel zone and the ideal model for a rough-texture medium. Our new algorithm is applied to a deep-water 3D seismic volume offshore Angola, West Africa. The results show that a rough texture is associated with channel sands, whereas a smooth texture is associated with marine shale.
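    The lateral extent of the Fresnel zone discussed above can be estimated with the standard first-Fresnel-zone approximation r ≈ sqrt(λ·z/2), valid when the reflector depth z is much larger than the wavelength λ = v/f. The velocity, dominant frequency, and depth below are assumed values for illustration, not taken from the study.

```python
import math

def fresnel_radius(v, f, z):
    """First Fresnel-zone radius [m] for velocity v [m/s],
    dominant frequency f [Hz], reflector depth z [m]
    (approximation r = sqrt(lambda * z / 2), z >> lambda)."""
    wavelength = v / f
    return math.sqrt(wavelength * z / 2.0)

# Assumed, deep-water-like numbers: the zone is hundreds of meters wide,
# far larger than a single "reflection point".
r = fresnel_radius(v=2500.0, f=30.0, z=2000.0)
```

The point of the estimate is the order of magnitude: a reflection at 2 km depth averages the reflector texture over a zone roughly 300 m in radius, which is why texture within the zone matters for AVO/AVAZ/AVF behaviour.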

  20. SPHGR: Smoothed-Particle Hydrodynamics Galaxy Reduction

    Science.gov (United States)

    Thompson, Robert

    2015-02-01

    SPHGR (Smoothed-Particle Hydrodynamics Galaxy Reduction) is a Python-based open-source framework for analyzing smoothed-particle hydrodynamics simulations. In its basic form it can run a baryonic group finder to identify galaxies and a halo finder to identify dark matter halos; it can also assign said galaxies to their respective halos, calculate halo & galaxy global properties, and iterate through previous time steps to identify the most-massive progenitors of each halo and galaxy. Data about each individual halo and galaxy are collated and easy to access. SPHGR supports a wide range of simulation types, including N-body, full cosmological volumes, and zoom-in runs. Support for multiple SPH code outputs is provided by pyGadgetReader (ascl:1411.001), mainly Gadget (ascl:0003.001) and TIPSY (ascl:1111.015).

  1. Smooth embeddings with Stein surface images

    CERN Document Server

    Gompf, Robert E

    2011-01-01

    A simple characterization is given of open subsets of a complex surface that smoothly perturb to Stein open subsets. As applications, complex 2-space C^2 contains domains of holomorphy (Stein open subsets) that are exotic R^4's, and others homotopy equivalent to the 2-sphere but cut out by smooth, compact 3-manifolds. Pseudoconvex embeddings of Brieskorn spheres and other 3-manifolds into complex surfaces are constructed, as are pseudoconcave holomorphic fillings (with disagreeing contact and boundary orientations). Pseudoconcave complex structures on Milnor fibers are found. A byproduct of this construction is a simple polynomial expression for the signature of the (p,q,npq-1) Milnor fiber. Akbulut corks in complex surfaces can always be chosen to be pseudoconvex or pseudoconcave submanifolds. The main theorem is expressed via Stein handlebodies (possibly infinite), which are defined holomorphically in all dimensions by extending Stein theory to manifolds with noncompact boundary.

  2. Robust Metric Learning by Smooth Optimization

    CERN Document Server

    Huang, Kaizhu; Xu, Zenglin; Liu, Cheng-Lin

    2012-01-01

    Most existing distance metric learning methods assume perfect side information that is usually given in pairwise or triplet constraints. Instead, in many real-world applications, the constraints are derived from side information such as users' implicit feedback and citations among articles. As a result, these constraints are usually noisy and contain many mistakes. In this work, we aim to learn a distance metric from noisy constraints by robust optimization in a worst-case scenario, which we refer to as robust metric learning. We formulate the learning task initially as a combinatorial optimization problem, and show that it can be elegantly transformed to a convex programming problem. We present an efficient learning algorithm based on smooth optimization [7]. It has a worst-case convergence rate of O(1/√ε) for smooth optimization problems, where ε is the desired error of the approximate solution. Finally, our empirical study with UCI data sets demonstrates the effectiveness of ...

  3. Anisotropic properties of tracheal smooth muscle tissue.

    Science.gov (United States)

    Sarma, P A; Pidaparti, R M; Meiss, R A

    2003-04-01

    The anisotropic (direction-dependent) properties of contracting tracheal smooth muscle tissue are estimated from a computational model based on experimental data of length-dependent stiffness. The area changes are obtained at different muscle lengths from experiments in which stimulated muscle undergoes unrestricted shortening. Then, through an iterative process, the anisotropic properties are estimated by matching the area changes obtained from the finite element analysis to those derived from the experiments. The results indicate that the anisotropy ratio (longitudinal stiffness to transverse stiffness) is about 4 when the smooth muscle undergoes 70% strain shortening, indicating that the transverse stiffness reduces as the longitudinal stiffness increases. A sensitivity analysis with the simulation model showed that the area changes of the muscle tissue are much less sensitive to the longitudinal stiffness and the in-plane shear modulus than to the major Poisson's ratio. Copyright 2003 Wiley Periodicals, Inc.

  4. On smoothness-asymmetric null infinities

    Energy Technology Data Exchange (ETDEWEB)

    Valiente Kroon, Juan Antonio [School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS (United Kingdom)

    2006-05-21

    We discuss the existence of asymptotically Euclidean initial data sets for the vacuum Einstein field equations which would give rise (modulo an existence result for the evolution equations near spatial infinity) to developments with a past and a future null infinity of different smoothness. For simplicity, the analysis is restricted to the class of conformally flat, axially symmetric initial data sets. It is shown how the free parameters in the second fundamental form of the data can be used to satisfy certain obstructions to the smoothness of null infinity. The resulting initial data sets could be interpreted as those of some sort of (nonlinearly) distorted Schwarzschild black hole. Their developments would admit a peeling future null infinity, but at the same time have a polyhomogeneous (non-peeling) past null infinity.

  5. Quantum state smoothing for classical mixtures

    CERN Document Server

    Tan, D; Mølmer, K; Murch, K W

    2016-01-01

    In quantum mechanics, wave functions and density matrices represent our knowledge about a quantum system and give probabilities for the outcomes of measurements. If the combined dynamics and measurements on a system lead to a density matrix $\rho(t)$ with only diagonal elements in a given basis $\{|n\rangle\}$, it may be treated as a classical mixture, i.e., a system which randomly occupies the basis states $|n\rangle$ with probabilities $\rho_{nn}(t)$. Fully equivalent to so-called smoothing in classical probability theory, subsequent probing of the occupation of the states $|n\rangle$ improves our ability to retrodict what was the outcome of a projective state measurement at time $t$. Here, we show with experiments on a superconducting qubit that the smoothed probabilities do not, in the same way as the diagonal elements of $\rho$, permit a classical mixture interpretation of the state of the system at the past time $t$.
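    The classical smoothing the abstract refers to is the forward-backward recursion familiar from hidden Markov models: a past state is conditioned on both earlier and later observations, sharpening the retrodiction relative to filtering alone. A toy two-state sketch (transition and emission probabilities entirely made up) illustrates it:

```python
import numpy as np

# Toy classical smoothing (forward-backward) for a two-state Markov chain
# observed through a noisy channel. All probabilities below are assumptions.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # T[i, j] = P(next state j | state i)
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])   # E[s, y] = P(observation y | state s)
pi = np.array([0.5, 0.5])    # initial state distribution
obs = [0, 0, 1, 1, 0]

# Forward pass: alpha_t(s) proportional to P(y_1..y_t, s_t = s).
alpha = np.zeros((len(obs), 2))
alpha[0] = pi * E[:, obs[0]]
for t in range(1, len(obs)):
    alpha[t] = (alpha[t - 1] @ T) * E[:, obs[t]]

# Backward pass: beta_t(s) = P(y_{t+1}..y_T | s_t = s).
beta = np.ones((len(obs), 2))
for t in range(len(obs) - 2, -1, -1):
    beta[t] = T @ (E[:, obs[t + 1]] * beta[t + 1])

# Smoothed marginals P(s_t | all observations): future data also
# constrains the past state, unlike the forward (filtered) estimate alone.
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)
```

The quantum experiment's point is precisely that this classical picture, where smoothed probabilities describe occupation of definite basis states, can fail for the qubit.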

  6. Protostellar outflows with Smoothed Particle Magnetohydrodynamics (SPMHD)

    CERN Document Server

    Bürzle, Florian; Stasyszyn, Federico; Dolag, Klaus; Klessen, Ralf S

    2011-01-01

    The protostellar collapse of a molecular cloud core is usually accompanied by outflow phenomena. The latter are thought to be driven by magnetorotational processes from the central parts of the protostellar disc. While several 3D AMR/nested grid studies of outflow phenomena in collapsing magnetically supercritical dense cores have been reported in the literature, so far no such simulation has been performed using the Smoothed Particle Hydrodynamics (SPH) method. This is mainly due to intrinsic numerical difficulties in handling magnetohydrodynamics within SPH, which only recently were partly resolved. In this work, we use an approach where we evolve the magnetic field via the induction equation, augmented with stability correction and divergence cleaning schemes. We consider the collapse of a rotating core of one solar mass, threaded by a weak magnetic field initially parallel to the rotation axis so that the core is magnetically supercritical. We show that Smoothed Particle Magnetohydrodynamics (SPMHD) is a...

  7. Air flow through smooth and rough cracks

    Energy Technology Data Exchange (ETDEWEB)

    Kula, H.-G.; Sharples, S. [Sheffield Univ. (United Kingdom). Dept. of Building Science

    1994-12-31

    A series of laboratory experiments is described which investigated the effect of surface roughness on the air flow characteristics of simple, straight-through, no-bend cracks with smooth and rough internal surfaces. The crack thicknesses used in the study were 1.0, 1.5 and 2.0 mm. The crack lengths, in the direction of flow, were 50.8 mm and 76.2 mm. For the rough cracks the roughness was simulated with two different grades of commercially available emery cloth (grade 60 and 100). The experimental results were satisfactorily fitted to a quadratic relationship between Δp and Q of the form Δp = AQ + BQ² for both the smooth and rough crack data. The effect of roughness on the reduction of air flow through a crack is also discussed. (author)
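    The quadratic crack-flow relation Δp = AQ + BQ² reported above is an ordinary least-squares problem with no intercept term. A minimal sketch on synthetic, noise-free data (the coefficient values are made up, not the paper's):

```python
import numpy as np

# Fit dp = A*Q + B*Q^2 (no constant term) by linear least squares.
# A = 2.0 and B = 0.5 are assumed values used to generate synthetic data.
Q = np.linspace(0.5, 5.0, 10)       # flow rate
dp = 2.0 * Q + 0.5 * Q**2           # synthetic "measurements", noise-free

# Design matrix with columns Q and Q^2; the model passes through the origin,
# as it must (zero flow gives zero pressure drop).
X = np.column_stack([Q, Q**2])
coef, *_ = np.linalg.lstsq(X, dp, rcond=None)
A, B = coef
```

On noise-free data the fit recovers A and B exactly; with real measurements the residuals would quantify how "satisfactorily" the quadratic form holds.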

  8. Compensating for estimation smoothing in kriging

    Science.gov (United States)

    Olea, R.A.; Pawlowsky, Vera

    1996-01-01

    Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator: compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft² slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well, better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.

  9. Just-in-Time Smoothing Through Batching

    OpenAIRE

    Wieslaw Kubiak; Mesut Yavuz

    2008-01-01

    This paper presents two methods to solve the production smoothing problem in mixed-model just-in-time (JIT) systems with large setup and processing time variability between different models the systems produce. The problem is motivated by production planning at a leading U.S. automotive pressure hose manufacturer. One method finds all Pareto-optimal solutions that minimize total production rate variation of models and work in process (WIP), and maximize system utilization and responsiveness. ...

  10. Virtual Cinematography Using Optimization and Temporal Smoothing

    OpenAIRE

    Litteneker, Alan Ulfers

    2016-01-01

    The problem of automatic virtual cinematography is often approached as an optimization problem. By identifying the extrema of an objective function matching some desired parameters, such as those common in live action photography or cinematography, a suitable camera pose or path can be automatically determined. With several constraints on function form, multiple objective functions can be combined into a single optimizable function, which can be further extended to model the smoothness of the...

  11. GLOBAL SMOOTHNESS PRESERVATION BY BIVARIATE INTERPOLATION OPERATORS

    Institute of Scientific and Technical Information of China (English)

    S.G.Gal; J.Szabados

    2003-01-01

    Extending the results of [4] in the univariate case, in this paper we prove that the bivariate interpolation polynomials of Hermite-Fejer based on the Chebyshev nodes of the first kind, those of Lagrange based on the Chebyshev nodes of second kind and ± 1, and those of bivariate Shepard operators, have the property of partial preservation of global smoothness, with respect to various bivariate moduli of continuity.

  12. Modeling Water Waves with Smoothed Particle Hydrodynamics

    Science.gov (United States)

    2013-09-30

    Smoothed Particle Hydrodynamics (SPH) is a meshless numerical method that is being developed for the study of nearshore waves and other Navy needs. The Lagrangian nature of SPH allows the modeling of wave breaking, surf zones, ship waves, and wave-structure interaction, where the free surface becomes ... flows, such as undertow, longshore currents, and rip currents. APPROACH: The approach is based on improving various aspects of the SPH code ...

  13. The Smooth Muscle of the Artery

    Science.gov (United States)

    1975-01-01

    ruling out the possibility that depolarization is a junction potential due to movement. The low resting potential shown is indicative of the degree of ... which is embryologically and functionally related to vascular smooth muscle, many of the electrical ... [section: Sodium Pump in the Control of Muscle Contraction] ... consideration of 5-hydroxytryptamine as well as histamine as being the factor that we are studying. This does not rule out the possibility that either

  14. Robust Filtering and Smoothing with Gaussian Processes

    OpenAIRE

    Deisenroth, Marc Peter; Turner, Ryan; Huber, Marco F.; Hanebeck, Uwe D.; Rasmussen, Carl Edward

    2012-01-01

    We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models. GPs are gaining increasing importance in signal processing, machine learning, robotics, and control for representing unknown system functions by posterior probability distributions. This modern way of "system identification" is more robust than finding p...

  15. Demosaicing by Smoothing along 1D Features

    OpenAIRE

    Ajdin, Boris; Hullin, Matthias B.; Fuchs, Christian; Seidel, Hans-Peter; Lensch, Hendrik P. A.

    2008-01-01

    Most digital cameras capture color pictures in the form of an image mosaic, recording only one color channel at each pixel position. Therefore, an interpolation algorithm needs to be applied to reconstruct the missing color information. In this paper we present a novel Bayer pattern demosaicing approach, employing stochastic global optimization performed on a pixel neighborhood. We are minimizing a newly developed cost function that increases smoothness along one-dimen...

  16. Random Walk Smooth Transition Autoregressive Models

    OpenAIRE

    2004-01-01

    This paper extends the family of smooth transition autoregressive (STAR) models by proposing a specification in which the autoregressive parameters follow random walks. The random walks in the parameters can capture structural change within a regime switching framework, but in contrast to the time varying STAR (TV-STAR) specification recently introduced by Lundbergh et al (2003), structural change in our random walk STAR (RW-STAR) setting follows a stochastic process rather than a determinist...

  17. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    Science.gov (United States)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

    Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous, due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by a Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, where the contrived model has anomalies of different scales but an uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show features similar to the results of the traditional method; however, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful for achieving spatially varying resolution in regions with heterogeneous ray path distribution.
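    The spatially varying smoothing constraint can be sketched as a Gaussian a priori covariance whose correlation length shrinks where ray density is high (less smoothing where the data constrain the model well). The symmetric product L_i·L_j used below is one simple choice of non-stationary kernel, and all numerical values are assumptions; the paper's exact form may differ.

```python
import numpy as np

# 1-D sketch of a Gaussian a priori model covariance with a correlation
# length that varies inversely with ray-path density. Values are assumed.
x = np.linspace(0.0, 100.0, 50)                  # model nodes [km]
ray_density = 1.0 + 9.0 * np.exp(-((x - 50.0) / 15.0) ** 2)  # dense near 50 km

L = 20.0 / ray_density   # short correlation length where sampling is dense [km]
sigma = 0.1              # a priori model standard deviation [km/s]

# C_ij = sigma^2 * exp(-d_ij^2 / (2 L_i L_j)): a simple symmetric combination
# of the per-node correlation lengths (illustrative, not guaranteed to be
# the exact non-stationary kernel used in the paper).
d2 = (x[:, None] - x[None, :]) ** 2
C = sigma**2 * np.exp(-d2 / (2.0 * L[:, None] * L[None, :]))
```

Inverting with this covariance smooths strongly across poorly sampled nodes (long L) while letting well-sampled regions retain short-wavelength structure.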

  18. On the thermodynamics of smooth muscle contraction

    Science.gov (United States)

    Stålhand, Jonas; McMeeking, Robert M.; Holzapfel, Gerhard A.

    2016-09-01

    Cell function is based on many dynamically complex networks of interacting biochemical reactions. Enzymes may increase the rate of only those reactions that are thermodynamically consistent. In this paper we specifically treat the contraction of smooth muscle cells from the continuum thermodynamics point of view by considering them as an open system where matter passes through the cell membrane. We systematically set up a well-known four-state kinetic model for the cross-bridge interaction of actin and myosin in smooth muscle, where the transition between each state is driven by forward and reverse reactions. Chemical, mechanical and energy balance laws are provided in local forms, while energy balance is also formulated in the more convenient temperature form. We derive the local (non-negative) production of entropy from which we deduce the reduced entropy inequality and the constitutive equations for the first Piola-Kirchhoff stress tensor, the heat flux, the ion and molecular flux and the entropy. One example for smooth muscle contraction is analyzed in more detail in order to provide orientation within the established general thermodynamic framework. In particular the stress evolution, heat generation, muscle shortening rate and a condition for muscle cooling are derived.
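The well-known four-state kinetic scheme referenced here is commonly identified with the Hai-Murphy cross-bridge model (M &lt;-&gt; Mp &lt;-&gt; AMp &lt;-&gt; AM -&gt; M with first-order kinetics). A minimal forward-Euler integration, with purely illustrative rate constants, shows how the state fractions evolve while their sum stays conserved:

```python
import numpy as np

def four_state_rhs(y, k1=0.5, k2=0.1, k3=0.4, k4=0.1, k5=0.5, k6=0.1, k7=0.01):
    """Right-hand side of the four-state cross-bridge kinetics.
    States: M (free dephosphorylated), Mp (free phosphorylated),
    AMp (attached phosphorylated), AM (latch). Rates (1/s) are illustrative."""
    M, Mp, AMp, AM = y
    dM   = -k1 * M + k2 * Mp + k7 * AM
    dMp  =  k1 * M - (k2 + k3) * Mp + k4 * AMp
    dAMp =  k3 * Mp - (k4 + k5) * AMp + k6 * AM
    dAM  =  k5 * AMp - (k6 + k7) * AM
    return np.array([dM, dMp, dAMp, dAM])  # derivatives sum to zero exactly

y = np.array([1.0, 0.0, 0.0, 0.0])  # start fully in the free M state
dt = 0.01
for _ in range(5000):                # 50 s of forward-Euler integration
    y = y + dt * four_state_rhs(y)
```

Because each transition appears once as a loss and once as a gain, the total myosin fraction is conserved, which mirrors the mass balance underlying the paper's open-system treatment.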

  19. Notch Signaling in Vascular Smooth Muscle Cells.

    Science.gov (United States)

    Baeten, J T; Lilly, B

    2017-01-01

    The Notch signaling pathway is a highly conserved pathway involved in cell fate determination in embryonic development and also functions in the regulation of physiological processes in several systems. It plays an especially important role in vascular development and physiology by influencing angiogenesis, vessel patterning, arterial/venous specification, and vascular smooth muscle biology. Aberrant or dysregulated Notch signaling is the cause of or a contributing factor to many vascular disorders, including inherited vascular diseases, such as cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy, associated with degeneration of the smooth muscle layer in cerebral arteries. Like most signaling pathways, the Notch signaling axis is influenced by complex interactions with mediators of other signaling pathways. This complexity is also compounded by different members of the Notch family having both overlapping and unique functions. Thus, it is vital to fully understand the roles and interactions of each Notch family member in order to effectively and specifically target their exact contributions to vascular disease. In this chapter, we will review the Notch signaling pathway in vascular smooth muscle cells as it relates to vascular development and human disease.

  20. Southern Appalachian Regional Seismic Network

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, S.C.C.; Johnston, A.C.; Chiu, J.M. [Memphis State Univ., TN (United States). Center for Earthquake Research and Information

    1994-08-01

    The seismic activity in the southern Appalachian area has been monitored since late 1979 by the Southern Appalachian Regional Seismic Network (SARSN), operated by the Center for Earthquake Research and Information (CERI) at Memphis State University. This network provides good spatial coverage for earthquake locations, especially in east Tennessee. Activity is concentrated in the Valley and Ridge province of eastern Tennessee, as opposed to the Blue Ridge or Inner Piedmont. The large majority of these events lie between the New York-Alabama lineament and the Clingman/Ocoee lineament, magnetic anomalies produced by deep-seated basement structures. SARSN, even with its wide station spacing, has therefore been able to define the essential first-order seismological characteristics of the Southern Appalachian seismic zone. The focal depths of the southeastern U.S. earthquakes concentrate between 8 and 16 km, occurring principally beneath the Appalachian overthrust. In cross-sectional views, the average seismicity is shallower to the east beneath the Blue Ridge and Piedmont provinces and deeper to the west beneath the Valley and Ridge and the North American craton. Results of recent focal mechanism studies using the CERI digital earthquake catalog between October 1986 and December 1991 indicate that the basement of the Valley and Ridge province is under horizontal, NE-SW compressive stress. Right-lateral strike-slip faulting on nearly north-south fault planes is preferred because it agrees with the trend of the regional magnetic anomaly pattern.

  1. Seismic hazard studies in Egypt

    Science.gov (United States)

    Mohamed, Abuo El-Ela A.; El-Hadidy, M.; Deif, A.; Abou Elenean, K.

    2012-12-01

    The study of earthquake activity and seismic hazard assessment in Egypt is very important because of the rapid growth of large investments in national projects, especially the nuclear power plant planned for the northern part of Egypt. Although Egypt is characterized by low seismicity, it has experienced damaging earthquakes throughout its history. The seismotectonic setting of Egypt suggests that large earthquakes are possible, particularly along the Gulf of Aqaba-Dead Sea transform, the subduction zones along the Hellenic and Cyprean Arcs, and the northern Red Sea triple junction. In addition, some significant inland sources at Aswan, Dahshour, and the Cairo-Suez district should be considered. The seismic hazard for Egypt is calculated using a probabilistic approach (for a grid of 0.5° × 0.5°) within a logic-tree framework. Alternative seismogenic models and ground-motion scaling relationships are selected to account for epistemic uncertainty. Seismic hazard values on rock were calculated to create contour maps for four ground-motion spectral periods and for different return periods. In addition, uniform hazard spectra for rock sites at 25 different periods, and probabilistic hazard curves for the cities of Cairo and Alexandria, are presented. The highest peak ground acceleration (PGA) values were found close to the Gulf of Aqaba, about 220 gal for a 475-year return period, while the lowest PGA values, less than 25 gal, were found in the western part of the Western Desert.
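The probabilistic calculation behind such hazard curves can be sketched as integrating a truncated Gutenberg-Richter recurrence model against a ground-motion prediction equation with lognormal scatter. All numerical values below (a- and b-values, source distance, GMPE coefficients) are illustrative placeholders, not those of the Egypt study:

```python
import numpy as np
from math import erf, sqrt

def norm_sf(z):
    """Survival function of the standard normal distribution."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def hazard_curve(pga_levels, a=3.5, b=1.0, m_min=4.5, m_max=7.5, dist_km=50.0,
                 c0=-3.0, c1=1.0, c2=1.2, sigma=0.6):
    """Annual rate of exceeding each PGA level (g) from one areal source."""
    dm = 0.1
    mags = np.arange(m_min, m_max, dm)
    # incremental annual rate of events in each magnitude bin (truncated G-R)
    rates = 10.0**(a - b * mags) - 10.0**(a - b * (mags + dm))
    lam = np.zeros(len(pga_levels))
    for m, r in zip(mags, rates):
        ln_median = c0 + c1 * m - c2 * np.log(dist_km)  # hypothetical GMPE
        for i, x in enumerate(pga_levels):
            z = (np.log(x) - ln_median) / sigma          # lognormal scatter
            lam[i] += r * norm_sf(z)
    return lam

pga = np.array([0.05, 0.1, 0.2, 0.4])
lam = hazard_curve(pga)  # annual exceedance rates, decreasing with PGA level
```

In a full PSHA these rates would be summed over all sources in the logic tree; the 475-year PGA is the level at which the total annual exceedance rate equals 1/475.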

  2. Seismic amplitude recovery with curvelets

    NARCIS (Netherlands)

    Moghaddam, P.P.; Herrmann, F.J.; Stolk, C.C.

    2007-01-01

    A non-linear singularity-preserving solution to the least-squares seismic imaging problem with sparseness and continuity constraints is proposed. The applied formalism explores curvelets as a directional frame that, by their sparsity on the image, and their invariance under the imaging operators,

  3. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-12-01

    We have developed and tested technology for a new type of direct hydrocarbon detection. The method uses inelastic rock properties to greatly enhance the sensitivity of surface seismic methods to the presence of oil and gas saturation. These methods include the use of energy absorption, dispersion, and attenuation (Q) along with traditional seismic attributes like velocity, impedance, and AVO. Our approach is to combine three elements: (1) a synthesis of the latest rock physics understanding of how rock inelasticity is related to rock type, pore fluid types, and pore microstructure; (2) synthetic seismic modeling that will help identify the relative contributions of scattering and intrinsic inelasticity to apparent Q attributes; and (3) robust algorithms that extract relative wave attenuation attributes from seismic data. This project provides: (1) additional petrophysical insight from acquired data; (2) increased understanding of rock and fluid properties; (3) new techniques to measure reservoir properties that are not currently available; and (4) tools to more accurately describe the reservoir and predict oil location and volumes. These methodologies will improve the industry's ability to predict and quantify oil and gas saturation distribution, and to apply this information through geologic models to enhance reservoir simulation. We have applied for two separate patents relating to work completed as part of this project.

  4. Seismic isolation for Advanced LIGO

    CERN Document Server

    Abbott, R; Allen, G; Cowley, S; Daw, E; Debra, D; Giaime, J; Hammond, G; Hammond, M; Hardham, C; How, J; Hua, W; Johnson, W; Lantz, B; Mason, K; Mittleman, R; Nichol, J; Richman, S; Rollins, J; Shoemaker, D; Stapfer, G; Stebbins, R

    2002-01-01

    The baseline design concept for a seismic isolation component of the proposed 'Advanced LIGO' detector upgrade has been developed with proof-of-principle experiments and computer models. It consists of a two-stage in-vacuum active isolation platform that is supported by an external hydraulic actuation stage. Construction is underway for prototype testing of a full-scale preliminary design.

  5. Seismicity dynamics and earthquake predictability

    Directory of Open Access Journals (Sweden)

    G. A. Sobolev

    2011-02-01

    Full Text Available Many factors complicate earthquake sequences, including the heterogeneity and self-similarity of the geological medium, the hierarchical structure of faults and stresses, and small-scale variations in the stresses from different sources. A seismic process is a type of nonlinear dissipative system demonstrating opposing trends towards order and chaos. Transitions from equilibrium to unstable equilibrium and local dynamic instability appear when there is an inflow of energy; reverse transitions appear when energy is dissipating. Several metastable areas of different scales exist in the seismically active region before an earthquake. Some earthquakes are preceded by precursory phenomena of different scales in space and time. These include long-term activation, seismic quiescence, foreshocks in the broad and narrow sense, hidden periodical vibrations, effects of the synchronization of seismic activity, and others. Such phenomena indicate that the dynamic system of the lithosphere is moving to a new state: catastrophe. A number of examples of medium-term and short-term precursors are shown in this paper. However, no precursors identified to date are clear and unambiguous: the percentage of missed targets and false alarms is high. Weak fluctuations from external and internal sources play a large role on the eve of an earthquake, and the occurrence time of the future event depends on the collective behavior of triggers. The main task is to improve the methods of metastable zone detection and probabilistic forecasting.

  6. Adaptive Semisupervised Inference

    CERN Document Server

    Azizyan, Martin; Wasserman, Larry

    2011-01-01

    Semisupervised methods inevitably invoke some assumption that links the marginal distribution of the features to the regression function of the label. Most commonly, the cluster or manifold assumptions are used, which imply that the regression function is smooth over high-density clusters or manifolds supporting the data. A generalization of these assumptions is that the regression function is smooth with respect to some density-sensitive distance. This motivates the use of a density-based metric for semisupervised learning. We analyze this setting and make the following contributions: (a) we propose a semi-supervised learner that uses a density-sensitive kernel and show that it provides better performance than any supervised learner if the density support set has a small condition number, and (b) we show that it is possible to adapt to the degree of semi-supervisedness using a data-dependent choice of a parameter that controls the sensitivity of the distance metric to the density. This ensures that the semisupervis...
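A toy version of a density-sensitive distance can be written in one dimension by integrating the reciprocal of a kernel density estimate along the segment between two points, so that paths through dense regions are "shorter." The KDE bandwidth, the regularizer, and the exact functional form are illustrative choices, not the paper's construction:

```python
import numpy as np

def kde(xs, data, h=0.5):
    """Simple 1-D Gaussian kernel density estimate (unnormalized is fine here)."""
    return np.mean(np.exp(-(xs[:, None] - data[None, :])**2 / (2.0 * h * h)),
                   axis=1)

def density_distance(x, y, data, h=0.5, n=50):
    """Approximate the integral of 1/p(t) from x to y: large where density is low."""
    ts = np.linspace(x, y, n)
    p = kde(ts, data, h)
    dt = (y - x) / (n - 1)
    return np.sum(1.0 / (p + 1e-9)) * dt  # midpoint-style Riemann sum

rng = np.random.default_rng(0)
# two dense clusters around 0 and 5, sparse in between
data = np.concatenate([rng.normal(0.0, 0.3, 200), rng.normal(5.0, 0.3, 200)])
d_dense = density_distance(-0.5, 0.5, data)   # segment inside a cluster
d_sparse = density_distance(2.0, 3.0, data)   # same length, low density
```

Two points a unit apart inside a cluster end up much closer, in this metric, than two points a unit apart in the low-density gap, which is exactly the behavior a density-sensitive kernel exploits.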

  7. Physics-based Probabilistic Seismic Hazard Analysis for Seismicity Induced by Fluid Injection

    Science.gov (United States)

    Foxall, W.; Hutchings, L. J.; Johnson, S.; Savy, J. B.

    2011-12-01

    Risk associated with induced seismicity (IS) is a significant factor in the design, permitting and operation of enhanced geothermal, geological CO2 sequestration and other fluid injection projects. Whereas conventional probabilistic seismic hazard and risk analysis (PSHA, PSRA) methods provide an overall framework, they require adaptation to address specific characteristics of induced earthquake occurrence and ground motion estimation, and the nature of the resulting risk. The first problem is to predict the earthquake frequency-magnitude distribution of induced events for PSHA required at the design and permitting stage before the start of injection, when an appropriate earthquake catalog clearly does not exist. Furthermore, observations and theory show that the occurrence of earthquakes induced by an evolving pore-pressure field is time-dependent, and hence does not conform to the assumption of Poissonian behavior in conventional PSHA. We present an approach to this problem based on generation of an induced seismicity catalog using numerical simulation of pressure-induced shear failure in a model of the geologic structure and stress regime in and surrounding the reservoir. The model is based on available measurements of site-specific in-situ properties as well as generic earthquake source parameters. We also discuss semi-empirical analysis to sequentially update hazard and risk estimates for input to management and mitigation strategies using earthquake data recorded during and after injection. The second important difference from conventional PSRA is that, in addition to potentially damaging ground motions, a significant risk associated with induced seismicity in general is the perceived nuisance caused in nearby communities by small, local felt earthquakes, which generally occur relatively frequently. Including these small, usually shallow earthquakes in the hazard analysis requires extending the ground motion frequency band considered to include the high
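The time-dependent occurrence described here can be illustrated by simulating a nonhomogeneous Poisson process whose rate is tied to an evolving injection pressure, using Lewis-Shedler thinning. The pressure history, triggering threshold, and rate constant below are all hypothetical, standing in for the paper's physics-based failure simulation:

```python
import numpy as np

def pressure(t, t_inj=100.0):
    """Toy pore-pressure history: linear rise during injection, then decay."""
    t = np.asarray(t, dtype=float)
    return np.where(t < t_inj, 0.05 * t, 5.0 * np.exp(-(t - t_inj) / 50.0))

def simulate_times(T=300.0, k=0.5, P_crit=1.0, seed=0):
    """Event times of a nonhomogeneous Poisson process with
    lambda(t) = k * max(P(t) - P_crit, 0), via Lewis-Shedler thinning."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, T, 1000)
    lam_max = k * np.clip(pressure(grid) - P_crit, 0.0, None).max()
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from bounding process
        if t > T:
            break
        lam_t = k * max(float(pressure(t)) - P_crit, 0.0)
        if rng.random() < lam_t / lam_max:    # accept with prob lambda(t)/lam_max
            events.append(t)
    return np.array(events)

events = simulate_times()
```

Events appear only while the pressure exceeds the critical threshold and cluster around peak pressure, which is the kind of time dependence that violates the homogeneous-Poisson assumption of conventional PSHA.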

  8. Seismic Imaging of Sandbox Models

    Science.gov (United States)

    Buddensiek, M. L.; Krawczyk, C. M.; Kukowski, N.; Oncken, O.

    2009-04-01

    Analog sandbox simulations have been applied to study structural geological processes to provide qualitative and quantitative insights into the evolution of mountain belts and basins. These sandbox simulations provide either two-dimensional and dynamic or pseudo-three-dimensional and static information. To extend the dynamic simulations to three dimensions, we combine the analog sandbox simulation techniques with seismic physical modeling of these sandbox models. The long-term objective of this approach is to image seismic and seismological events of static and actively deforming 3D analog models. To achieve this objective, a small-scale seismic apparatus, composed of a water tank, a PC control unit including piezo-electric transducers, and a positioning system, was built for laboratory use. For the models, we use granular material such as sand and glass beads, so that the simulations can evolve dynamically. The granular models are required to be completely water saturated so that the sources and receivers are directly and well coupled to the propagating medium. Ultrasonic source frequencies (~500 kHz), corresponding to wavelengths ~5 times the grain diameter, are necessary to resolve small-scale structures. In three experiments on different two-layer models, we show that (1) interfaces between layers of granular materials can be resolved, depending more on the interface preparation than on the material itself, and (2) the dilation between the sand grains caused by a string pulled through the grains, simulating a shear zone, produces a reflection that can be detected in the seismic data. In the third model, we perform a seismic reflection survey across a model that contains both the prepared interface and a shear zone, and apply 2D seismic reflection processing to improve the resolution. Especially for more complex models, the clarity and penetration depth need to be improved to study the evolution of geological structures in dynamic

  9. Statistical Seismology and Induced Seismicity

    Science.gov (United States)

    Tiampo, K. F.; González, P. J.; Kazemian, J.

    2014-12-01

    While seismicity triggered or induced by natural resource production such as mining or water impoundment in large dams has long been recognized, the recent increase in the unconventional production of oil and gas has been linked to a rapid rise in seismicity in many places, including central North America (Ellsworth et al., 2012; Ellsworth, 2013). Worldwide, induced events of M~5 have occurred and, although rare, have resulted in both damage and public concern (Horton, 2012; Keranen et al., 2013). In addition, over the past twenty years, the increase in both the number and coverage of seismic stations has resulted in an unprecedented ability to precisely record the magnitude and location of large numbers of small-magnitude events. The increase in the number and type of seismic sequences available for detailed study has revealed differences in their statistics that were previously difficult to quantify. For example, seismic swarms that produce significant numbers of foreshocks as well as aftershocks have been observed in different tectonic settings, including California, Iceland, and the East Pacific Rise (McGuire et al., 2005; Shearer, 2012; Kazemian et al., 2014). Similarly, smaller events have been observed prior to larger induced events in several instances of energy production. The field of statistical seismology has long focused on the question of triggering and the mechanisms responsible (Stein et al., 1992; Hill et al., 1993; Steacy et al., 2005; Parsons, 2005; Main et al., 2006). For example, in most cases the associated stress perturbations are much smaller than the earthquake stress drop, suggesting an inherent sensitivity to relatively small stress changes (Nalbant et al., 2005). Induced seismicity provides the opportunity to investigate triggering and, in particular, the differences between long- and short-range triggering.
Here we investigate the statistics of induced seismicity sequences from around the world, including central North America and Spain, and

  10. 3D Seismic Reflection Experiment over the Galicia Deep Basin

    Science.gov (United States)

    Sawyer, D. S.; Jordan, B.; Reston, T. J.; Minshull, T. A.; Klaeschen, D.; Ranero, C.; Shillington, D. J.; Morgan, J. K.

    2014-12-01

    From June through September 2013, a 3D reflection and a long-offset seismic experiment were conducted at the Galicia rifted margin by investigators from the US, UK, Germany, and Spain. The 3D multichannel experiment covered 64 km by 20 km (1280 km2), using the RV Marcus Langseth. Four streamers 6 km long were deployed at 12.5 m hydrophone channel spacing. The streamers were 200 m apart. Two airgun arrays, each 3300 cu in, were fired alternately every 37.5 m, to collectively yield a 400 m wide sail line consisting of 8 CMP lines at 50 m spacing. The long-offset seismic experiment included 72 short-period OBSs deployed below the 3D reflection survey box. Most of the instruments recorded all of the airgun shots. The 3D seismic box covered a variety of geologic features. The Peridotite Ridge (PR) is associated with the exhumation of upper mantle rocks to the seafloor during the final stage of the continental separation between the Galicia Bank and the Grand Banks of Newfoundland. The S reflector is present below most of the continental blocks under the deep Galicia basin. S is interpreted as a low-angle detachment fault formed late in the rifting process, overlain by a number of rotated fault-block basins and ranges containing pre- and syn-rift sediments. Initial observations from stacked 3D seismic data, and samples of 2D pre-stack time migrated (PSTM) 3D seismic data, show that the PR is elevated above the present seafloor in the south and not exposed through the seafloor in the north. The relative smoothness of the PR surface for the entire 20 km N-S contrasts with the more complex, shorter-wavelength faulting of the continental crustal blocks to the east. The PR does not seem to show offsets or any apparent internal structure. The PSTM dip lines show substantial improvement for the structures in the deep sedimentary basin east of the PR. These seem to extend the S reflector somewhat farther to the west. The migrated data show a substantial network of

  11. Seismically Initiated Carbon Dioxide Gas Bubble Growth in Groundwater: A Mechanism for Co-seismic Borehole Water Level Rise and Remotely Triggered Secondary Seismicity

    Science.gov (United States)

    Crews, Jackson B.

    Visualization experiments, core-scale laboratory experiments, and numerical simulations were conducted to examine the transient effect of dilational seismic wave propagation on pore fluid pressure in aquifers hosting groundwater that is near saturation with respect to dissolved carbon dioxide (CO2) gas. Groundwater can become charged with dissolved CO2 through contact with gas-phase CO2 in the Earth's crust derived from magma degassing, metamorphism, and biogenic processes. The propagation of dilational seismic waves (e.g., Rayleigh and p-waves) causes oscillation of the mean normal confining stress and pore fluid pressure. When the amplitude of the pore fluid pressure oscillation is large enough to drive the pore fluid pressure below the bubble pressure, an aqueous-to-gas-phase transition can occur in the pore space, which causes a buildup of pore fluid pressure and reduces the inter-granular effective stress under confined conditions. In visualization experiments conducted in a Hele-Shaw cell representing a smooth-walled, vertically oriented fracture, millisecond-scale pressure perturbations triggered bubble nucleation and growth lasting tens of seconds, with resulting pore fluid overpressure proportional to the magnitude of the pressure perturbation. In a Berea sandstone core flooded with initially under-saturated aqueous CO2 under conditions representative of a confined aquifer, rapid reductions in confining stress triggered transient pore pressure rise up to 0.7 MPa (100 psi) overpressure on a timescale of ~10 hours. The rate of pore pressure buildup in the first 100 seconds was proportional to the saturation with respect to dissolved CO2 at the pore pressure minimum. Sinusoidal confining stress oscillations on a Berea sandstone core produced excess pore fluid pressure after the oscillations were terminated. 
    Confining stress oscillations in the 0.1-0.4 MPa (15-60 psi) amplitude range and 0.05-0.30 Hz frequency band increased the pore fluid pressure by 13-60 cm
Confining stress oscillations in the 0.1-0.4 MPa (15-60 psi) amplitude range and 0.05-0.30 Hz frequency band increased the pore fluid pressure by 13-60 cm

  12. ADAPT Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Diagnostics and Prognostics Testbed (ADAPT) Project Lead: Scott Poll Subject Fault diagnosis in electrical power systems Description The Advanced...

  13. Bronchoprotective effect of simulated deep inspirations in tracheal smooth muscle.

    Science.gov (United States)

    Pascoe, Christopher D; Donovan, Graham M; Bossé, Ynuk; Seow, Chun Y; Paré, Peter D

    2014-12-15

    Deep inspirations (DIs) taken before an inhaled challenge with a spasmogen limit airway responsiveness in nonasthmatic subjects. This phenomenon is called bronchoprotection and is severely impaired in asthmatic subjects. The ability of DIs to prevent a decrease in forced expiratory volume in 1 s (FEV1) was initially attributed to inhibition of airway narrowing. However, DIs taken before methacholine challenge limit airway responsiveness only when a test of lung function requiring a DI is used (FEV1). Therefore, it has been suggested that prior DIs enhance the compliance of the airways or airway smooth muscle (ASM). This would increase the strain the airway wall undergoes during the subsequent DI, which is part of the FEV1 maneuver. To investigate this phenomenon, we used ovine tracheal smooth muscle strips that were subjected to shortening elicited by acetylcholine with or without prior strain mimicking two DIs. The compliance of the shortened strip was then measured in response to a stress mimicking one DI. Our results show that the presence of "DIs" before acetylcholine-induced shortening resulted in 11% greater relengthening in response to the third DI, compared with the prior DIs. This effect, although small, is shown to be potentially important for the reopening of closed airways. The effect of prior DIs was abolished by the adaptation of ASM to either shorter or longer lengths or to a low baseline tone. These results suggest that DIs confer bronchoprotection because they increase the compliance of ASM, which, consequently, promotes greater strain from subsequent DI and fosters the reopening of closed airways.

  14. Time-dependent seismic tomography

    Science.gov (United States)

    Julian, B.R.; Foulger, G.R.

    2010-01-01

    Of methods for measuring temporal changes in seismic-wave speeds in the Earth, seismic tomography is among those that offer the highest spatial resolution. 3-D tomographic methods are commonly applied in this context by inverting seismic wave arrival time data sets from different epochs independently and assuming that differences in the derived structures represent real temporal variations. This assumption is dangerous because the results of independent inversions would differ even if the structure in the Earth did not change, due to observational errors and differences in the seismic ray distributions. The latter effect may be especially severe when data sets include earthquake swarms or aftershock sequences, and may produce the appearance of correlation between structural changes and seismicity when the wave speeds are actually temporally invariant. A better approach, which makes it possible to assess what changes are truly required by the data, is to invert multiple data sets simultaneously, minimizing the difference between models for different epochs as well as the rms arrival-time residuals. This problem leads, in the case of two epochs, to a system of normal equations whose order is twice as great as for a single epoch. The direct solution of this system would require twice as much memory and four times as much computational effort as would independent inversions. We present an algorithm, tomo4d, that takes advantage of the structure and sparseness of the system to obtain the solution with essentially no more effort than independent inversions require.
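The simultaneous two-epoch inversion can be sketched as a coupled least-squares problem that penalizes the difference between the epoch models. This toy dense solve illustrates only the block normal equations, without the sparsity exploitation that tomo4d performs; all matrices and the damping weight `mu` are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
G1 = rng.normal(size=(20, n))   # epoch-1 ray sensitivity matrix (synthetic)
G2 = rng.normal(size=(20, n))   # epoch-2 ray sensitivity matrix (synthetic)
m_true = np.ones(n)
d1 = G1 @ m_true                # epoch-1 arrival-time data
d2 = G2 @ (m_true + 0.1)        # epoch 2: model changed slightly
mu = 1.0                        # weight on the inter-epoch model difference

# Block normal equations of
#   min ||G1 m1 - d1||^2 + ||G2 m2 - d2||^2 + mu * ||m1 - m2||^2
I = np.eye(n)
A = np.block([[G1.T @ G1 + mu * I, -mu * I],
              [-mu * I, G2.T @ G2 + mu * I]])
b = np.concatenate([G1.T @ d1, G2.T @ d2])
m = np.linalg.solve(A, b)       # order 2n, twice that of a single epoch
m1, m2 = m[:n], m[n:]
```

The coupling term pulls the two models together, so only differences genuinely demanded by the data survive; tomo4d obtains the same solution while exploiting the sparseness of the blocks.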

  15. Seismic evaluation methods for existing buildings

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, B.J.

    1995-07-01

    Recent US Department of Energy natural phenomena hazards mitigation directives require the earthquake reassessment of existing hazardous facilities and general use structures. This applies also to structures located, in accordance with the Uniform Building Code, in Seismic Zone 0, where usually no consideration is given to seismic design but where DOE specifies seismic hazard levels. An economical approach for performing such a seismic evaluation, which relies heavily on the use of preexistent structural analysis results, is outlined below. Specifically, three different methods are used to estimate the seismic capacity of a building, which is a unit of a building complex located on a site considered low risk to earthquakes. For structures not originally seismically designed, which may not have, or be able to prove, sufficient capacity to meet new arbitrarily high seismic design requirements, and which are located on low-seismicity sites, it may be very cost effective to perform detailed site-specific seismic hazard studies in order to establish the true seismic threat. This is particularly beneficial to sites with many buildings and facilities to be seismically evaluated.

  16. Assessment of seismic loss dependence using copula.

    Science.gov (United States)

    Goda, Katsuichiro; Ren, Jiandong

    2010-07-01

    The catastrophic nature of seismic risk is attributed to spatiotemporal correlation of seismic losses of buildings and infrastructure. For seismic risk management, such correlated seismic effects must be adequately taken into account, since they affect the probability distribution of aggregate seismic losses of spatially distributed structures significantly, and its upper tail behavior can be of particular importance. To investigate seismic loss dependence for two closely located portfolios of buildings, simulated seismic loss samples, which are obtained from a seismic risk model of spatially distributed buildings by taking spatiotemporally correlated ground motions into account, are employed. The characterization considers a loss frequency model that incorporates one dependent random component acting as a common shock to all buildings, and a copula-based loss severity model, which facilitates the separate construction of marginal loss distribution functions and nonlinear copula function with upper tail dependence. The proposed method is applied to groups of wood-frame buildings located in southwestern British Columbia. Analysis results indicate that the dependence structure of aggregate seismic losses can be adequately modeled by the right heavy tail copula or Gumbel copula, and that for the considered example, overall accuracy of the proposed method is satisfactory at probability levels of practical interest (at most 10% estimation error of fractiles of aggregate seismic loss). The developed statistical seismic loss model may be adopted in dynamic financial analysis for achieving faster evaluation with reasonable accuracy.
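The upper tail dependence that makes the Gumbel copula suitable for aggregate-loss modeling can be checked numerically against its closed form λ_U = 2 − 2^(1/θ). The value θ = 2 is an arbitrary illustrative choice, not a parameter fitted in the study:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    s = (-np.log(u))**theta + (-np.log(v))**theta
    return np.exp(-s**(1.0 / theta))

theta = 2.0
u = 1.0 - 1e-6   # probe the upper tail
# empirical upper tail dependence: (1 - 2u + C(u, u)) / (1 - u) as u -> 1
lam_numeric = (1.0 - 2.0 * u + gumbel_copula(u, u, theta)) / (1.0 - u)
lam_closed = 2.0 - 2.0**(1.0 / theta)
```

A strictly positive λ_U is what lets the model reproduce the simultaneous extreme losses across two nearby portfolios that a Gaussian dependence structure (λ_U = 0) would understate.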

  17. Seismic Risk Perception compared with seismic Risk Factors

    Science.gov (United States)

    Crescimbene, Massimo; La Longa, Federica; Pessina, Vera; Pino, Nicola Alessandro; Peruzza, Laura

    2016-04-01

    The communication of natural hazards and their consequences is one of the more relevant ethical issues faced by scientists. In recent years, social studies have provided evidence that risk communication is strongly influenced by the risk perception of people. In order to develop effective information and risk communication strategies, the perception of risks and the influencing factors should be known. A theory that offers an integrative approach to understanding and explaining risk perception is still missing. To explain risk perception, it is necessary to consider several perspectives: social, psychological and cultural perspectives and their interactions. This paper presents the results of the CATI survey on seismic risk perception in Italy, conducted by INGV researchers with funding from the DPC. We built a questionnaire to assess seismic risk perception, with particular attention to comparing hazard, vulnerability and exposure perception with the real data for the same factors. The Seismic Risk Perception Questionnaire (SRP-Q) is designed using the semantic differential method, with opposite terms on a seven-point Likert scale. The questionnaire yields scores for five risk indicators: Hazard, Exposure, Vulnerability, People and Community, and Earthquake Phenomenon. The questionnaire was administered by telephone interview (C.A.T.I.) to a national statistical sample of over 4,000 people in January-February 2015. Results show that risk perception seems to be underestimated for all indicators considered. In particular, scores on the seismic Vulnerability factor are extremely low compared with the housing information provided by the respondents. Other data collected by the questionnaire concern earthquake information level, sources of information, earthquake occurrence with respect to other natural hazards, participation in risk reduction activities, and level of involvement. Research on risk perception aims to aid risk analysis and policy-making by

  18. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
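    The bookkeeping behind LTS can be sketched in a few lines. The toy below is an illustration only (the mesh sizes, wave speed, Courant number, and the one-update-per-substep cost model are invented, not taken from this work): each element gets a power-of-two refinement of the coarsest stable step, and the speedup is estimated against stepping the whole mesh at the smallest element's limit.

```python
import numpy as np

def lts_plan(h, c, cfl=0.5):
    """Per-element stable time steps and power-of-two LTS levels.

    h: element sizes, c: wave speeds, cfl: Courant number.
    Returns the coarse step, each element's substep count, and the
    estimated speedup over global (smallest-step) time stepping.
    """
    dt = cfl * np.asarray(h) / np.asarray(c)   # local CFL limit per element
    dt_coarse = dt.max()
    # each element refines the coarse step by the next power of two
    levels = np.ceil(np.log2(dt_coarse / dt)).astype(int)
    substeps = 2 ** levels
    # toy cost model: one update per element per (local) substep
    cost_global = len(dt) * (dt_coarse / dt.min())
    cost_lts = substeps.sum()
    return dt_coarse, substeps, cost_global / cost_lts

# a mesh where one small element would force a tiny global step
h = np.array([100.0, 100.0, 100.0, 100.0, 1.0])   # element sizes (m)
c = np.full(5, 3000.0)                            # wave speed (m/s)
dt_coarse, substeps, speedup = lts_plan(h, c)
```

    Only the single refined element takes many substeps; the rest of the mesh advances with the coarse step, which is where the runtime gain comes from.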

  19. Identifying Reflectors in Seismic Images via Statistic and Syntactic Methods

    Directory of Open Access Journals (Sweden)

    Carlos A. Perez

    2010-04-01

    Full Text Available In geologic interpretation of seismic reflection data, accurate identification of reflectors is the foremost step to ensure proper subsurface structural definition. Reflector information, along with other data sets, is a key factor in predicting the presence of hydrocarbons. In this work, mathematical and pattern recognition theory was adapted to design two statistical and two syntactic algorithms that constitute a tool for semiautomatic reflector identification. The interpretive power of these four schemes was evaluated in terms of prediction accuracy and computational speed. Among these, the semblance method was confirmed to render the greatest accuracy and speed. Syntactic methods offer an interesting alternative due to their inherently structural search method.
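    The abstract singles out the semblance method. As a hedged sketch (this is the standard semblance coefficient on a synthetic gather, not the paper's exact algorithm), coherence across a window of adjacent traces can be measured as the ratio of stacked energy to total energy:

```python
import numpy as np

def semblance(traces):
    """Semblance of a window of seismic traces.

    traces: array of shape (n_traces, n_samples). Returns a value in
    [0, 1]: 1 for perfectly coherent traces (identical up to a common
    positive scale), values near 0 for incoherent energy.
    """
    traces = np.asarray(traces, dtype=float)
    n = traces.shape[0]
    num = (traces.sum(axis=0) ** 2).sum()   # energy of the stack
    den = n * (traces ** 2).sum()           # total energy
    return num / den if den > 0 else 0.0

# coherent event: the same wavelet on every trace
t = np.linspace(0, 1, 100)
wavelet = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
coherent = np.tile(wavelet, (8, 1))
rng = np.random.default_rng(0)
noisy = coherent + 0.5 * rng.standard_normal(coherent.shape)
```

    A reflector shows up as a window where semblance stays high across traces, while noise drags the value down.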

  20. Computer techniques to aid the interpretation of salt bodies and stratigraphy in three-dimensional seismic volumes

    Science.gov (United States)

    Hammon, William S., III

    control filtering, rescaling, and smoothing operations. Linking the application of these operations to local data configuration makes these processes edge-preserving, and improves their performance. Part 2 is a description of a workflow to highlight and interpret the edges of a salt body using new volume processing techniques. Salt boundary reflections are highlighted by isolating locally high-amplitude reflections and filtering noise from the resulting sparse data volume. This isolation of salt reflections enables the application of pre-existing semi-automated interpretation techniques to the complex problem of salt interpretation. Part 3 is a description of Domain Transformation, a process to re-sample a seismic volume to create a volume of paleo-depositional surfaces. Interpreting channels and other stratigraphic features is much easier using this volume of paleo-depositional surfaces. Dip and the vertical component of fault offset are removed from the volume using this procedure. Part 4 discusses the application of cloth simulation to correct for the horizontal component of fault offset in a Domain Transformed data volume. Cloth simulation, a technique adapted from computer graphics, is used to evenly distribute fault deformation throughout a data set. This procedure produces an output Stratal domain volume with no null zones, enabling the calculation of volume attributes in the fully-flattened Stratal domain. Part 5 reiterates the major conclusions of each previous section and goes on to discuss potential avenues of future research. A variety of possible solutions to problems encountered and shortcomings of the existing techniques are described, with a brief discussion of the likelihood of success for each of these new topics of research. The overarching research question being addressed by these five sections is: "To what degree is it possible to utilize modern computer processing to aid the largely manual interpretation of 3D seismic volumes?"
This question stands

  1. Smooth Fano polytopes can not be inductively constructed

    DEFF Research Database (Denmark)

    Øbro, Mikkel

    2008-01-01

    We examine a concrete smooth Fano 5-polytope $P$ with 8 vertices with the following properties: There does not exist a smooth Fano 5-polytope $Q$ with 7 vertices such that $P$ contains $Q$, and there does not exist a smooth Fano 5-polytope $R$ with 9 vertices such that $R$ contains $P$. As the po...

  2. Exponential smoothing for financial time series data forecasting

    Directory of Open Access Journals (Sweden)

    Kuzhda, Tetyana Ivanivna

    2014-05-01

    Full Text Available The article begins with the formulation of a predictive-learning task called exponential smoothing forecasting. Exponential smoothing is commonly applied to financial markets such as stock, bond, foreign exchange, insurance, credit, and primary and secondary markets. Exponential smoothing models are useful in providing valuable decision information for investors. Simple and double exponential smoothing are the two basic types of exponential smoothing method. Simple exponential smoothing is suitable for financial time series forecasting over a specified time period; it weights past observations with exponentially decreasing weights to forecast future values. Double exponential smoothing refines the simple model by adding another component that takes any trend in the data into account. Measurement of forecast accuracy is described in this article. Finally, the quantitative value of the price per common share forecast using simple exponential smoothing is calculated, and applied recommendations are given for determining the price-per-common-share forecast using double exponential smoothing.
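    The two update rules described in the abstract can be sketched in a few lines of Python. This is a minimal illustration with an invented share-price series (production work would typically use a library such as statsmodels): simple smoothing lags behind a trend, while the double (Holt) variant extrapolates it.

```python
def simple_exp_smoothing(x, alpha):
    """One-step-ahead forecast: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = x[0]
    for v in x[1:]:
        s = alpha * v + (1 - alpha) * s
    return s

def double_exp_smoothing(x, alpha, beta, horizon=1):
    """Holt's linear (double) exponential smoothing forecast:
    separate recursions for the level and the trend component."""
    level, trend = x[0], x[1] - x[0]
    for v in x[1:]:
        prev = level
        level = alpha * v + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + horizon * trend

prices = [10.0, 10.5, 11.0, 11.5, 12.0]   # hypothetical trending share price
```

    On this perfectly linear series the double-smoothing forecast continues the trend (12.5), whereas simple smoothing under-forecasts it, which is exactly the behaviour the abstract describes.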

  3. Alternative Smoothing and Scaling Strategies for Weighted Composite Scores

    Science.gov (United States)

    Moses, Tim

    2014-01-01

    In this study, smoothing and scaling approaches are compared for estimating subscore-to-composite scaling results involving composites computed as rounded and weighted combinations of subscores. The considered smoothing and scaling approaches included those based on raw data, on smoothing the bivariate distribution of the subscores, on smoothing…

  4. Neurophysiology and Neuroanatomy of Smooth Pursuit in Humans

    Science.gov (United States)

    Lencer, Rebekka; Trillenberg, Peter

    2008-01-01

    Smooth pursuit eye movements enable us to focus our eyes on moving objects by utilizing well-established mechanisms of visual motion processing, sensorimotor transformation and cognition. Novel smooth pursuit tasks and quantitative measurement techniques can help unravel the different smooth pursuit components and complex neural systems involved…

  5. MortalitySmooth: An R Package for Smoothing Poisson Counts with P-Splines

    Directory of Open Access Journals (Sweden)

    Carlo G. Camarda

    2012-07-01

    Full Text Available The MortalitySmooth package provides a framework for smoothing count data in both one- and two-dimensional settings. Although general in its purposes, the package is specifically tailored to demographers, actuaries, epidemiologists, and geneticists who may be interested in a practical tool for smoothing mortality data over ages and/or years. The total number of deaths over a specified age- and year-interval is assumed to be Poisson-distributed, and P-splines and generalized linear array models are employed as a suitable regression methodology. Extra-Poisson variation can also be accommodated. Structured in an S3 object-orientation system, MortalitySmooth has two main functions which fit the data and define two classes of objects: Mort1Dsmooth and Mort2Dsmooth. The methods for these classes (print, summary, plot, predict, and residuals) are also included. These features make it easy for users to extract and manipulate the outputs. In addition, a collection of mortality data is provided. This paper provides an overview of the design, aims, and principles of MortalitySmooth, as well as strategies for applying it and extending its use.
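    MortalitySmooth itself is an R package built on B-splines and array models. As a hedged Python sketch of the same underlying idea, the snippet below does Whittaker-style penalized Poisson smoothing (an identity basis with a second-order difference penalty, fit by penalized iteratively reweighted least squares) on an invented age schedule of death counts; it is a simplification, not the package's algorithm.

```python
import numpy as np

def poisson_smooth(y, lam=10.0, n_iter=30):
    """Penalized-likelihood smoothing of Poisson counts.

    Minimises the Poisson deviance plus lam * ||D2 @ log(mu)||^2,
    where D2 is the second-difference matrix, via penalized IRLS.
    Returns the smoothed expected counts.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)    # second-difference matrix
    P = lam * D.T @ D
    eta = np.log(y + 1.0)                  # initial log-rates
    for _ in range(n_iter):
        mu = np.exp(eta)
        W = np.diag(mu)                    # IRLS weights for Poisson
        z = eta + (y - mu) / mu            # working response
        eta = np.linalg.solve(W + P, W @ z)
    return np.exp(eta)

rng = np.random.default_rng(1)
ages = np.arange(60)
true_rate = 5 + 0.02 * (ages - 30) ** 2    # smooth U-shaped rate (invented)
counts = rng.poisson(true_rate)
smoothed = poisson_smooth(counts, lam=50.0)
```

    The penalty weight `lam` plays the role of the package's smoothing parameter: larger values trade fidelity to the raw counts for a smoother rate surface.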

  6. Adaptation of active tone in the mouse descending thoracic aorta under acute changes in loading.

    Science.gov (United States)

    Murtada, S-I; Lewin, S; Arner, A; Humphrey, J D

    2016-06-01

    Arteries can adapt to sustained changes in blood pressure and flow, and it is thought that these adaptive processes often begin with an altered smooth muscle cell activity that precedes any detectable changes in the passive wall components. Yet, due to the intrinsic coupling between the active and passive properties of the arterial wall, it has been difficult to delineate the adaptive contributions of active smooth muscle. To address this need, we used a novel experimental-computational approach to quantify adaptive functions of active smooth muscle in arterial rings excised from the proximal descending thoracic aorta of mice and subjected to short-term sustained circumferential stretches while stimulated with various agonists. A new mathematical model of the adaptive processes was derived and fit to data to describe and predict the effects of active tone adaptation. It was found that active tone was maintained when the artery was adapted close to the optimal stretch for maximal active force production, but it was reduced when adapted below the optimal stretch; there was no significant change in passive behavior in either case. Such active adaptations occurred only upon smooth muscle stimulation with phenylephrine, however, not stimulation with KCl or angiotensin II. Numerical simulations using the proposed model suggested further that active tone adaptation in vascular smooth muscle could play a stabilizing role for wall stress in large elastic arteries.

  7. Delineation of seismic source zones based on seismicity parameters and probabilistic evaluation of seismic hazard using logic tree approach

    Indian Academy of Sciences (India)

    K S Vipin; T G Sitharam

    2013-06-01

    The delineation of seismic source zones plays an important role in the evaluation of seismic hazard. In most studies the seismic source delineation is done based on geological features. In the present study, an attempt has been made to delineate seismic source zones in the study area (south India) based on the seismicity parameters. Seismicity parameters and the maximum probable earthquake for these source zones were evaluated and used in the hazard evaluation. The probabilistic evaluation of seismic hazard for south India was carried out using a logic tree approach. Two different types of seismic sources, linear and areal, were considered in the present study to model the seismic sources in the region more precisely. In order to properly account for the attenuation characteristics of the region, three different attenuation relations were used with different weighting factors. Seismic hazard evaluation was done for probabilities of exceedance (PE) of 10% and 2% in 50 years. The spatial variation of rock-level peak horizontal acceleration (PHA) and spectral acceleration (Sa) values corresponding to return periods of 475 and 2500 years for the entire study area are presented in this work. The peak ground acceleration (PGA) values at ground surface level were estimated based on different NEHRP site classes by considering local site effects.
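    The logic-tree weighting step can be illustrated with a small toy. Every number below is hypothetical (the branch hazard curves and weights are invented, not the study's attenuation relations): each branch contributes its hazard curve with its weight, and the Poisson assumption converts annual rates into a probability of exceedance over an exposure time.

```python
import numpy as np

pga = np.array([0.05, 0.1, 0.2, 0.4])   # PGA levels (g)
curves = np.array([                      # annual exceedance rates per level
    [2e-2, 8e-3, 2e-3, 4e-4],            # hypothetical attenuation relation A
    [3e-2, 1e-2, 3e-3, 6e-4],            # hypothetical attenuation relation B
    [1e-2, 5e-3, 1e-3, 2e-4],            # hypothetical attenuation relation C
])
weights = np.array([0.5, 0.3, 0.2])      # logic-tree branch weights (sum to 1)

mean_curve = weights @ curves            # weighted mean hazard curve

# probability of exceedance over an exposure time T, assuming
# Poissonian earthquake occurrence: P = 1 - exp(-rate * T)
T = 50.0
pe_50yr = 1.0 - np.exp(-mean_curve * T)
```

    Reading the PGA level at which `pe_50yr` crosses 10% or 2% gives the 475- and 2500-year design motions the abstract refers to.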

  8. Prediction of rock falls properties thanks to emitted seismic signal.

    Science.gov (United States)

    Bachelet, V.; Mangeney, A.; de Rosny, J.; Toussaint, R.; Farin, M.

    2015-12-01

    The seismic signal generated by rockfalls, landslides or avalanches provides a unique tool to detect, characterize and monitor gravitational flow activity, with strong implications for natural hazards. Indeed, as natural flows travel down the slope, they apply stresses to the Earth's surface, generating seismic waves in a wide frequency band associated with the different physical processes involved. Our aim is to deduce the granular flow properties from the generated signal. It is addressed here with both laboratory experiments and simulations. Regarding the experimental part, a set-up combining optical and acoustic methods is employed to measure the seismic signal generated by (i) the impact of beads of different properties, and (ii) the collapse of granular columns, over horizontal and sloping substrates. The substrates are made of plates and blocks of different sizes and characteristics. For point (i), Farin et al. [2015] showed that a link exists between the properties of a bead impacting a smooth surface (mass and velocity) and the emitted signal (radiated elastic energy and mean frequency). This demonstrates that it is possible to deduce the impactor properties from the emitted signal. We show here that the situation is slightly different for rough and erodible surfaces, because more dissipative processes are engaged (friction, grain reorganization, etc.). Point (ii) differs from a succession of single impacts. We compare the experiments with a Discrete Element Method simulation developed by Patrick Richard (IFSTTAR), which computes the trajectories of each particle in a granular column collapse using collision forces from a simplified Hertz contact model (spring + dashpot) and the Verlet algorithm. We used it to compute the synthetic signal generated by the impacts. While the dynamics of the beads is well reproduced, the waves are different, confirming that "more is different".

  9. Climate adaptation

    Science.gov (United States)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  10. Cursory seismic drift assessment for buildings in moderate seismicity regions

    Institute of Scientific and Technical Information of China (English)

    Zhu Yong; R.K.L. Su; Zhou Fulin

    2007-01-01

    This paper outlines a methodology to assess the seismic drift of reinforced concrete buildings with limited structural and geotechnical information. Based on the latest and the most advanced research on predicting potential near-field and far field earthquakes affecting Hong Kong, the engineering response spectra for both rock and soil sites are derived. A new step-by-step procedure for displacement-based seismic hazard assessment of building structures is proposed to determine the maximum inter-storey drift demand for reinforced concrete buildings. The primary information required for this assessment is only the depth of the soft soil above bedrock and the height of the building. This procedure is further extended to assess the maximum chord rotation angle demand for the coupling beam of coupled shear wall or frame wall structures, which may be very critical when subjected to earthquake forces. An example is provided to illustrate calibration of the assessment procedure by using actual engineering structural models.

  11. Seismic damage and destructive potential of seismic events

    Directory of Open Access Journals (Sweden)

    S. M. Petrazzuoli

    1995-06-01

    Full Text Available This paper has been written within a research framework investigating the destructive potential of seismic events. The elastic response spectra seem insufficient to explain the behaviour of structures subject to large earthquakes, in which they experience extensive plastic deformations. Recent works emphasise the many difficulties in defining a single parameter linked to the destructive potential of an earthquake. In this work a study on the effect of frequency content on structural damage has been carried out. The behaviour of two different elastoplastic oscillators has been analysed, considering several artificial earthquakes. The results obtained suggest a method for evaluating the destructive potential of an earthquake through the response spectra and the frequency content of the signal, and through the mechanical characteristics of the structures within the analysed area.

  12. Adaptive practical output tracking of a class of nonlinear systems

    Institute of Scientific and Technical Information of China (English)

    Qiangde WANG; Yuanwei JING; Siying ZHANG

    2004-01-01

    Focus is laid on the adaptive practical output-tracking problem for a class of nonlinear systems with high-order lower-triangular structure and uncontrollable unstable linearization. Using the modified adaptive addition-of-a-power-integrator technique as a basic tool, a new smooth adaptive state feedback controller is designed. This controller ensures that all signals of the closed-loop system are globally bounded and that the output tracking error is arbitrarily small.

  13. Evaluation of Jumping and Creeping Regularization Approaches Applied to 3D Seismic Tomography

    Science.gov (United States)

    Liu, M.; Ramachandran, K.

    2011-12-01

    Regularization deals with the ill-posedness of the inverse problem. The under-determined part of the problem is controlled by providing a priori knowledge on the physical solution in the form of additional constraints that the solution must satisfy. The final model is constrained to fit the data and also to satisfy some additional property. In seismic tomography, this property is selected such that the final model is as smooth as possible. This concept is physically meaningful, as smooth models are sought that include only structure required to fit the data according to its uncertainty. The motivation for seeking a smooth model is that features present in the model should be essential to match the observations. Such a class of models is referred to as minimum-structure models. The amount of structure in the estimated model parameters is measured in terms of roughness. In seismic tomography, second spatial derivatives are generally employed to quantify the model roughness. In this kind of regularized inversion, an objective function is minimized which includes norms that measure model roughness and data misfit. A tradeoff parameter is selected that provides the model with the least structure for a given level of data misfit. The regularized inverse problem that solves for the model perturbation and constrains the flatness or smoothness of the perturbation during the inversion is known as the creeping approach. The disadvantage of the creeping approach is that the final model has no special properties; it is just a sum of smooth deviations added to the starting model. The regularized inverse problem that solves for the model itself and constrains its properties during the inversion is known as the jumping approach. In the jumping approach, the final model can be constructed to have properties such as flatness or smoothness, since the regularization implements smoothing constraints on the model and not on the perturbation.
The jumping and creeping approaches
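    The contrast between the two approaches can be shown on a toy linear inversion (the operator, true model, noise level, and tradeoff parameter below are all invented for illustration): creeping penalizes the roughness of the perturbation, so the final model inherits the starting model's roughness, whereas jumping penalizes the roughness of the model itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
G = rng.standard_normal((30, n))             # toy forward operator
m_true = np.sin(np.linspace(0, np.pi, n))    # smooth true model
d = G @ m_true + 0.01 * rng.standard_normal(30)
m0 = m_true + 0.3 * rng.standard_normal(n)   # rough starting model

D = np.diff(np.eye(n), n=2, axis=0)          # second-derivative roughness operator
lam = 5.0
A = np.vstack([G, lam * D])                  # stacked (misfit + penalty) system

def solve_ls(A, b):
    return np.linalg.lstsq(A, b, rcond=None)[0]

# creeping: smooth the *perturbation*; m_creep keeps m0's roughness
dm = solve_ls(A, np.concatenate([d - G @ m0, np.zeros(D.shape[0])]))
m_creep = m0 + dm

# jumping: smooth the *model* itself; solve for m directly
m_jump = solve_ls(A, np.concatenate([d, np.zeros(D.shape[0])]))

rough = lambda m: np.abs(D @ m).sum()        # roughness measure
```

    Both solutions fit the data, but only the jumping solution is guaranteed to be smooth regardless of the starting model, which is the point the abstract is making.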

  14. Uniform flow in smooth circular channels. Part I: adaptation and validation of the Kazemipour method

    Directory of Open Access Journals (Sweden)

    Maurício C. Goldfarb

    2004-12-01

    Full Text Available Considering the von Karman-Prandtl equation for pressurized tubes, Kazemipour & Apelt (1980) developed a methodology for flow calculation in smooth circular channels, known as the Kazemipour method. In spite of good results, the Kazemipour method needs graphic tools in its application, which makes solution through computational methods, and comparison with other existing methodologies, difficult. In this work, the results of the analytic investigation that validates the Kazemipour method are shown, as well as the adjustment, according to the procedure proposed by Silva & Figueiredo (1993), performed in such a way as to make the procedure fully expressible as equations, without the need for graphic tools. The result obtained is satisfactory and its use is demonstrated in an example of practical application.

  15. Application of variational mode decomposition to seismic random noise reduction

    Science.gov (United States)

    Liu, Wei; Cao, Siyuan; Wang, Zhiming

    2017-08-01

    We have proposed a new denoising method for simultaneous noise reduction and preservation of seismic signals based on variational mode decomposition (VMD). VMD is a recently developed adaptive signal decomposition method and an advance in non-stationary signal analysis. It solves the long-standing mode-mixing and non-optimal reconstruction performance problems of empirical mode decomposition. Using VMD, a multi-component signal can be non-recursively decomposed into a series of quasi-orthogonal intrinsic mode functions (IMFs), each of which has a relatively local frequency range. Meanwhile, the signal energy concentrates in a smaller number of IMFs after decomposition, so the denoised result can be obtained by reconstructing these signal-dominant IMFs. Synthetic examples are given to demonstrate the effectiveness of the proposed approach, and comparison is made with complete ensemble empirical mode decomposition, which demonstrates that the VMD algorithm has lower computational cost and better random noise elimination performance. Application to field seismic data further illustrates the superior performance of our method in both random noise attenuation and the recovery of seismic events.

  16. Fully probabilistic seismic source inversion – Part 1: Efficient parameterisation

    Directory of Open Access Journals (Sweden)

    S. C. Stähler

    2014-11-01

    Full Text Available Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters themselves but also estimates of their uncertainties are of great practical importance. Probabilistic source inversion (Bayesian inference) is well suited to this challenge, provided that the parameter space can be chosen small enough to make Bayesian sampling computationally feasible. We propose a framework for PRobabilistic Inference of Seismic source Mechanisms (PRISM) that parameterises and samples earthquake depth, moment tensor, and source time function efficiently by using information from previous non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible.
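    The empirical-orthogonal-function parameterisation at the heart of this framework can be sketched with synthetic data (the toy Gaussian-pulse "catalogue" below is invented; the paper derives its EOFs from >1000 real STFs): an SVD of the centred catalogue yields the EOFs, and a new source time function is then represented by a handful of weights instead of its full sample vector.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)

def synthetic_stf(width, centre):
    """Toy one-pulse source time function (a stand-in for catalogue STFs)."""
    s = np.exp(-((t - centre) / width) ** 2)
    return s / s.sum()                 # normalise to unit moment

# a synthetic "catalogue" of STFs with varying duration and centroid time
catalogue = np.array([
    synthetic_stf(0.02 + 0.1 * rng.random(), 0.3 + 0.4 * rng.random())
    for _ in range(300)
])

# empirical orthogonal functions via SVD of the centred catalogue
mean_stf = catalogue.mean(axis=0)
_, _, Vt = np.linalg.svd(catalogue - mean_stf, full_matrices=False)
eofs = Vt[:5]                          # keep a small number of EOFs

# encode a new STF as 5 weights instead of 200 samples
target = synthetic_stf(0.05, 0.5)
w = eofs @ (target - mean_stf)
recon = mean_stf + w @ eofs
```

    Shrinking the STF to a few weights is what keeps the Bayesian sampling over depth, moment tensor, and source time function computationally feasible.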

  17. Reconstruction of a 2D seismic wavefield by seismic gradiometry

    Science.gov (United States)

    Maeda, Takuto; Nishida, Kiwamu; Takagi, Ryota; Obara, Kazushige

    2016-12-01

    We reconstructed a 2D seismic wavefield and obtained its propagation properties by using the seismic gradiometry method together with dense observations of the Hi-net seismograph network in Japan. The seismic gradiometry method estimates the wave amplitude and its spatial derivative coefficients at any location from a discrete station record by using a Taylor series approximation. From the spatial derivatives in horizontal directions, the properties of a propagating wave packet, including the arrival direction, slowness, geometrical spreading, and radiation pattern, can be obtained. In addition, by using spatial derivatives together with free-surface boundary conditions, the 2D vector elastic wavefield can be decomposed into divergence and rotation components. First, as a feasibility test, we performed an analysis with a synthetic seismogram dataset computed by a numerical simulation for a realistic 3D medium and the actual Hi-net station layout. We confirmed that the wave amplitude and its spatial derivatives were reproduced very well for period bands longer than 25 s. Application to a real large earthquake showed that the amplitude and phase of the wavefield were well reconstructed, along with the slowness vector. The slowness of the reconstructed wavefield showed a clear contrast between body and surface waves and regional non-great-circle-path wave propagation, possibly owing to scattering. Slowness vectors together with divergence and rotation decomposition are expected to be useful for determining constituents of observed wavefields in inhomogeneous media.
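    The core gradiometry step, estimating the amplitude and its horizontal gradients from discrete stations via a Taylor expansion, can be sketched as a least-squares fit. The station layout and the test field below are invented; a locally linear field makes the fit exact, whereas real wavefields are only approximately linear over the station spacing.

```python
import numpy as np

def gradiometry_fit(xy, amp, x0, y0):
    """First-order Taylor fit u(x, y) ~ u0 + ux*(x-x0) + uy*(y-y0),
    solved by least squares over surrounding stations.

    xy: station coordinates, shape (n, 2); amp: amplitudes at the stations.
    Returns [u0, du/dx, du/dy] at the target point (x0, y0).
    """
    dx = xy[:, 0] - x0
    dy = xy[:, 1] - y0
    A = np.column_stack([np.ones(len(xy)), dx, dy])
    coeffs, *_ = np.linalg.lstsq(A, amp, rcond=None)
    return coeffs

rng = np.random.default_rng(4)
xy = rng.uniform(-1.0, 1.0, size=(20, 2))      # scattered "network"
u = 2.0 + 0.5 * xy[:, 0] - 0.3 * xy[:, 1]      # locally linear test wavefield
u0, ux, uy = gradiometry_fit(xy, u, 0.0, 0.0)
```

    In the actual method, these spatial derivatives are combined with temporal derivatives of the records to estimate the slowness vector and arrival direction of the wave packet.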

  18. A Variational Image Smoothing Model with Global Convergence

    Institute of Scientific and Technical Information of China (English)

    王贵; 管志成

    2004-01-01

    A new smoothing method is proposed. The smoothing process adapts to image characteristics and is good at preserving local image structures. More importantly, under conditions weaker than those of the original Kacanov method, an approximating sequence of solutions to the variational problems can be constructed and global convergence can be proved. The conditions in the papers of Schnorr (1994) and Heers, et al (2001) are also discussed. Numerical solutions of the model are given.

  19. Mutually beneficial relationship in optimization between search-space smoothing and stochastic search

    Science.gov (United States)

    Hasegawa, Manabu; Hiramatsu, Kotaro

    2013-10-01

    The effectiveness of the Metropolis algorithm (MA) (constant-temperature simulated annealing) in optimization by the method of search-space smoothing (SSS) (potential smoothing) is studied on two types of random traveling salesman problems. The optimization mechanism of this hybrid approach (MASSS) is investigated by analyzing the exploration dynamics observed in the rugged landscape of the cost function (energy surface). The results show that the MA can be successfully utilized as a local search algorithm in the SSS approach. It is also clarified that the optimization characteristics of these two constituent methods are improved in a mutually beneficial manner in the MASSS run. Specifically, the relaxation dynamics generated by employing the MA work effectively even in a smoothed landscape and more advantage is taken of the guiding function proposed in the idea of SSS; this mechanism operates in an adaptive manner in the de-smoothing process and therefore the MASSS method maintains its optimization function over a wider temperature range than the MA.
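    The MASSS idea can be illustrated with a 1-D toy (the paper's setting is the traveling salesman problem; the landscape, kernel widths, temperature, and de-smoothing schedule below are all invented): a constant-temperature Metropolis walk is run on progressively less-smoothed versions of a rugged cost function, handing its current solution from one stage to the next.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-4.0, 4.0, 801)
rugged = x ** 2 / 8 + 0.5 * np.sin(8 * x)    # rugged cost; global min near x = -0.2

def smooth_cost(cost, width):
    """Search-space smoothing: Gaussian smoothing of the cost landscape."""
    if width == 0:
        return cost
    k = np.exp(-0.5 * (np.arange(-150, 151) / width) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(cost, 150, mode="edge"), k, mode="valid")

def metropolis(cost, start, temp, n_steps):
    """Constant-temperature Metropolis walk over grid indices."""
    i = best = start
    for _ in range(n_steps):
        j = int(np.clip(i + rng.integers(-15, 16), 0, len(cost) - 1))
        # accept downhill moves always, uphill moves with Boltzmann probability
        if cost[j] < cost[i] or rng.random() < np.exp((cost[i] - cost[j]) / temp):
            i = j
            if cost[i] < cost[best]:
                best = i
    return i, best

# de-smoothing schedule: search heavily smoothed landscapes first,
# then hand the current solution to less-smoothed versions
i = 700                                      # poor starting point, far from the minimum
for width in (60, 30, 10, 0):
    i, best = metropolis(smooth_cost(rugged, width), i, temp=0.1, n_steps=3000)
```

    The smoothed early stages guide the walk into the global basin; the final, unsmoothed stage refines the solution, mirroring the adaptive de-smoothing mechanism the abstract describes.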

  20. A new smoothing scheme for mathematical programs with complementarity constraints

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider a mathematical program with complementarity constraints (MPCC). We present a new smoothing scheme for this problem, which largely leaves the primal structure of the complementarity part unchanged. For the new smoothing problem, we show that the linear independence constraint qualification (LICQ) holds under some conditions. We also analyze the convergence behavior of the smoothing problem, and give sufficient conditions under which an accumulation point of stationary points of the smoothing problems is C-, M-, or B-stationary, respectively. Based on the smoothing problem, we establish an algorithm to solve the primal MPCC problem. Some numerical experiments are given in the paper.
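    The general idea of smoothing a complementarity constraint can be illustrated with the standard perturbed Fischer-Burmeister function (a textbook device, not necessarily the scheme proposed in this paper): for mu > 0 it is smooth everywhere, and as mu goes to 0 its zero set recovers the exact condition a >= 0, b >= 0, a*b = 0.

```python
import numpy as np

def fb_smooth(a, b, mu):
    """Perturbed Fischer-Burmeister function: smooth in (a, b) for mu > 0,
    and fb_smooth(a, b, 0) == 0  iff  a >= 0, b >= 0 and a*b == 0."""
    return a + b - np.sqrt(a ** 2 + b ** 2 + 2 * mu ** 2)

# as mu -> 0, the smoothed residual approaches the exact complementarity
a, b = 0.0, 3.0                          # a complementary pair (a*b == 0)
residuals = [fb_smooth(a, b, mu) for mu in (1.0, 0.1, 0.01, 0.0)]
```

    A smoothing algorithm solves a sequence of such perturbed problems while driving mu to zero, which is how stationary points of the smoothed problems can accumulate at stationary points of the original MPCC.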

  1. Airway smooth muscle growth in asthma: proliferation, hypertrophy, and migration.

    Science.gov (United States)

    Bentley, J Kelley; Hershenson, Marc B

    2008-01-01

    Increased airway smooth muscle mass is present in fatal and non-fatal asthma. However, little information is available regarding the cellular mechanism (i.e., hyperplasia vs. hypertrophy). Even less information exists regarding the functional consequences of airway smooth muscle remodeling. It would appear that increased airway smooth muscle mass would tend to increase airway narrowing and airflow obstruction. However, the precise effects of increased airway smooth muscle mass on airway narrowing are not known. This review will consider the evidence for airway smooth muscle cell proliferation and hypertrophy in asthma, potential functional effects, and biochemical mechanisms.

  2. Seismic Isolation Working Meeting Gap Analysis Report

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Justin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Sabharwall, Piyush [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    The ultimate goal in nuclear facility and nuclear power plant operations is operating safety during normal operations and maintaining core cooling capabilities during off-normal events, including external hazards. Understanding the impact external hazards, such as flooding and earthquakes, have on nuclear facilities and NPPs is critical to deciding how to manage these hazards to acceptable levels of risk. From a seismic risk perspective, the goal is to manage seismic risk. Seismic risk is determined by convolving the seismic hazard with seismic fragilities (capacity of systems, structures, and components (SSCs)). There are large uncertainties associated with the evolving nature of the seismic hazard curves. Additionally, there are requirements within DOE, and potential requirements within NRC, to reconsider updated seismic hazard curves every 10 years. Therefore, opportunity exists for engineered solutions to manage this seismic uncertainty. One engineered solution is seismic isolation. Current seismic isolation (SI) designs (used in commercial industry) reduce horizontal earthquake loads and protect critical infrastructure from the potentially destructive effects of large earthquakes. The benefit of SI application in the nuclear industry is being recognized, and SI systems have been proposed in the American Society of Civil Engineers (ASCE) 4 standard, to be released in 2014, for Light Water Reactor (LWR) facilities using commercially available technology. However, there is a lack of industry application to the nuclear industry and uncertainty with implementing the procedures outlined in ASCE-4. Opportunity exists to determine barriers associated with implementation of the current ASCE-4 standard language.

  3. Beilinson's Hodge conjecture for smooth varieties

    CERN Document Server

    de Jeu, Rob

    2011-01-01

    Consider the cycle class map cl_{r,m} : CH^r(U,m;\Q) \to \Gamma H^{2r-m}(U,\Q(r)), where CH^r(U,m;\Q) is Bloch's higher Chow group (tensored with \Q) of a smooth complex quasi-projective variety U, and H^{2r-m}(U,\Q(r)) is singular cohomology. We study the image of cl_{r,m} in terms of kernels of Abel-Jacobi maps. When r=m, we deduce from the Bloch-Kato theorem that the cokernel of cl_{r,m} at the generic point of U is the same for integral or rational coefficients.

  4. Method for producing smooth inner surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, Charles A.

    2016-05-17

    The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root-mean-square roughness over an approximately 1 mm² scan area. The invention also provides a method for preparing superconducting cavities comprising causing polishing media bound to a carrier to tumble within the cavities, and a method comprising causing polishing media in a slurry to tumble within the cavities.

  5. Smooth Nanowire/Polymer Composite Transparent Electrodes

    KAUST Repository

    Gaynor, Whitney

    2011-04-29

    Smooth composite transparent electrodes are fabricated via lamination of silver nanowires into the polymer poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The surface roughness is dramatically reduced compared to bare nanowires. High-efficiency P3HT:PCBM organic photovoltaic cells can be fabricated using these composites, reproducing the performance of cells on indium tin oxide (ITO) on glass and improving on the performance of cells on ITO on plastic. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Workshop on advances in smooth particle hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Wingate, C.A.; Miller, W.A.

    1993-12-31

    These proceedings contain viewgraphs presented at the 1993 workshop held at Los Alamos National Laboratory. Topics discussed include: negative stress, reactive flow calculations, interface problems, boundaries and interfaces, energy conservation in viscous flows, linked penetration calculations, stability and consistency of the SPH method, instabilities, wall heating and conservative smoothing, tensors, tidal disruption of stars, breaking the 10,000,000-particle limit, modelling relativistic collapse, SPH without H, relativistic KSPH avoidance of velocity-based kernels, tidal compression and disruption of stars near a supermassive rotating black hole, and finally relativistic SPH viscosity and energy.

  7. Compressive Sensing via Nonlocal Smoothed Rank Function.

    Science.gov (United States)

    Fan, Ya-Ru; Huang, Ting-Zhu; Liu, Jun; Zhao, Xi-Le

    2016-01-01

    Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction.
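
    As a concrete illustration of the idea of a smoothed rank function, the sketch below uses a Gaussian-type surrogate on singular values, which tends to the exact rank as the smoothing parameter shrinks. This is a generic surrogate chosen for illustration; the paper's exact smoothed rank function and its nonlocal CS model are not reproduced here.

```python
import numpy as np

def smoothed_rank(X, delta):
    """Gaussian-type smoothed surrogate for rank(X): each singular value
    s_i contributes 1 - exp(-s_i^2 / (2*delta^2)), which tends to the
    exact rank as delta -> 0."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(1.0 - np.exp(-s**2 / (2.0 * delta**2))))

# A rank-2 matrix: the surrogate approaches 2 as delta shrinks.
A = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0]) + np.outer([0.0, 1.0, 1.0], [2.0, 1.0, 0.0])
print(smoothed_rank(A, 1e-3))  # -> 2.0
```

    Unlike the convex nuclear norm, such a surrogate is non-convex, which is why an alternating minimization over tractable subproblems is the natural solution strategy.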

  8. Impact modeling with Smooth Particle Hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Stellingwerf, R.F.; Wingate, C.A.

    1993-07-01

    Smooth Particle Hydrodynamics (SPH) can be used to model hypervelocity impact phenomena via the addition of a strength of materials treatment. SPH is the only technique that can model such problems efficiently due to the combination of 3-dimensional geometry, large translations of material, large deformations, and large void fractions for most problems of interest. This makes SPH an ideal candidate for modeling of asteroid impact, spacecraft shield modeling, and planetary accretion. In this paper we describe the derivation of the strength equations in SPH, show several basic code tests, and present several impact test cases with experimental comparisons.
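
    The core of any SPH code, including impact codes like the one described, is a kernel-weighted summation over particles. Below is a minimal 1-D density summation with the standard cubic spline kernel; the strength-of-materials treatment the paper adds is well beyond this sketch, and the particle setup is hypothetical.

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 1-D cubic spline SPH kernel (normalization 2/(3h))."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    """Density at each particle via the SPH summation
    rho_i = sum_j m_j * W(x_i - x_j, h)."""
    rho = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(len(x)):
            rho[i] += m[j] * cubic_spline_w(x[i] - x[j], h)
    return rho

# Uniformly spaced unit-mass particles: interior density ~ mass / spacing.
x = np.arange(0.0, 10.0, 0.5)
rho = sph_density(x, np.ones_like(x), h=0.5 * 1.3)
```

    For uniformly spaced unit-mass particles the interior density recovers mass per spacing, a standard sanity check on kernel normalization.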

  9. Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function

    Directory of Open Access Journals (Sweden)

    Najme Maleki

    2014-07-01

    Full Text Available This paper presents two data hiding methods, one adaptive and one non-adaptive, for grayscale images based on a modulus function. The adaptive scheme is based on human visual sensitivity: pixels in edge areas can tolerate much larger changes than pixels in smooth areas without producing visible distortion. In the adaptive scheme, the average differencing value of the four neighborhood pixels in a block, compared against a threshold secret key, determines whether the current block lies in an edge or a smooth area. Pixels in edge areas are embedded with Q bits of secret data, with a larger value of Q than for pixels in smooth areas. We also present a non-adaptive data hiding algorithm which, via an error reduction procedure, produces high visual quality in the stego-image. The proposed schemes offer several advantages: (1) the embedding capacity and the visual quality of the stego-image are scalable, so the embedding rate and image quality can be tuned for practical applications; (2) high embedding capacity is achieved with minimal visual distortion; (3) the methods require little memory space in the embedding and extraction phases; (4) secret keys protect the embedded secret data, so the level of security is high; and (5) the problem of overflow or underflow does not occur. Experimental results indicate that the proposed adaptive scheme is significantly superior to the currently existing scheme in terms of stego-image visual quality, embedding capacity, and level of security, and that the non-adaptive method is better than other non-adaptive methods in terms of stego-image quality. Results also show that the adaptive algorithm can resist the RS steganalysis attack.
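
    The adaptive edge/smooth decision described above can be sketched as follows. The block size, threshold handling, and bit counts here are illustrative stand-ins; the paper's actual modulus-function embedding and secret-key mechanics are not reproduced.

```python
def classify_block(block4, threshold):
    """Decide 'edge' vs 'smooth' for a 4-pixel block by the average
    absolute difference of each pixel from the block mean.
    (Illustrative; the paper's exact rule and threshold key differ.)"""
    mean = sum(block4) / 4.0
    avg_diff = sum(abs(p - mean) for p in block4) / 4.0
    return "edge" if avg_diff > threshold else "smooth"

def bits_per_pixel(block4, threshold, q_edge=4, q_smooth=2):
    """Edge blocks carry more secret bits per pixel than smooth blocks."""
    return q_edge if classify_block(block4, threshold) == "edge" else q_smooth

print(bits_per_pixel([10, 200, 15, 180], threshold=20))  # strong variation -> edge capacity
print(bits_per_pixel([100, 101, 99, 100], threshold=20))  # flat block -> smooth capacity
```

    Embedding more bits where the eye tolerates change is what makes the capacity/quality trade-off scalable.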

  10. Discussion about the relationship between seismic belt and seismic statistical zone

    Institute of Scientific and Technical Information of China (English)

    潘华; 金严; 胡聿贤

    2003-01-01

    This paper first summarizes the status of the delimitation of seismic zones and belts in China, covering its history, purpose, usage, delimiting principles, presentation forms, and main features. It then emphasizes the viewpoints that geographical division by seismicity is the most important purpose of delimiting seismic belts, and that the concept of a seismic belt is quite different from that of the seismic statistical zone used in the CPSHA method. The concept of the seismic statistical zone and the history of its evolution are introduced as well. Large differences exist between these two concepts in their statistical properties, actual meaning, gradation, required scale, requirement of mutual non-overlap, and the aim and usage of delimitation. In current engineering practice, however, the two concepts are confused. On the one hand, this prevents a proper theory for delimiting seismic statistical zones in PSHA from being established; on the other hand, research on delimiting seismic belts for the purposes of seismicity zoning and of studying the structural environment and earthquake-generating mechanisms also stalls. The paper concludes that the seismic statistical zone is based on the results of seismic belt delimitation, that it arises in and can be used only in China's particular PSHA method, which considers spatially and temporally inhomogeneous seismic activity, and that its concept should be clearly differentiated from that of the seismic belt.

  11. Seismic techniques in coal mining

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharyya, A.K.; Belleza, G.V.

    1983-01-01

    The aim of this study is to investigate the peripheral fracture zones in coal pillars left for support in underground mines. The fracture zones are caused by the redistribution of stresses in strata resulting from the process of excavation and blasting if it is used. The extent and degree of these fracture zones, in turn, have a direct influence on the ability of pillars to provide stable support to the overlying strata. Seismic methods such as refraction, uphole, and collinear techniques outlined in previous reports are being used to investigate the extent and degree of the peripheral fracture zones. Some of the work that has been carried out and is described in this report, relates to the study of peripheral fracture zones in coal pillars using seismic techniques.

  12. Statistical Physics Approaches to Seismicity

    CERN Document Server

    Sornette, D

    2008-01-01

    This entry in the Encyclopedia of Complexity and Systems Science (Springer) presents a summary of some of the concepts and calculational tools that have been developed in attempts to apply statistical physics approaches to seismology. We summarize the leading theoretical physical models of the space-time organization of earthquakes. We present a general discussion and several examples of the new metrics proposed by statistical physicists, underlining their strengths and weaknesses. The entry concludes by briefly outlining future directions. The presentation is organized as follows. I Glossary II Definition and Importance of the Subject III Introduction IV Concepts and Calculational Tools IV.1 Renormalization, Scaling and the Role of Small Earthquakes in Models of Triggered Seismicity IV.2 Universality IV.3 Intermittent Periodicity and Chaos IV.4 Turbulence IV.5 Self-Organized Criticality V Competing mechanisms and models V.1 Roots of complexity in seismicity: dynamics or heterogeneity? V.2 Critical earthquakes ...

  13. Tube-wave seismic imaging

    Science.gov (United States)

    Korneev, Valeri A [LaFayette, CA

    2009-05-05

    The detailed analysis of cross-well seismic data for a gas reservoir in Texas revealed two newly detected seismic wave effects, recorded approximately 2000 feet above the reservoir. A tube-wave (150) is initiated in a source well (110) by a source (111), travels in the source well (110), is coupled to a geological feature (140), propagates (151) through the geological feature (140), is coupled back to a tube-wave (152) at a receiver well (120), and is received by receiver(s) (121) in either the same well (110) or a different receiving well (120). The tube-wave has been shown to be extremely sensitive to changes in reservoir characteristics. Tube-waves appear to couple most effectively to reservoirs where the well casing is perforated, allowing direct fluid contact between the interior of the well case and the reservoir.

  14. DISPLACEMENT BASED SEISMIC DESIGN CRITERIA

    Energy Technology Data Exchange (ETDEWEB)

    HOFMAYER,C.H.

    1999-03-29

    The USNRC has initiated a project to determine if any of the likely revisions to traditional earthquake engineering practice are relevant to seismic design of the specialized structures, systems and components of nuclear power plants and of such significance to suggest that a change in design practice might be warranted. As part of the initial phase of this study, a literature survey was conducted on the recent changes in seismic design codes/standards, on-going activities of code-writing organizations/communities, and published documents on displacement-based design methods. This paper provides a summary of recent changes in building codes and on-going activities for future codes. It also discusses some technical issues for further consideration.

  15. Displacement Based Seismic Design Criteria

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hofmayer, C.; Park, Y.J.

    1999-03-29

    The USNRC has initiated a project to determine if any of the likely revisions to traditional earthquake engineering practice are relevant to seismic design of the specialized structures, systems and components of nuclear power plants and of such significance to suggest that a change in design practice might be warranted. As part of the initial phase of this study, a literature survey was conducted on the recent changes in seismic design codes/standards, on-going activities of code-writing organizations/communities, and published documents on displacement-based design methods. This paper provides a summary of recent changes in building codes and on-going activities for future codes. It also discusses some technical issues for further consideration.

  16. An economical educational seismic system

    Science.gov (United States)

    Lehman, J. D.

    1980-01-01

    There is considerable interest in seismology from the nonprofessional or amateur standpoint. The operation of a seismic system can be satisfying and educational, especially when you have built and operated the system yourself. A long-period indoor-type sensor and recording system that works extremely well has been developed in the James Madison University Physics Department. The system can be built quite economically, and any educational institution that cannot commit to a professional installation need not be without first-hand seismic information. The system design approach has been adopted by college students working on a project or senior thesis, by several elementary and secondary science teachers, and by the more ambitious tinkerer or hobbyist at home.

  17. A comparison of long-term changes in seismicity at The Geysers, Salton Sea, and Coso geothermal fields

    Science.gov (United States)

    Trugman, Daniel T.; Shearer, Peter M.; Borsa, Adrian A.; Fialko, Yuri

    2016-01-01

    Geothermal energy is an important source of renewable energy, yet its production is known to induce seismicity. Here we analyze seismicity at the three largest geothermal fields in California: The Geysers, Salton Sea, and Coso. We focus on resolving the temporal evolution of seismicity rates, which provides important observational constraints on how geothermal fields respond to natural and anthropogenic loading. We develop an iterative, regularized inversion procedure to partition the observed seismicity rate into two components: (1) the interaction rate due to earthquake-earthquake triggering and (2) the smoothly varying background rate controlled by other time-dependent stresses, including anthropogenic forcing. We apply our methodology to compare long-term changes in seismicity to monthly records of fluid injection and withdrawal. At The Geysers, we find that the background seismicity rate is highly correlated with fluid injection, with the mean rate increasing by approximately 50% and exhibiting strong seasonal fluctuations following construction of the Santa Rosa pipeline in 2003. In contrast, at both Salton Sea and Coso, the background seismicity rate has remained relatively stable since 1990, though both experience short-term rate fluctuations that are not obviously modulated by geothermal plant operation. We also observe significant temporal variations in Gutenberg-Richter b value, earthquake magnitude distribution, and earthquake depth distribution, providing further evidence for the dynamic evolution of stresses within these fields. The differing field-wide responses to fluid injection and withdrawal may reflect differences in in situ reservoir conditions and local tectonics, suggesting that a complex interplay of natural and anthropogenic stressing controls seismicity within California's geothermal fields.
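
    One of the quantities tracked above, the Gutenberg-Richter b-value, is conventionally estimated with Aki's maximum-likelihood formula. Below is a sketch on a synthetic catalog; the completeness magnitude and catalog are made up, and the study's actual estimation details are not given in this abstract.

```python
import math
import random

def b_value_mle(mags, m_c):
    """Aki's (1965) maximum-likelihood b-value for magnitudes >= m_c:
    b = log10(e) / (mean(m) - m_c). Binned catalogs usually subtract
    half the bin width from m_c."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic catalog with true b = 1: magnitude excesses above the
# completeness level m_c = 2.0 are exponential with rate b * ln(10).
random.seed(0)
mags = [2.0 + random.expovariate(1.0 * math.log(10)) for _ in range(20000)]
b_hat = b_value_mle(mags, m_c=2.0)
```

    Temporal variations in b would then be tracked by applying such an estimator in sliding time windows over the catalog.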

  18. What is the impact of the August 24, 2016 Amatrice earthquake on the seismic hazard assessment in central Italy?

    Directory of Open Access Journals (Sweden)

    Maura Murru

    2016-11-01

    Full Text Available The recent strong Amatrice earthquake (Mw 6.0), which occurred on August 24, 2016 in the Central Apennines (Italy) in a seismic gap zone, motivated us to study and provide a better understanding of the seismic hazard assessment of the macro-area defined as “Central Italy”. The area affected by the sequence lies between the Mw 6.0 1997 Colfiorito sequence to the north (Umbria-Marche region) and the Campotosto area hit by the 2009 L’Aquila sequence, Mw 6.3 (Abruzzo region), to the south. The Amatrice earthquake occurred during an ongoing effort to update the 2004 seismic hazard map (MPS04) for the Italian territory, requested in 2015 by the Italian Civil Protection Agency from the Center for Seismic Hazard (CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV). In this study we therefore consider new earthquake source data and recently developed ground-motion prediction equations (GMPEs). Our aim was to assess whether the seismic hazard in this area has changed with respect to 2004, the year in which the MPS04 map was released. To understand the impact of the recent earthquakes on the seismic hazard assessment in central Italy, we compared the annual seismic rates calculated using a smoothed seismicity approach over two different periods: the Parametric Catalog of Historical Italian Earthquakes (CPTI15) from 1871 to 2003, and the historical and instrumental catalogs from 1871 up to August 31, 2016. Results are also presented in terms of peak ground acceleration (PGA) at Amatrice, affected by the 2016 sequence, using recent GMPEs.
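
    The smoothed-seismicity rates compared above come from turning catalog epicenters into a spatial rate density. A minimal fixed-bandwidth Gaussian-kernel sketch follows; the coordinates, bandwidth, and grid are hypothetical, and the forecasting models in question use adaptive, retrospectively optimized kernels rather than a single fixed bandwidth.

```python
import numpy as np

def smoothed_rate(epicenters, grid_x, grid_y, sigma_km):
    """Fixed-bandwidth Gaussian kernel estimate of the spatial earthquake
    rate density (events per unit area) on a regular grid."""
    xs, ys = np.meshgrid(grid_x, grid_y, indexing="ij")
    density = np.zeros_like(xs, dtype=float)
    norm = 1.0 / (2.0 * np.pi * sigma_km**2)
    for ex, ey in epicenters:
        d2 = (xs - ex)**2 + (ys - ey)**2
        density += norm * np.exp(-d2 / (2.0 * sigma_km**2))
    return density

# Hypothetical epicenters (km coordinates) on a 1-km grid.
quakes = [(10.0, 10.0), (12.0, 11.0), (40.0, 35.0)]
gx = np.linspace(0.0, 50.0, 51)
gy = np.linspace(0.0, 50.0, 51)
rate = smoothed_rate(quakes, gx, gy, sigma_km=5.0)
```

    Summing the density over the grid cells recovers approximately the number of events, a useful check that the kernel is normalized; scaling by years of catalog then gives annual rates.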

  19. 2016 one-year seismic hazard forecast for the Central and Eastern United States from induced and natural earthquakes

    Science.gov (United States)

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Llenos, Andrea L.; Ellsworth, William L.; Michael, Andrew J.; Rubinstein, Justin L.; McGarr, Arthur F.; Rukstales, Kenneth S.

    2016-03-28

    The U.S. Geological Survey (USGS) has produced a 1-year seismic hazard forecast for 2016 for the Central and Eastern United States (CEUS) that includes contributions from both induced and natural earthquakes. The model assumes that earthquake rates calculated from several different time windows will remain relatively stationary and can be used to forecast earthquake hazard and damage intensity for the year 2016. This assessment is the first step in developing an operational earthquake forecast for the CEUS, and the analysis could be revised with updated seismicity and model parameters. Consensus input models consider alternative earthquake catalog durations, smoothing parameters, maximum magnitudes, and ground motion estimates, and represent uncertainties in earthquake occurrence and diversity of opinion in the science community. Ground shaking seismic hazard for 1-percent probability of exceedance in 1 year reaches 0.6 g (as a fraction of standard gravity [g]) in northern Oklahoma and southern Kansas, and about 0.2 g in the Raton Basin of Colorado and New Mexico, in central Arkansas, and in north-central Texas near Dallas. Near some areas of active induced earthquakes, hazard is higher than in the 2014 USGS National Seismic Hazard Model (NSHM) by more than a factor of 3; the 2014 NSHM did not consider induced earthquakes. In some areas, previously observed induced earthquakes have stopped, so the seismic hazard reverts back to the 2014 NSHM. Increased seismic activity, whether defined as induced or natural, produces high hazard. Conversion of ground shaking to seismic intensity indicates that some places in Oklahoma, Kansas, Colorado, New Mexico, Texas, and Arkansas may experience damage if the induced seismicity continues unabated. The chance of having Modified Mercalli Intensity (MMI) VI or greater (damaging earthquake shaking) is 5–12 percent per year in north-central Oklahoma and southern Kansas, similar to the chance of damage caused by natural earthquakes.

  20. Seismic hazard studies in Egypt

    Directory of Open Access Journals (Sweden)

    Abuo El-Ela A. Mohamed

    2012-12-01

    Full Text Available The study of earthquake activity and the seismic hazard assessment of Egypt are very important due to the large and rapidly spreading investments in national projects, especially the nuclear power plant to be built in the northern part of Egypt. Although Egypt is characterized by low seismicity, it has experienced damaging earthquake effects throughout its history. The seismotectonic setting of Egypt suggests that large earthquakes are possible, particularly along the Gulf of Aqaba–Dead Sea transform, the subduction zone along the Hellenic and Cyprian Arcs, and the northern Red Sea triple junction point; in addition, some significant inland sources at Aswan, Dahshour, and the Cairo-Suez district should be considered. The seismic hazard for Egypt is calculated utilizing a probabilistic approach (for a grid of 0.5° × 0.5°) within a logic-tree framework. Alternative seismogenic models and ground-motion scaling relationships are selected to account for the epistemic uncertainty. Seismic hazard values on rock were calculated to create contour maps for four ground-motion spectral periods and for different return periods. In addition, the uniform hazard spectra for rock sites at 25 different periods, and the probabilistic hazard curves for the cities of Cairo and Alexandria, are graphed. The peak ground acceleration (PGA) values were found to be highest close to the Gulf of Aqaba, about 220 gal for a 475-year return period, while the lowest PGA values, less than 25 gal, were detected in the western part of the Western Desert.

  1. SEISMIC ATTENUATION FOR RESERVOIR CHARACTERIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Joel Walls; M.T. Taner; Naum Derzhi; Gary Mavko; Jack Dvorkin

    2003-10-01

    In this report we show the fundamental concepts of two different methods for computing seismic energy absorption. The first method gives an absolute value of Q and is based on computation with minimum-phase operators. The second method gives a relative energy loss compared to a background trend; it is a rapid, qualitative indicator of anomalous absorption and can be combined with other attributes, such as band-limited acoustic impedance, to indicate areas of likely gas saturation.

  2. Research on the effect estimation of seismic safety evaluation

    Institute of Scientific and Technical Information of China (English)

    邹其嘉; 陶裕禄

    2004-01-01

    Seismic safety evaluation is basic work for determining the seismic resistance requirements of major construction projects. The effect, especially the economic effect, of seismic safety evaluation has been of general concern. The paper gives a model for estimating the effect of seismic safety evaluation and, with some examples, provides a rough calculation of its economic effect.

  3. Seismicity of Afghanistan and vicinity

    Science.gov (United States)

    Dewey, James W.

    2006-01-01

    This publication describes the seismicity of Afghanistan and vicinity and is intended for use in seismic hazard studies of that nation. Included are digital files with information on earthquakes that have been recorded in Afghanistan and vicinity through mid-December 2004. Chapter A provides an overview of the seismicity and tectonics of Afghanistan and defines the earthquake parameters included in the 'Summary Catalog' and the 'Summary of Macroseismic Effects.' Chapter B summarizes compilation of the 'Master Catalog' and 'Sub-Threshold Catalog' and documents their formats. The 'Summary Catalog' itself is presented as a comma-delimited ASCII file, the 'Summary of Macroseismic Effects' is presented as an html file, and the 'Master Catalog' and 'Sub-Threshold Catalog' are presented as flat ASCII files. Finally, this report includes as separate plates a digital image of a map of epicenters of earthquakes occurring since 1964 (Plate 1) and a representation of areas of damage or strong shaking from selected past earthquakes in Afghanistan and vicinity (Plate 2).

  4. Seismic risk mapping for Germany

    Science.gov (United States)

    Tyagunov, S.; Grünthal, G.; Wahlström, R.; Stempniewski, L.; Zschau, J.

    2006-06-01

    The aim of this study is to assess and map the seismic risk for Germany, restricted to the expected losses from damage to residential buildings. There are several earthquake-prone regions in the country, which have produced Mw magnitudes above 6 and up to 6.7, corresponding to observed ground-shaking intensities up to VIII-IX (EMS-98). Some of the earthquake-prone areas are densely populated and highly industrialized, so that the hazard coincides with a high concentration of exposed assets, and the damaging implications of earthquakes must therefore be taken seriously. In this study a methodology is presented and pursued to calculate the seismic risk from (1) intensity-based probabilistic seismic hazard, (2) vulnerability composition models, which are based on the distribution of residential buildings of various structural types in representative communities, and (3) the distribution of assets in terms of replacement costs for residential buildings. The estimates of the risk are treated as primary economic losses due to structural damage to residential buildings. The obtained results are presented as maps of the damage and risk distributions. For a probability level of 90% non-exceedance in 50 years (corresponding to a mean return period of 475 years), the mean damage ratio is up to 20% and the risk up to hundreds of millions of euros in the most endangered communities. The developed models have been calibrated with observed data from several damaging earthquakes in Germany and the nearby area over the past 30 years.
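
    The combination of ingredients (1)-(3) boils down to an expected-loss sum over intensity levels. The toy sketch below uses entirely hypothetical numbers, not the study's data, to show the shape of the calculation.

```python
# All inputs are hypothetical illustrations, not the study's data.
annual_prob = {6: 0.01, 7: 0.003, 8: 0.0008}      # yearly probability of intensity I (EMS-98)
mean_damage_ratio = {6: 0.005, 7: 0.03, 8: 0.12}  # structural damage fraction at intensity I
replacement_cost = 2.0e9                          # euro, residential stock of one community

# Expected annual loss: hazard (1) x vulnerability (2) x exposed assets (3).
expected_annual_loss = sum(
    annual_prob[i] * mean_damage_ratio[i] * replacement_cost
    for i in annual_prob
)
```

    In practice the intensity probabilities come from the probabilistic hazard model and the damage ratios from the building-stock vulnerability composition, per community.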

  5. Building a Smartphone Seismic Network

    Science.gov (United States)

    Kong, Q.; Allen, R. M.

    2013-12-01

    We are exploring building a new type of seismic network using smartphones. The accelerometers in smartphones can be used to record earthquakes, the GPS unit can give an accurate location, and the built-in communication unit makes communication easy for this network. In the future, these smartphones may work as a supplementary network to the current traditional networks for scientific research and real-time applications. To build this network, we developed an application for Android phones and a server to record the acceleration in real time. These records can be sent back to a server in real time and analyzed there. We evaluated the performance of the smartphone as a seismic recording instrument by comparing it with a high-quality accelerometer while mounted on controlled shake tables for a variety of tests, including a noise floor test. Based on daily human activity data recorded by volunteers and the shake table test data, we also developed algorithms for the smartphones to distinguish earthquakes from daily human activities. These form the basis for setting up a new prototype smartphone seismic network in the near future.
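
    A classic baseline for deciding whether an accelerometer record contains an earthquake is the short-term/long-term average (STA/LTA) ratio trigger sketched below. This is a standard detector offered for illustration only; the smartphone network described above uses its own algorithm trained to reject daily human activity, and the trace here is synthetic.

```python
import numpy as np

def sta_lta_trigger(signal, n_sta, n_lta, threshold):
    """Return the first sample index where the short-term average of the
    squared signal exceeds `threshold` times the long-term average, or -1
    if no trigger occurs."""
    sig2 = np.asarray(signal, dtype=float) ** 2
    for i in range(n_lta, len(sig2)):
        sta = sig2[i - n_sta:i].mean()
        lta = sig2[i - n_lta:i].mean()
        if lta > 0 and sta / lta > threshold:
            return i
    return -1

# Synthetic trace: quiet noise, then a sudden strong arrival at sample 500.
rng = np.random.default_rng(1)
trace = 0.01 * rng.standard_normal(1000)
trace[500:] += 0.5 * np.sin(0.3 * np.arange(500))
onset = sta_lta_trigger(trace, n_sta=20, n_lta=200, threshold=5.0)
```

    The trigger fires shortly after the arrival; the hard part on a phone is that footsteps and pocket motion also trip such a detector, which is why activity-aware algorithms are needed.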

  6. Seismic microzonation of Bangalore, India

    Indian Academy of Sciences (India)

    P Anbazhagan; T G Sitharam

    2008-11-01

    In the present study, an attempt has been made to evaluate the seismic hazard, considering local site effects, by carrying out detailed geotechnical and geophysical site characterization in Bangalore, India, to develop microzonation maps. An area of 220 km², encompassing the Bangalore Mahanagara Palike (BMP), has been chosen as the study area. Seismic hazard analysis and microzonation of Bangalore are addressed in three parts: in the first part, the seismic hazard is estimated using seismotectonic and geological information. The second part deals with site characterization using geotechnical and shallow geophysical techniques. In the last part, local site effects are assessed by carrying out one-dimensional (1-D) ground response analysis (using the program SHAKE 2000) with both standard penetration test (SPT) data and shear wave velocity data from multichannel analysis of surface waves (MASW) surveys. Further, field experiments using microtremor studies have also been carried out to evaluate the predominant frequency of the soil columns; the same has been assessed using 1-D ground response analysis and compared with the microtremor results. In addition, the Seed and Idriss simplified approach has been adopted to evaluate soil liquefaction susceptibility and liquefaction resistance. Microzonation maps have been prepared at a scale of 1:20,000. The detailed methodology, along with experimental details, collated data, results and maps, are presented in this paper.

  7. Analytic functions smooth up to the boundary

    CERN Document Server

    1988-01-01

    This research monograph concerns the Nevanlinna factorization of analytic functions that are smooth, in a sense, up to the boundary. The peculiar properties of such a factorization are investigated for the most common classes of Lipschitz-like analytic functions. The book sets out to create a factorization theory as satisfactory as the one that exists for Hardy classes. The reader will find, among other things, the theorem on smoothness for the outer part of a function, the generalization of the theorem of V.P. Havin and F.A. Shamoyan, also known in the mathematical lore as the unpublished Carleson-Jacobs theorem, the complete description of the zero-set of analytic functions continuous up to the boundary, generalizing the classical Carleson-Beurling theorem, and the structure of closed ideals in the new wide range of Banach algebras of analytic functions. The first three chapters assume the reader has taken a standard course on one complex variable; the fourth chapter requires supplementary papers cited there. The monograph addresses...

  8. Multiscale modeling with smoothed dissipative particle dynamics.

    Science.gov (United States)

    Kulkarni, Pandurang M; Fu, Chia-Chun; Shell, M Scott; Leal, L Gary

    2013-06-21

    In this work, we consider two issues related to the use of Smoothed Dissipative Particle Dynamics (SDPD) as an intermediate mesoscale model in a multiscale scheme for solution of flow problems when there are local parts of a macroscopic domain that require molecular resolution. The first is to demonstrate that SDPD with different levels of resolution can accurately represent the fluid properties from the continuum scale all the way to the molecular scale. Specifically, while the thermodynamic quantities such as temperature, pressure, and average density remain scale-invariant, we demonstrate that the dynamic properties are quantitatively consistent with an all-atom Lennard-Jones reference system when the SDPD resolution approaches the atomistic scale. This supports the idea that SDPD can serve as a natural bridge between molecular and continuum descriptions. In the second part, a simple multiscale methodology is proposed within the SDPD framework that allows several levels of resolution within a single domain. Each particle is characterized by a unique physical length scale called the smoothing length, which is inversely related to the local number density and can change on-the-fly. This multiscale methodology is shown to accurately reproduce fluid properties for the simple problem of steady and transient shear flow.

  9. Improved Gait Classification with Different Smoothing Techniques

    Directory of Open Access Journals (Sweden)

    Hu Ng

    2011-01-01

    Full Text Available Gait as a biometric has received great attention recently, as it can offer human identification at a distance without any contact with the feature-capturing device. This is motivated by the increasing number of synchronised closed-circuit television (CCTV) cameras installed in many major towns in order to monitor and prevent crime by identifying criminals or suspects. This paper presents a method to improve gait classification results by applying smoothing techniques to the extracted gait features. The proposed approach consists of three parts: extraction of human gait features from an enhanced human silhouette, smoothing of the extracted gait features, and classification by fuzzy k-nearest neighbours (KNN). The extracted gait features are the height, width, crotch height and step size of the human silhouette, and the joint trajectories. To improve the recognition rate, two of these extracted gait features are smoothed before the classification process in order to alleviate the effect of outliers. The proposed approach has been applied to a dataset of nine subjects walking bidirectionally on an indoor pathway with twelve different covariate factors. From the experimental results, it can be concluded that the proposed approach is effective for gait classification.
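One way the outlier-suppression step can work is a sliding median filter over a per-frame feature series. The abstract does not name the exact smoother, window size, or feature values used below; all are illustrative assumptions.

```python
# Sketch: smoothing a gait-feature time series (e.g. silhouette width per
# frame) with a sliding median filter to suppress outlier frames before
# classification. Window size and data are illustrative assumptions.

def median_smooth(series, window=3):
    """Return a median-filtered copy of `series` (odd window, edges clamped)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo = max(0, i - half)
        hi = min(len(series), i + half + 1)
        neighbourhood = sorted(series[lo:hi])
        out.append(neighbourhood[len(neighbourhood) // 2])
    return out

widths = [10.0, 10.2, 35.0, 10.1, 9.9, 10.3]  # frame 2 is an outlier
smoothed = median_smooth(widths)
print(smoothed)
```

A median filter is a common choice here because, unlike a moving average, a single corrupted frame cannot drag the smoothed value far from its neighbours.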

  10. Cortex phellodendri Extract Relaxes Airway Smooth Muscle

    Directory of Open Access Journals (Sweden)

    Qiu-Ju Jiang

    2016-01-01

    Full Text Available Cortex phellodendri is used to reduce fever and remove dampness and toxin. Berberine is an active ingredient of C. phellodendri. Berberine from Argemone ochroleuca can relax airway smooth muscle (ASM); however, whether the nonberberine components of C. phellodendri have a similar relaxant action was unclear. An n-butyl alcohol extract of C. phellodendri (NBAECP, the nonberberine component) was prepared, which completely inhibits high-K+- and acetylcholine- (ACH-) induced precontraction of airway smooth muscle in tracheal rings and lung slices from control and asthmatic mice, respectively. The contraction induced by high K+ was also blocked by nifedipine, a selective blocker of L-type Ca2+ channels. The ACH-induced contraction was partially inhibited by nifedipine and by pyrazole 3, an inhibitor of TRPC3 and STIM/Orai channels. Taken together, our data demonstrate that NBAECP can relax ASM by inhibiting L-type Ca2+ channels and TRPC3 and/or STIM/Orai channels, suggesting that NBAECP could be developed into a new drug for relieving bronchospasm.

  11. Isotropic Growth of Graphene toward Smoothing Stitching.

    Science.gov (United States)

    Zeng, Mengqi; Tan, Lifang; Wang, Lingxiang; Mendes, Rafael G; Qin, Zhihui; Huang, Yaxin; Zhang, Tao; Fang, Liwen; Zhang, Yanfeng; Yue, Shuanglin; Rümmeli, Mark H; Peng, Lianmao; Liu, Zhongfan; Chen, Shengli; Fu, Lei

    2016-07-26

    The quality of graphene grown via chemical vapor deposition still falls far short of its theoretical potential because of the inevitable formation of grain boundaries. Designing a single-crystal substrate with an anisotropic twofold symmetry for the unidirectional alignment of graphene seeds would be a promising way to eliminate grain boundaries at the wafer scale. However, such a delicate process is easily disrupted by defects or impurities. Here we investigated the isotropic growth behavior of graphene single crystals by melting the growth substrate to obtain an amorphous, isotropic surface, which offers neither grain-orientation induction nor a preponderant growth rate in any particular direction during graphene growth. The as-obtained graphene grains are isotropically round, with mixed edges that exhibit high activity. The orientations of adjacent grains can easily self-adjust to match each other smoothly over the liquid catalyst, whose facile atom delocalization, together with the low rotational steric hindrance of the isotropic grains, permits the smooth stitching of adjacent graphene grains. The adverse effects of grain boundaries are thereby eliminated, and the excellent transport performance of graphene is better guaranteed. Moreover, this isotropic growth mode can be extended to other layered nanomaterials, such as hexagonal boron nitride and the transition-metal chalcogenides, to obtain large intrinsic films with few defects.

  12. Smooth Tubercle Bacilli: Neglected Opportunistic Tropical Pathogens

    Directory of Open Access Journals (Sweden)

    Djaltou eAboubaker

    2016-01-01

    Full Text Available Smooth tubercle bacilli (STB), including "Mycobacterium canettii", are members of the Mycobacterium tuberculosis complex (MTBC) that cause non-contagious tuberculosis in humans. This group comprises fewer than one hundred isolates, characterized by smooth colonies and cordless organisms. Most STB isolates have been obtained from patients exposed to the Republic of Djibouti, but seven isolates, including the three seminal ones obtained by Georges Canetti between 1968 and 1970, were recovered from patients in France, Madagascar, sub-Saharan East Africa and French Polynesia. STB form a genetically heterogeneous group of MTBC organisms with large 4.48 ± 0.05 Mb genomes, which may link Mycobacterium kansasii to the MTBC organisms. The lack of inter-human transmission suggests a yet unknown environmental reservoir. Clinical data indicate contamination via the respiratory tract, with the digestive tract as an alternative route. Further epidemiological and clinical studies are warranted to elucidate the remaining areas of uncertainty regarding these unusual mycobacteria and the tuberculosis they cause.

  13. Importance of direct and indirect triggered seismicity

    CERN Document Server

    Helmstetter, A; Helmstetter, Agnes; Sornette, Didier

    2003-01-01

    Using the simple ETAS branching model of seismicity, which assumes that each earthquake can trigger other earthquakes, we quantify the role played by the cascade of triggered seismicity in controlling the rate of aftershock decay, as well as the overall level of seismicity in the presence of a constant external seismicity source. We show that, in this model, the proportion of triggered seismicity is equal to the proportion of secondary plus later-generation aftershocks, and is given by the average number of triggered events per earthquake. Based on these results and on the observation that a large fraction of seismicity consists of triggered earthquakes, we conclude that, similarly, a large fraction of the aftershocks occurring a few hours or days after a mainshock are triggered indirectly by the mainshock.
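The claim that the triggered fraction equals the average number of triggered events per earthquake (the branching ratio n) can be checked by summing expected generation sizes in a subcritical cascade. The function below works with expected values rather than a stochastic simulation; the parameter values are illustrative.

```python
# Sketch: in a subcritical branching cascade with branching ratio n < 1,
# each event triggers on average n direct aftershocks. Summing the expected
# size of each generation shows the triggered fraction equals n, as the
# abstract states. Background rate and generation count are illustrative.

def triggered_fraction(n, background=1.0, generations=200):
    """Expected fraction of triggered events for branching ratio n (0 <= n < 1)."""
    size = background        # generation 0: background (untriggered) events
    total = 0.0
    triggered = 0.0
    for g in range(generations):
        total += size
        if g > 0:
            triggered += size
        size *= n             # each event triggers n offspring on average
    return triggered / total

for n in (0.2, 0.5, 0.8):
    assert abs(triggered_fraction(n) - n) < 1e-9
```

Generation g has expected size n^g, so the total is 1/(1 - n) and the triggered part is n/(1 - n); their ratio is exactly n, independent of the background rate.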

  14. Seismic activity at the western Pyrenean edge

    Science.gov (United States)

    Ruiz, M.; Gallart, J.; Díaz, J.; Olivera, C.; Pedreira, D.; López, C.; González-Cortina, J. M.; Pulgar, J. A.

    2006-01-01

    The present-day seismicity at the westernmost part of the Pyrenean domain reported from permanent networks is of low to moderate magnitude. However, it is poorly constrained due to the scarce station coverage of the area. We present new seismic data collected from a temporary network deployed there for 17 months that provides an enhanced image of the seismic activity and its tectonic implications. Our results delineate the westward continuity of the E-W Pyrenean band of seismicity, through the Variscan Basque Massifs along the Leiza Fault, ending up at the Hendaya Fault. This seismicity belt is distributed on a crustal scale, dipping northward to almost 30 km depth. Other relevant seismic events located in the area can be related to the central segment of the Pamplona fault, and to different E-W thrust structures.

  15. Ambiguous Adaptation

    DEFF Research Database (Denmark)

    Møller Larsen, Marcus; Lyngsie, Jacob

    We investigate why some exchange relationships terminate prematurely. We argue that investments in informal governance structures induce premature termination in relationships already governed by formal contracts. The formalized adaptive behavior of formal governance structures and the flexible a...

  16. Toothbrush Adaptations.

    Science.gov (United States)

    Exceptional Parent, 1987

    1987-01-01

    Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)

  17. Hedonic "adaptation"

    OpenAIRE

    2008-01-01

    People live in a world in which they are surrounded by potential disgust elicitors such as "used" chairs, air, silverware, and money, as well as excretory activities. People function in this world by ignoring most of these, by active avoidance, reframing, or adaptation. The issue is particularly striking for professions, such as morticians, surgeons, or sanitation workers, in which there is frequent contact with major disgust elicitors. In this study, we study the "adaptation" process to d...

  18. Strategic Adaptation

    DEFF Research Database (Denmark)

    Andersen, Torben Juul

    2015-01-01

    This article provides an overview of theoretical contributions that have influenced the discourse around strategic adaptation, including contingency perspectives, strategic fit reasoning, decision structure, information processing, corporate entrepreneurship, and strategy process. The related concepts of strategic renewal, dynamic managerial capabilities, dynamic capabilities, and strategic response capabilities are discussed and contextualized against strategic responsiveness. The insights derived from this article are used to outline the contours of a dynamic process of strategic adaptation...

  19. A switch to reduce resistivity in smoothed particle magnetohydrodynamics

    CERN Document Server

    Tricco, Terrence S

    2013-01-01

    Artificial resistivity is included in Smoothed Particle Magnetohydrodynamics simulations to capture shocks and discontinuities in the magnetic field. Here we present a new method for adapting the strength of the applied resistivity so that shocks are captured while the dissipation of the magnetic field away from shocks is minimised. Our scheme uses the gradient of the magnetic field as a shock indicator, setting α_B = h|∇B|/|B|, such that resistivity is switched on only where strong discontinuities are present. The advantage of this approach is that the resistivity parameter does not depend on the absolute field strength. The new switch is benchmarked on a series of shock-tube tests, demonstrating its ability to capture shocks correctly. It is compared against a previous switch proposed by Price & Monaghan (2005), showing that it leads to lower dissipation of the field and, in particular, that it succeeds at capturing shocks in the regime where the Alfvén speed is much less than the sound spe...
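The behaviour of the switch α_B = h|∇B|/|B| can be sketched on a 1D field with a jump. The grid, field values, and smoothing length below are illustrative, and the gradient is taken by finite differences rather than the SPH kernel derivatives an SPMHD code would use.

```python
# Sketch: evaluating the shock-indicator switch alpha_B = h|dB/dx|/|B| on a
# 1D magnetic-field profile with a discontinuity. Grid, field values and
# smoothing length h are illustrative assumptions; an SPMHD code would
# compute the gradient with kernel derivatives, not finite differences.

def alpha_switch(B, dx, h):
    """Per-cell switch values using a central finite-difference gradient."""
    alphas = []
    for i in range(len(B)):
        lo = B[max(0, i - 1)]
        hi = B[min(len(B) - 1, i + 1)]
        grad = (hi - lo) / (2.0 * dx)
        alphas.append(h * abs(grad) / abs(B[i]))
    return alphas

# Uniform field on each side, sharp jump in the middle:
B = [1.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0]
alpha = alpha_switch(B, dx=0.1, h=0.1)
assert max(alpha) == alpha[3]   # switch peaks at the discontinuity
assert alpha[0] == 0.0          # and vanishes where the field is uniform
```

Because both h|∇B| and |B| scale with the field, the switch responds to the relative sharpness of the jump, not the absolute field strength, which is the key property claimed in the abstract.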

  20. Growth curve analysis for plasma profiles using smoothing splines

    Energy Technology Data Exchange (ETDEWEB)

    Imre, K.

    1993-05-01

    We are developing a profile-analysis code for the statistical estimation of the parametric dependencies of the temperature and density profiles in tokamaks. Our code uses advanced statistical techniques to determine the optimal fit, i.e. the fit which minimizes the predictive error. For a dataset of forty TFTR Ohmic profiles, our preliminary results indicate that the profile shape depends almost exclusively on q_a′, but that the shape dependencies are not Gaussian. We are now comparing various shape models on the TFTR data. In the first six months, we have completed the core modules of the code, including a B-spline package with variable knot locations, a data-based method to determine the optimal smoothing parameters, self-consistent estimation of the bias errors, and adaptive fitting near the plasma edge. The visualization graphics already include three-dimensional surface plots, discharge-by-discharge plots of the predicted curves with error bars together with the actual measured values, and plots of the basis functions with errors.
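Choosing a smoothing parameter by minimising predictive error, as the abstract describes, can be illustrated with a toy leave-one-out selection. A centred moving-average window stands in for the B-spline smoother, and the profile data are invented for the example; neither is taken from the paper.

```python
# Sketch: data-based selection of a smoothing parameter by minimising the
# leave-one-out predictive error. A moving-average window stands in for the
# B-spline smoother of the abstract; the "profile" data are toy values.

def loo_error(y, window):
    """Mean leave-one-out squared prediction error for a centred moving average."""
    half = window // 2
    err = 0.0
    for i in range(len(y)):
        neighbours = [y[j] for j in range(max(0, i - half),
                                          min(len(y), i + half + 1)) if j != i]
        pred = sum(neighbours) / len(neighbours)   # predict y[i] from neighbours
        err += (y[i] - pred) ** 2
    return err / len(y)

# Noisy roughly parabolic "profile":
y = [0.0, 0.9, 2.1, 2.9, 4.2, 4.8, 4.1, 3.0, 2.2, 0.8, 0.1]
best = min(range(3, 10, 2), key=lambda w: loo_error(y, w))
print("best window:", best)
```

The same principle applies when the free parameter is a spline smoothing weight or the number of knots: whichever setting predicts held-out points best is taken as optimal.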