TRANSIENT LUNAR PHENOMENA: REGULARITY AND REALITY
International Nuclear Information System (INIS)
Crotts, Arlin P. S.
2009-01-01
Transient lunar phenomena (TLPs) have been reported for centuries, but their nature is largely unsettled, and even their existence as a coherent phenomenon is controversial. Nonetheless, TLP data show regularities in the observations; a key question is whether this structure is imposed by processes tied to the lunar surface, or by terrestrial atmospheric or human observer effects. I interrogate an extensive catalog of TLPs to gauge how human factors determine the distribution of TLP reports. The sample is grouped according to variables that should produce differing results if the determining factors involve humans rather than phenomena tied to the lunar surface. Features dependent on human factors can then be excluded. Regardless of how the sample is split, the results are similar: ∼50% of reports originate from near Aristarchus, ∼16% from Plato, ∼6% from recent, major impacts (Copernicus, Kepler, Tycho, and Aristarchus), plus several at Grimaldi. Mare Crisium produces a robust signal in some cases (however, Crisium is too large for a 'feature' as defined). The consistency of TLP counts for these features indicates that ∼80% of them may be real. Some commonly reported sites disappear from the robust averages, including Alphonsus, Ross D, and Gassendi. These reports begin almost exclusively after 1955, when TLPs became widely known and many more (and less experienced) observers searched for TLPs. In a companion paper, we compare the spatial distribution of robust TLP sites to transient outgassing (seen by Apollo and Lunar Prospector instruments). To a high confidence, robust TLP sites correlate strongly with sites of lunar outgassing, further arguing for the reality of TLPs.
Major earthquakes occur regularly on an isolated plate boundary fault.
Berryman, Kelvin R; Cochran, Ursula A; Clark, Kate J; Biasi, Glenn P; Langridge, Robert M; Villamor, Pilar
2012-06-29
The scarcity of long geological records of major earthquakes, on different types of faults, makes testing hypotheses of regular versus random or clustered earthquake recurrence behavior difficult. We provide a fault-proximal major earthquake record spanning 8000 years on the strike-slip Alpine Fault in New Zealand. Cyclic stratigraphy at Hokuri Creek suggests that the fault ruptured to the surface 24 times, and event ages yield a 0.33 coefficient of variation in recurrence interval. We associate this near-regular earthquake recurrence with a geometrically simple strike-slip fault, with high slip rate, accommodating a high proportion of plate boundary motion that works in isolation from other faults. We propose that it is valid to apply time-dependent earthquake recurrence models for seismic hazard estimation to similar faults worldwide.
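As a concrete illustration of the statistic quoted above, the coefficient of variation is the sample standard deviation of the recurrence intervals divided by their mean. A minimal Python sketch, using invented event ages rather than the actual Hokuri Creek dates:

```python
import numpy as np

# Coefficient of variation of recurrence intervals; the event ages below are
# invented for illustration and are NOT the Hokuri Creek dates.
event_ages = np.array([0, 260, 580, 900, 1150, 1500, 1800, 2150, 2400, 2750])

intervals = np.diff(event_ages)                 # recurrence intervals (years)
cv = intervals.std(ddof=1) / intervals.mean()   # CV = sigma / mean

print(f"mean interval = {intervals.mean():.0f} yr, CV = {cv:.2f}")
```

A CV near 0 indicates quasi-periodic recurrence, a CV of 1 is consistent with a Poisson process, and a CV well above 1 indicates clustering.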
Stochastic dynamic modeling of regular and slow earthquakes
Aso, N.; Ando, R.; Ide, S.
2017-12-01
Both regular and slow earthquakes are slip phenomena on plate boundaries and can be simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only to explain real physical properties but also to evaluate the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not captured by models with deterministic governing equations. To evaluate the effect of heterogeneity at these smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic model. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such a fluctuating external force can also be treated as a stochastic external force. The healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve the mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal
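The idea of perturbing a deterministic slip law with Gaussian noise can be illustrated much more simply than with a BIEM code. The sketch below is a zero-dimensional spring-slider analogue with a stochastically perturbed friction threshold; it is an assumption-laden toy, not the authors' mode III model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-dimensional spring-slider toy (NOT the authors' BIEM model): stress
# loads linearly; a slip event occurs when stress exceeds a friction
# threshold perturbed by Gaussian noise - a crude stochastic friction law.
k, v_load, dt = 1.0, 1.0, 0.01   # stiffness, load-point velocity, time step
mu0, sigma = 1.0, 0.05           # mean friction threshold, noise amplitude

stress = 0.0
slip_times = []
for step in range(10_000):
    stress += k * v_load * dt                        # steady tectonic loading
    threshold = mu0 + sigma * rng.standard_normal()  # stochastic friction
    if stress >= threshold:                          # event: full stress drop
        slip_times.append(step * dt)
        stress = 0.0

intervals = np.diff(slip_times)
print(f"{len(slip_times)} events, recurrence CV = "
      f"{intervals.std(ddof=1) / intervals.mean():.2f}")
```

With sigma = 0 the events would be perfectly periodic; the noise spreads the recurrence intervals, in the same spirit as the Gaussian deviations the abstract describes.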
Clustered and transient earthquake sequences in mid-continents
Liu, M.; Stein, S. A.; Wang, H.; Luo, G.
2012-12-01
Earthquakes result from sudden release of strain energy on faults. On plate boundary faults, strain energy accumulates steadily from relatively rapid relative plate motion, so large earthquakes continue to occur as long as motion continues on the boundary. In contrast, such steady accumulation of strain energy does not occur on faults in mid-continents, because the far-field tectonic loading is not steadily distributed between faults, and because stress perturbations from complex fault interactions and other stress triggers can be significant relative to the slow tectonic stressing. Consequently, mid-continental earthquakes are often temporally clustered and transient, and spatially migrating. This behavior is well illustrated by large earthquakes in North China in the past two millennia, during which no large earthquake repeated on the same fault segment, but moment release between large fault systems was complementary. Slow tectonic loading in mid-continents also causes long aftershock sequences. We show that the recent small earthquakes in the Tangshan region of North China are aftershocks of the 1976 Tangshan earthquake (M 7.5), rather than indicators of a new phase of seismic activity in North China, as many fear. Understanding the transient behavior of mid-continental earthquakes has important implications for assessing earthquake hazards. The sequence of large earthquakes in the New Madrid Seismic Zone (NMSZ) in the central US, which includes a cluster of M~7 events in 1811-1812 and perhaps a few similar ones in the past millennium, is likely a transient process, releasing previously accumulated elastic strain on recently activated faults. If so, this earthquake sequence will eventually end. Using simple analysis and numerical modeling, we show that the large NMSZ earthquakes may be ending now or in the near future.
Regularized inversion of controlled source and earthquake data
International Nuclear Information System (INIS)
Ramachandran, Kumar
2012-01-01
Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward modelling trial-and-error methods have been superseded by tomographic methods which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem since both the velocity of the medium and ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill-posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach.
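A single linearized, damped (Tikhonov-regularized) least-squares step of the kind described above can be sketched as follows. The sensitivity matrix, the model, and the damping weight are all invented for illustration; a real travel-time tomography code would also need ray tracing, smoothing constraints, and iteration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearized travel-time step d = G m, solved with Tikhonov (damping)
# regularization to stabilise an under-determined, noisy system.
n_data, n_model = 30, 50                      # fewer data than unknowns
G = rng.standard_normal((n_data, n_model))    # sensitivity (ray-path) matrix
m_true = np.zeros(n_model)
m_true[10:15] = 1.0                           # true slowness perturbation
d = G @ m_true + 0.01 * rng.standard_normal(n_data)  # noisy travel-time residuals

lam = 0.1  # damping weight; larger values bias the solution toward zero
# Minimize ||G m - d||^2 + lam^2 ||m||^2 via an augmented least-squares system.
A = np.vstack([G, lam * np.eye(n_model)])
b = np.concatenate([d, np.zeros(n_model)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("data misfit:", np.linalg.norm(G @ m_est - d))
```

The damping term is what tames the non-uniqueness: without it, the 30-by-50 system has infinitely many exact solutions.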
Nonlinear waves in earth crust faults: application to regular and slow earthquakes
Gershenzon, Naum; Bambakidis, Gust
2015-04-01
The genesis, development and cessation of regular earthquakes continue to be major problems of modern geophysics. How are earthquakes initiated? What factors determine the rupture velocity, slip velocity, rise time and geometry of rupture? How do accumulated stresses relax after the main shock? These and other questions still need to be answered. In addition, slow slip events have attracted much attention as an additional source for monitoring fault dynamics. Recently discovered phenomena such as deep non-volcanic tremor (NVT), low frequency earthquakes (LFE), very low frequency earthquakes (VLF), and episodic tremor and slip (ETS) have enhanced and complemented our knowledge of fault dynamics. At the same time, these phenomena give rise to new questions about their genesis, properties and relation to regular earthquakes. We have developed a model of macroscopic dry friction which efficiently describes laboratory frictional experiments [1], basic properties of regular earthquakes including post-seismic stress relaxation [3], the occurrence of ambient and triggered NVT [4], and ETS events [5, 6]. Here we will discuss the basics of the model and its geophysical applications. References: [1] Gershenzon N.I. & G. Bambakidis (2013) Tribology International, 61, 11-18, http://dx.doi.org/10.1016/j.triboint.2012.11.025 [2] Gershenzon, N.I., G. Bambakidis and T. Skinner (2014) Lubricants, 2; arXiv:1411.1030v2 [3] Gershenzon N.I., Bykov V.G. and Bambakidis G. (2009) Physical Review E 79, 056601 [4] Gershenzon, N.I., G. Bambakidis (2014a) Bull. Seismol. Soc. Am., 104, 4, doi:10.1785/0120130234 [5] Gershenzon, N.I., G. Bambakidis, E. Hauser, A. Ghosh, and K.C. Creager (2011) Geophys. Res. Lett., 38, L01309, doi:10.1029/2010GL045225 [6] Gershenzon, N.I. and G. Bambakidis (2014) Bull. Seismol. Soc. Am., (in press); arXiv:1411.1020
Directory of Open Access Journals (Sweden)
Timofei K. Zlobin
2012-01-01
The catastrophic Simushir earthquake occurred on 15 November 2006 in the Kuril-Okhotsk region in the Middle Kuril Islands, a transition zone between the Eurasian continent and the Pacific Ocean. It was followed by numerous strong earthquakes. It is established that the catastrophic earthquake was prepared on a site characterized by increased relative effective pressures, located at the border of the low-pressure area (Figure 1). Based on data from GlobalCMT (Harvard), earthquake focal mechanisms were reconstructed, and tectonic stresses, the seismotectonic setting and the earthquake distribution pattern were studied to analyse the field of stresses in the region before the Simushir earthquake (Figures 2 and 3; Table 1). Five areas of various types of movement were determined. Three of them stretch along the Kuril Islands. It is established that seismodislocations in earthquake focal areas are regularly distributed. In each of the determined areas, displacements of a specific type (shear or reverse shear) are concentrated and give evidence of the alternation of zones characterized by horizontal stretching and compression. The presence of the horizontal stretching and compression zones can be explained by a model of subduction (Figure 4). Detailed studies of the state of stresses of the Kuril region confirm such zones (Figure 5). The established specific features of tectonic stresses before the catastrophic Simushir earthquake of 15 November 2006 contribute to studies of earthquake forecasting problems. The state of stresses and the geodynamic conditions suggesting occurrence of new earthquakes can be assessed from the data on the distribution of horizontal compression, stretching and shear areas of the Earth's crust and the upper mantle in the Kuril region.
An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...
Imbricated slip rate processes during slow slip transients imaged by low-frequency earthquakes
Lengliné, O.; Frank, W.; Marsan, D.; Ampuero, J. P.
2017-12-01
Low Frequency Earthquakes (LFEs) often occur in conjunction with transient strain episodes, or Slow Slip Events (SSEs), in subduction zones. Their focal mechanism and location, consistent with shear failure on the plate interface, argue for a model where LFEs are discrete dynamic ruptures in an otherwise slowly slipping interface. SSEs are mostly observed by surface geodetic instruments with limited resolution and it is likely that only the largest ones are detected. The time synchronization of LFEs and SSEs suggests that we could use the recorded LFEs to constrain the evolution of SSEs, and notably of the geodetically undetected small ones. However, inferring slow slip rate from the temporal evolution of LFE activity is complicated by the strong temporal clustering of LFEs. Here we apply dedicated statistical tools to retrieve the temporal evolution of SSE slip rates from the time history of LFE occurrences in two subduction zones, Mexico and Cascadia, and in the deep portion of the San Andreas fault at Parkfield. We find temporal characteristics of LFEs that are similar across these three different regions. The longer-term episodic slip transients present in these datasets show a slip-rate decay with time after the passage of the SSE front, possibly as t^(-1/4). They are composed of multiple short-term transients with steeper slip-rate decay, as t^(-α) with α between 1.4 and 2. We also find that the maximum slip rate of SSEs has a continuous distribution. Our results indicate that creeping faults host intermittent deformation at various scales resulting from the imbricated occurrence of numerous slow slip events of various amplitudes.
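One common way to estimate a decay exponent such as the α above is linear regression in log-log space, since a power law t^(-α) plots as a straight line of slope -α. A hedged sketch on synthetic data, not the LFE catalogs used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic slip-rate decay after an SSE front: rate ~ t^(-alpha).
# Values are illustrative, not the Mexico/Cascadia/Parkfield measurements.
alpha_true = 1.7
t = np.logspace(-1, 1, 50)                    # time since front passage (days)
rate = t ** (-alpha_true) * np.exp(0.05 * rng.standard_normal(t.size))

# A power law is a straight line in log-log space; the slope gives -alpha.
slope, intercept = np.polyfit(np.log(t), np.log(rate), 1)
alpha_est = -slope
print(f"recovered alpha = {alpha_est:.2f}")
```

In practice the fit would be applied to binned LFE occurrence rates, and the fitted window (here a fixed two decades in time) matters for the recovered exponent.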
Directory of Open Access Journals (Sweden)
C. D. Reddy
2012-02-01
Throughout the world, the tsunami generation potential of some large under-sea earthquakes contributes significantly to regional seismic hazard, giving rise to significant risk in near-shore provinces with sizeable human settlements, often referred to as coastal seismic risk. In this context, we show from the pertinent GPS data that the transient stresses generated by the viscoelastic relaxation process taking place in the mantle are capable of rupturing major faults by stress transfer from the mantle through the lower crust, including triggering additional rupture on other major faults. We also infer that postseismic relaxation at relatively large depths can push some fault segments to reactivation, causing failure sequences. As an illustration of these effects, we consider in detail the earthquake sequence comprising six events, starting from the main event of Mw = 7.5 on 10 August 2009 and tapering off to a small earthquake of Mw = 4.5 on 2 February 2011, over a period of eighteen months in the intensely seismic Andaman Islands between India and Myanmar. The persisting transient stresses, spatio-temporal seismic pattern, modeled Coulomb stress changes, and the southward migration of earthquake activity have increased the probability of moderate earthquakes recurring in the northern Andaman region, particularly closer to or somewhat south of Diglipur.
Directory of Open Access Journals (Sweden)
M. R. Varley
2007-11-01
The study of the Earth's electromagnetic fields prior to the occurrence of strong seismic events has repeatedly revealed cases where transient anomalies, often deemed possible earthquake precursors, were observed on electromagnetic field recordings carried out at the surface, in the atmosphere and in near space. In an attempt to understand the nature of such signals, several models have been proposed based upon the exhibited characteristics of the observed anomalies and different possible generation mechanisms, with electric earthquake precursors (EEP) appearing to be the main candidates for short-term earthquake precursors. This paper discusses the detection of a ULF electric field transient anomaly and its identification as a possible electric earthquake precursor accompanying the Kythira M=6.9 earthquake of 8 January 2006.
Thatcher, W. R.; Pollitz, F.
2009-12-01
Here we review the state of knowledge of postseismic deformation, discuss outstanding problems, and suggest new research directions. Although space geodetic methods are providing unprecedented detail on post-earthquake movements, the duration of transient motion is poorly constrained, its mechanisms are disputed, and its effect on cycle-long deformation is uncertain. Detectable transients are well documented for at least a year following most M>6 earthquakes and for as long as 10-50 years following M>7 events. They probably persist for even longer times at smaller amplitudes. However, in general, observations provide no information on relaxation times longer than the measurement interval, and transient movements longer than ~50 years are essentially unconstrained. Transients due to poro-elastic relaxation and aseismic fault slip or localized ductile shearing concentrate close to the earthquake rupture and probably have time constants of a few years or less; causative mechanisms have only been treated approximately, and the actual micro-mechanical processes have not yet been comprehensively modeled and compared with observations. There is an emerging consensus that transient ductile flow follows large earthquakes, continues for decades or longer, and occurs primarily in the uppermost mantle rather than the lower crust. The depth distribution of effective viscosity is poorly constrained. Analysis of the resolving power of the post-1999 Hector Mine earthquake GPS dataset (Pollitz & Thatcher, this meeting) shows that only viscosities in the uppermost ~40 km of the mantle are resolved, and at most only two independent parameters can be estimated. It is uncertain whether transient upper mantle ductile flow accurately follows strain-rate-dependent power-law behavior and whether effective viscosity decreases ~exponentially with depth as predicted by laboratory results. Late-cycle interseismic deformation patterns provide potentially decisive constraints and can indirectly supply
The evolving interaction of low-frequency earthquakes during transient slip.
Frank, William B; Shapiro, Nikolaï M; Husker, Allen L; Kostoglodov, Vladimir; Gusev, Alexander A; Campillo, Michel
2016-04-01
Observed along the roots of seismogenic faults where the locked interface transitions to a stably sliding one, low-frequency earthquakes (LFEs) primarily occur as event bursts during slow slip. Using an event catalog from Guerrero, Mexico, we employ a statistical analysis to consider the sequence of LFEs at a single asperity as a point process, and deduce the level of time clustering from the shape of its autocorrelation function. We show that while the plate interface remains locked, LFEs behave as a simple Poisson process, whereas they become strongly clustered in time during even the smallest slow slip, consistent with interaction between different LFE sources. Our results demonstrate that bursts of LFEs can result from the collective behavior of asperities whose interaction depends on the state of the fault interface.
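The diagnostic described above, distinguishing a Poisson process from a time-clustered one via the autocorrelation of event counts, can be sketched as follows. The event times are synthetic, and the bin width and burst parameters are arbitrary choices, not those of the Guerrero analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

def count_autocorr(times, t_max, n_bins, lag=1):
    """Lag autocorrelation of binned event counts (~0 for a Poisson process)."""
    counts, _ = np.histogram(times, bins=n_bins, range=(0.0, t_max))
    c = counts - counts.mean()
    return float((c[:-lag] * c[lag:]).sum() / (c * c).sum())

t_max = 1000.0
# Stationary Poisson process: uniform event times, no clustering.
poisson_times = np.sort(rng.uniform(0.0, t_max, 2000))
# Clustered process: 40 bursts of 50 events with exponential spread in time.
starts = rng.uniform(0.0, t_max, 40)
clustered_times = np.sort((starts[:, None] + rng.exponential(10.0, (40, 50))).ravel())

r_poisson = count_autocorr(poisson_times, t_max, 200)
r_clustered = count_autocorr(clustered_times, t_max, 200)
print(f"Poisson r1 = {r_poisson:.3f}, clustered r1 = {r_clustered:.3f}")
```

The Poisson sequence gives a lag-1 autocorrelation near zero, while the bursty sequence gives a clearly positive one, mirroring the locked-interface versus slow-slip regimes the abstract contrasts.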
DEFF Research Database (Denmark)
Hoechner, Andreas; Sobolev, Stephan V.; Einarsson, Indriði
2011-01-01
or postseismic relaxation, leads to difficulties in finding a consistent interpretation of obtained viscosities. Using standard Maxwell viscosity of 1e19 Pa s to analyze postseismic near-field GPS time series from the 2004 Sumatra-Andaman earthquake requires large time-dependent afterslip with a relaxation time...... Maxwell model with afterslip is not compatible with observations, since even large afterslip has a more localized effect than transient relaxation due to the main earthquake, which in turn is in agreement with observations. Thus, a combination of ground- and space-based geodetic observations is very useful...
Ouzounov, D.; Pulinets, S. A.; Hernandez-Pajares, M.; Alberto Garcia Rigo, A. G.; Davidenko, D.; Hatzopoulos, N.; Kafatos, M.
2015-12-01
The recent M7.8 Nepal earthquake of April 25, 2015 was the largest recorded earthquake to hit this nation since 1934. We prospectively and retrospectively analyzed the transient variations of three different physical parameters: outgoing longwave radiation (OLR), GPS/TEC, and the thermodynamic properties of the lower atmosphere. These changes characterize the state of the atmosphere and ionosphere several days before the onset of this earthquake. Our preliminary results show that in mid-March 2015 a rapid increase of emitted infrared radiation was observed from the satellite data, and an anomaly near the epicenter reached its maximum on April 21-22. The ongoing analysis of satellite radiation revealed another transient anomaly on May 3, probably associated with the M7.3 event of May 12, 2015. The analysis of air temperature from ground stations shows similar patterns of rapid increases, offset 1-2 days earlier than the satellite transient anomalies. The GPS/TEC data indicate an increase and variation in electron density reaching a maximum value during April 22-24. We found a strong negative TEC anomaly in the crest of the EIA (Equatorial Ionospheric Anomaly) on April 21 and a strong positive one on April 24, 2015. Our results show strong ionospheric effects not only in the changes of EIA intensity but also in the latitudinal movements of the EIA crests.
Directory of Open Access Journals (Sweden)
S. I. Sherman
2015-01-01
gradually, yet do not disappear. (4) In the western regions of Central Asia, the recurrence time of strong earthquakes is about 25 years. It correlates with the regular activation of the seismic process in Asia, which is manifested in almost the same time intervals; the recurrence time of a strong earthquake controlled by a specific active fault seems to exceed 100–250 years. (5) The mechanisms of all the strong earthquakes contain a slip component that is often accompanied by a compression component. The slip component corresponds to shearing along the faults revealed by geological methods, i.e. it correlates with rock mass displacements in the near-fault medium. (6) GPS geodetic measurements show that shearing develops in the NW direction in Tibet. Further northward, the direction changes to sublatitudinal. At the boundary of ~105°E, southward of 30°N, the slip vectors attain the SE direction. Further southward of 20°N, at the eastern edge of the Himalayan thrust, the slip vectors again attain the sublatitudinal direction. High rates of recent crust movements are typical of the Tibet region. (7) The NW direction is typical of the opposite vectors related to the Pacific subduction zone. The resultant of the NE and NW vectors provides for the right-lateral displacement of the rocks in the submeridional border zone. (8) The geodynamic zones around the central zone (wherein the strong earthquakes are located) are significantly less geodynamically active and thus facilitate the accumulation of compression stresses in the central zone, providing for the transition of rocks to the quasi-plastic state and even flow. This is the principal feature distinguishing the region wherein the strong earthquakes are located from its neighboring areas. In Central Asia, the structural positions of recent strong earthquakes are determined with respect to the following factors: (1) the western regions separated in the studied territory; (2) the larger thickness of the crust in
Rockwell, T. K.
2010-12-01
A long paleoseismic record at Hog Lake on the central San Jacinto fault (SJF) in southern California documents evidence for 18 surface ruptures in the past 3.8-4 ka. This yields a long-term recurrence interval of about 210 years, consistent with its slip rate of ~16 mm/yr and field observations of 3-4 m of displacement per event. However, during the past 3800 years, the fault has switched from a quasi-periodic mode of earthquake production, during which the recurrence interval is similar to the long-term average, to clustered behavior with inter-event periods as short as a few decades. There are also some periods as long as 450 years during which there were no surface ruptures, and these periods are commonly followed by one to several closely timed ruptures. The coefficient of variation (CV) for the timing of these earthquakes is about 0.6 for the past 4000 years (17 intervals). Similar behavior has been observed on the San Andreas Fault (SAF) south of the Transverse Ranges, where clusters of earthquakes have been followed by periods of lower seismic production, and the CV is as high as 0.7 for some portions of the fault. In contrast, the central North Anatolian Fault (NAF) in Turkey, which ruptured in 1944, appears to have produced ruptures with similar displacement at fairly regular intervals for the past 1600 years. With a CV of 0.16 for timing, and close to 0.1 for displacement, the 1944 rupture segment near Gerede appears to have been both periodic and characteristic. The SJF and SAF are part of a broad plate boundary system with multiple parallel strands with significant slip rates. Additional faults lie to the east (Eastern California shear zone) and west (faults of the LA basin and southern California Borderland), which makes the southern SAF system a complex and broad plate boundary zone. In comparison, the 1944 rupture section of the NAF is simple, straight and highly localized, which contrasts with the complex system of parallel faults in southern
Guo, H.; Zhang, H.
2016-12-01
High-precision earthquake relocation is a central task for monitoring earthquakes and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations, to reduce the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To exploit the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method removes the event origin time and station correction terms from the inversion system and cancels out the effects of velocity uncertainties both near and outside the source region. We tested and applied the new method to regular earthquakes in northern California to validate its performance. In comparison, among the three DD location methods, the new double-pair DD method determines more accurate relative locations, and the station-pair DD method better improves the absolute locations. Thus, we further propose a new location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick first arrivals and derive WCC event-pair differential times, so the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered due to the low precision of the relative locations. The ability of double-pair data
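The core idea behind differential times can be shown with a toy arithmetic example: for two nearby events recorded at a common station, the unknown station delay and the shared path anomaly cancel when the arrivals are differenced, leaving only the relative term. All numbers below are invented:

```python
# Toy event-pair differential time: shared terms cancel on subtraction.
path_anomaly = 0.30    # s, error from mis-modelled structure outside the source region
station_delay = 0.12   # s, unknown station correction

t_true_1, t_true_2 = 10.00, 10.40   # true travel times of two nearby events
obs_1 = t_true_1 + path_anomaly + station_delay
obs_2 = t_true_2 + path_anomaly + station_delay

dd = obs_2 - obs_1   # event-pair differential time
print(f"{dd:.2f}")   # prints 0.40: only the relative travel-time term survives
```

The double-pair method described in the abstract applies this cancellation twice, differencing across a pair of stations as well as a pair of events, so that both station terms and event origin-time terms drop out of the inversion.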
Directory of Open Access Journals (Sweden)
J. B. Fletcher
1994-06-01
The M 7.4 Landers earthquake triggered widespread seismicity in the Western U.S. Because the transient dynamic stresses induced at regional distances by the Landers surface waves are much larger than the expected static stresses, the magnitude and the characteristics of the dynamic stresses may bear upon the earthquake triggering mechanism. The Landers earthquake was recorded on the UPSAR array, a group of 14 triaxial accelerometers located within a 1-square-km region 10 km southwest of the town of Parkfield, California, 412 km northwest of the Landers epicenter. We used a standard geodetic inversion procedure to determine the surface strain and stress tensors as functions of time from the observed dynamic displacements. Peak dynamic strains and stresses at the Earth's surface are about 7 microstrain and 0.035 MPa, respectively, and they have a flat amplitude spectrum between 2 s and 15 s period. These stresses agree well with stresses predicted from a simple rule of thumb based upon the ground velocity spectrum observed at a single station. Peak stresses ranged from about 0.035 MPa at the surface to about 0.12 MPa between 2 and 14 km depth, with the sharp increase of stress away from the surface resulting from the rapid increase of rigidity with depth and from the influence of surface wave mode shapes. Comparison of Landers-induced static and dynamic stresses at the hypocenter of the Big Bear aftershock provides a clear example that faults are stronger on time scales of tens of seconds than on time scales of hours or longer.
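The "simple rule of thumb" mentioned above is commonly taken to be the plane-wave relation strain ≈ v/c and stress ≈ μv/c, where v is particle velocity and c is phase velocity. A sketch with assumed values, chosen only to be of the order of magnitude reported in the abstract:

```python
# Plane-wave rule of thumb: strain ~ v/c, stress ~ mu*v/c. All values are
# assumptions for illustration, not the UPSAR measurements.
mu = 5e9     # Pa, near-surface shear modulus (assumed)
v = 0.025    # m/s, peak surface-wave ground velocity (assumed)
c = 3500.0   # m/s, surface-wave phase velocity (assumed)

strain = v / c
stress = mu * strain
print(f"strain ~ {strain:.1e}, stress ~ {stress / 1e6:.3f} MPa")
```

With these assumed values the estimate lands at roughly 7 microstrain and a few hundredths of a MPa, the same order as the array-derived peaks quoted above.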
Lamont, E. A.; Lewis, J.; Byrne, T. B.; Crespi, J. M.; Rau, R.
2010-12-01
Modeling of earthquake focal mechanisms and coseismic GPS data from an area at the southern tip of the 1999 Chi Chi rupture suggests the existence of an evolving upper plate tear. The earthquakes occur in what we refer to as the Luliao seismic zone and define a steeply northeast-dipping tabular volume that extends from the surface to approximately 11 km. We find that the focal mechanisms from the six-month period following the 1999 Chi-Chi Earthquake yield best-fitting strain tensors that suggest the dominance of strike-slip faulting. Our strain inversions, using a micropolar continuum model, reveal orogen-perpendicular (NW-SE) minimum stretching (i.e., shortening) and orogen-parallel (NE-SW) maximum stretching. Additionally, our inversions indicate plane strain with positive, non-zero relative vorticity values, suggestive of counter-clockwise (map view) block rotations. Published coseismic GPS data provide additional evidence that this tabular volume of crust is the locus of strike-slip faulting accompanied by block rotation. Preliminary 2D strain inversions for GPS stations that span the inverted focal mechanisms reveal negative (counterclockwise) rotation values and principal strain axes that are generally consistent with our focal mechanism inversions. We interpret our findings to reflect an accommodation zone that is activated by differential westward expansion of the foreland fold and thrust belt. In particular, this zone separates an area of greater westward propagation near Taichung from an area of lesser propagation to the south near Chiayi. Differential expansion of the orogen appears to be influenced by an eastward pointing, lower-plate promontory south of the Sanyi-Puli seismic zone. Unlike the Luliao events, the Sanyi-Puli seismic zone extends from the near surface to approximately 30 km and we have interpreted it as a reactivated continental margin fracture zone inherited from South China Sea rifting. The lower-plate promontory is coincident with the
Peterson, Anna; Carlfjord, Siw; Schaller, Anne; Gerdle, Björn; Larsson, Britt
2017-07-01
Systematic and regular pain assessment has been shown to improve pain management. Well-functioning pain assessment requires strategies informed by well-established theory. This study evaluates documented pain assessments reported in medical records and by patients, including reassessment using a Numeric Rating Scale (NRS) after patients receive rescue medication. Documentation surveys (DS) and patient surveys (PS) were performed at baseline (BL), after six months, and after 12 months in 44 in-patient wards at the three hospitals in Östergötland County, Sweden. Nurses and nurse assistants received training on pain assessment and support. The Knowledge to Action Framework guided the implementation of new routines. According to the DS, pain assessment using the NRS increased significantly, from 7% at baseline to 36% at 12 months. With the education and support strategies, systematic pain assessment increased, an encouraging finding considering the complex contexts of in-patient facilities. However, the achieved assessment levels, and especially the reassessments related to rescue medication, were clinically unsatisfactory. Future studies should include nursing staff and physicians and increase interactivity, for example by providing online education support. A discrepancy between documented and patient-reported reassessment in association with given rescue medication might indicate that nurses need better ways to provide pain relief. The fairly low level of patient-reported pain assessment via the NRS, and of documented use of the NRS, before and 12 months after the educational programme stresses the need for greater emphasis on pain management in nursing education. Approaches that differ from traditional educational attempts, such as interactive implementations, might complement educational programmes given at the workplace. Standardized routines for pain management that include the possibility for nurses to deliver pain medication within well-defined margins might improve pain management and increase the use
Liu, Y.; Rice, J. R.
2005-12-01
In 3D modeling of long-term tectonic loading and earthquake sequences on a shallow subduction fault [Liu and Rice, 2005], with depth-variable rate and state friction properties, we found that aseismic transient slip episodes emerge spontaneously with only a simplified representation of the effects of metamorphic fluid release. That involved assuming an effective normal stress in the downdip region that is constant in time but uniformly low. As suggested by observations in several major subduction zones [Obara, 2002; Rogers and Dragert, 2003; Kodaira et al., 2004], the presence of fluids, possibly released from dehydration reactions beneath the seismogenic zone, and their pressurization within the fault zone may play an important role in causing aseismic transients and associated non-volcanic tremors. To investigate the effects of fluids in the subduction zone, particularly on the generation of aseismic transients and their various features, we develop a more complete physical description of the pore pressure evolution (specifically, pore pressure increase due to supply from dehydration reactions and shear heating, and decrease due to transport and dilatancy during slip), and incorporate that into the rate- and state-based 3D modeling. We first incorporated two important factors, dilatancy and shear heating, following Segall and Rice [1995, 2004] and Taylor [1998]. In the 2D simulations (slip varies with depth only), a dilatancy-stabilizing effect is seen which slows down the seismic rupture front and can prevent rapid slip from extending all the way to the trench, similarly to Taylor [1998]. Shear heating increases the pore pressure and results in faster coseismic rupture propagation and larger final slips. In the 3D simulations, dilatancy also stabilizes the along-strike rupture propagation of both seismic and aseismic slip. That is, aseismic slip transients migrate along strike faster with a shorter Tp (the characteristic time for pore pressure in the fault core to re
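The rate-and-state framework underlying this modeling can be sketched in its steady-state form; the parameter values below are generic illustrations, not those of the Liu and Rice model:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) * ln(v / v0).
    With a - b < 0 (velocity weakening), faster slip lowers friction."""
    return mu0 + (a - b) * math.log(v / v0)

def shear_resistance(v, sigma_n, pore_pressure):
    """Fault strength tau = mu_ss(v) * (sigma_n - p):
    pore-pressure rise reduces the effective normal stress and weakens the fault."""
    return steady_state_friction(v) * (sigma_n - pore_pressure)

# Same slip rate, dry versus pressurized fault zone (Pa):
tau_dry = shear_resistance(1e-6, sigma_n=100e6, pore_pressure=0.0)
tau_wet = shear_resistance(1e-6, sigma_n=100e6, pore_pressure=80e6)
```

The second comparison is the mechanism the abstract invokes: dehydration-derived pore pressure lowers the effective normal stress term, so the same friction law yields much lower fault strength downdip.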
Tung, S.; Masterlark, T.; Dovovan, T.
2018-05-01
The large M7.3 aftershock occurred 17 days after the 2015 M7.8 Gorkha earthquake. We investigate if this sequence is mechanically favored by the mainshock via time-dependent fluid migration and pore pressure recovery. This study uses finite element models of fully-coupled poroelastic coseismic and postseismic behavior to simulate the evolving stress and pore-pressure fields. Using simulations of a reasonable permeability, the hypocenter was destabilized by an additional 0.15 MPa of Coulomb failure stress change (ΔCFS) and 0.17 MPa of pore pressure (Δp), the latter of which induced lateral and upward diffusive fluid flow (up to 2.76 mm/day) in the aftershock region. The M7.3 location is predicted next to a local maximum of Δp and a zone of positive ΔCFS northeast of Kathmandu. About 60% of the aftershocks occurred within zones having either Δp > 0 or ΔCFS > 0. Particularly in the eastern flank of the epicentral area, 83% of the aftershocks experienced postseismic fluid pressurization and 88% of them broke out with positive pore pressure, which are discernibly more than those with positive ΔCFS (71%). The transient scalar field of fluid pressurization provides a good proxy to predict aftershock-prone areas in space and time, because it does not require extraction of an assumed vector field from transient stress tensor fields as is the case for ΔCFS calculations. A bulk permeability of 8.32 × 10⁻¹⁸ m² is resolved to match the transient response and the timing of the M7.3 rupture which occurred at the peak of the ΔCFS time-series. This estimate is consistent with the existing power-law permeability-versus-depth models, suggesting an intermediately-fractured upper crust coherent with the local geology of the central Himalayas. The contribution of poroelastic triggering is verified against different poroelastic moduli and surface flow-pressure boundaries, suggesting that a poroelastic component is essential to account for the time interval separating the
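The ΔCFS quantity used here has a standard Coulomb form; a minimal sketch, assuming a generic friction coefficient, where the inputs only echo the abstract's order of magnitude and are not the study's actual field values:

```python
def delta_cfs(d_tau, d_sigma_n, d_p, mu=0.6):
    """Coulomb failure stress change on a receiver fault:
        dCFS = d_tau + mu * (d_sigma_n + d_p)
    with d_sigma_n positive for unclamping, so a pore-pressure rise
    (d_p > 0) also pushes the fault toward failure."""
    return d_tau + mu * (d_sigma_n + d_p)

# Illustrative magnitudes in Pa (~0.1 MPa scale, as in the abstract):
change = delta_cfs(d_tau=0.05e6, d_sigma_n=0.0, d_p=0.17e6)
```

The scalar role of Δp that the abstract emphasizes is visible here: it enters additively under μ, without needing the resolved shear and normal tractions that the Δτ and Δσ_n terms require.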
Recurrent slow slip event likely hastened by the 2011 Tohoku earthquake
Hirose, Hitoshi; Kimura, Hisanori; Enescu, Bogdan; Aoi, Shin
2012-01-01
Slow slip events (SSEs) are a mode of fault deformation distinct from the fast faulting of regular earthquakes. Such transient episodes have been observed at plate boundaries in a number of subduction zones around the globe. The SSEs near the Boso Peninsula, central Japan, are among the best documented SSEs, with the longest repeating history, of almost 30 years, and have a recurrence interval of 5 to 7 years. A remarkable characteristic of the slow slip episodes is the accompanying earthquake swarm activity. Our stable, long-term seismic observations enable us to detect SSEs using the recorded earthquake catalog, by considering an earthquake swarm as a proxy for a slow slip episode. Six recurrent episodes are identified in this way since 1982. The average SSE interoccurrence interval is 68 months; however, there are significant fluctuations about this mean. While a regular cycle can be explained using a simple physical model, the mechanisms responsible for the observed fluctuations are poorly known. Here we show that the latest SSE in the Boso Peninsula was likely hastened by the stress transfer from the March 11, 2011 great Tohoku earthquake. Moreover, a similar mechanism accounts for the delay of an SSE in 1990 by a nearby earthquake. The low stress buildups and drops during the SSE cycle can explain the strong sensitivity of these SSEs to stress transfer from external sources. PMID:22949688
DEFF Research Database (Denmark)
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool, based on asymptotic sampling theory, for iterative estimation of weight-decay parameters. The basic idea is to do a gradient desce...
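The effect of the weight-decay parameter being estimated can be illustrated on a one-parameter quadratic loss (a generic toy problem, not the authors' estimator):

```python
# One gradient-descent step on the loss 0.5*(w - t)^2 + 0.5*alpha*w^2:
# the decay term alpha*w shrinks the weight toward zero.
def gd_step(w, target, alpha, lr=0.1):
    grad = (w - target) + alpha * w   # data gradient + weight-decay gradient
    return w - lr * grad

w = 5.0
for _ in range(1000):
    w = gd_step(w, target=2.0, alpha=0.5)
# Converges to target / (1 + alpha), not to target: decay biases weights small,
# which is exactly why alpha must itself be tuned, e.g. by the iterative
# estimation scheme this abstract describes.
```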
International Nuclear Information System (INIS)
Ward, P.L.
1978-01-01
The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures
UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA
Directory of Open Access Journals (Sweden)
IONIŢĂ Elena
2015-06-01
This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The Platonic and Archimedean polyhedra are modeled and unfolded using the 3ds Max program. This paper is intended as an example of descriptive geometry applications.
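A quick cross-check on any such solid model: the vertex, edge, and face counts of the five regular (Platonic) polyhedra satisfy Euler's formula, which a correct net must preserve when folded:

```python
# (V, E, F) for the five Platonic solids; Euler's formula: V - E + F = 2.
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
euler_ok = all(v - e + f == 2 for v, e, f in platonic.values())
```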
International Nuclear Information System (INIS)
Hofmann, R.B.
1995-01-01
Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository
Coordinate-invariant regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-01-01
A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc
Gardonio, B.; Marsan, D.; Socquet, A.; Bouchon, M.; Jara, J.; Sun, Q.; Cotte, N.; Campillo, M.
2018-02-01
Slow slip events (SSEs) regularly occur near the Boso Peninsula, central Japan. Their recurrence time decreased from 6.4 to 2.2 years between 1996 and 2014. It is important to better constrain the slip history of this area, especially as models show that the recurrence intervals could become shorter prior to the occurrence of a large interplate earthquake nearby. We analyze the seismic waveforms of more than 2,900 events (M≥1.0) taking place in the Boso Peninsula, Japan, from 1 April 2004 to 4 November 2015, calculating the correlation and the coherence between each pair of events in order to define groups of repeating earthquakes. The cumulative number of repeating earthquakes suggests the existence of two slow slip events that have escaped detection so far. Small transient displacements observed in the time series of nearby GPS stations confirm these results. The detection scheme coupling repeating earthquakes and GPS analysis allows the detection of small SSEs that classical methods had not seen before. This work brings new information on the diversity of SSEs and demonstrates that the SSEs in the Boso area have a more complex history than previously considered.
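The pairwise grouping step can be sketched with a zero-lag normalized cross-correlation on synthetic waveforms; the 0.95 threshold below is illustrative, not the study's actual correlation and coherence criteria:

```python
import numpy as np

def ncc(x, y):
    """Zero-lag normalized cross-correlation between two equal-length waveforms."""
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def is_repeater(x, y, threshold=0.95):
    """Pairs whose waveforms correlate above the threshold are grouped as
    repeating earthquakes (same patch re-rupturing)."""
    return ncc(x, y) >= threshold

# Synthetic damped-sine "waveforms": two near-identical events and one different source.
t = np.linspace(0.0, 1.0, 200)
w1 = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
w2 = w1 + 0.01 * np.random.default_rng(0).standard_normal(t.size)  # repeater
w3 = np.sin(2 * np.pi * 9 * t) * np.exp(-3 * t)                    # non-repeater
```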
Bartlow, N. M.
2017-12-01
Slow Earthquake Hunters is a new citizen science project to detect, catalog, and monitor slow slip events. Slow slip events, also called "slow earthquakes", occur when faults slip too slowly to generate significant seismic radiation. They typically take between a few days and over a year to occur, and are most often found on subduction zone plate interfaces. While not dangerous in and of themselves, recent evidence suggests that monitoring slow slip events is important for earthquake hazards, as slow slip events have been known to trigger damaging "regular" earthquakes. Because they do not radiate seismically, slow slip events are detected with a variety of methods, most commonly continuous geodetic Global Positioning System (GPS) stations. There is now a wealth of GPS data in some regions that experience slow slip events, but a reliable automated method to detect them in GPS data remains elusive. This project aims to recruit human users to view GPS time series data, with some post-processing to highlight slow slip signals, and flag slow slip events for further analysis by the scientific team. Slow Earthquake Hunters will begin with data from the Cascadia subduction zone, where geodetically detectable slow slip events with a duration of at least a few days recur at regular intervals. The project will then expand to other areas with slow slip events or other transient geodetic signals, including other subduction zones and areas with strike-slip faults. The project has not yet been rolled out to the public and is in a beta-testing phase. This presentation will show results from an initial pilot group of student participants at the University of Missouri and solicit feedback on the future of Slow Earthquake Hunters.
ASSESSMENT OF EARTHQUAKE HAZARDS ON WASTE LANDFILLS
DEFF Research Database (Denmark)
Zania, Varvara; Tsompanakis, Yiannis; Psarropoulos, Prodromos
Earthquake hazards may arise as a result of: (a) transient ground deformation, which is induced by seismic wave propagation, and (b) permanent ground deformation, which is caused by abrupt fault dislocation. Since the adequate performance of waste landfills after an earthquake is of utmost importance, the current study examines the impact of both types of earthquake hazards by performing efficient finite-element analyses. These also took into account the potential slip displacement development along the geosynthetic interfaces of the composite base liner. At first, the development of permanent
... North Dakota, and Wisconsin. The core of the earth was the first internal structural element to be identified. In 1906 R.D. Oldham discovered it from his studies of earthquake records. The inner core is solid, and the outer core is liquid and so does not transmit ...
Davis, Amanda; Gray, Ron
2018-01-01
December 26, 2004 was one of the deadliest days in modern history, when a 9.3 magnitude earthquake--the third largest ever recorded--struck off the coast of Sumatra in Indonesia (National Centers for Environmental Information 2014). The massive quake lasted at least 10 minutes and devastated the Indian Ocean. The quake displaced an estimated…
Geological and historical evidence of irregular recurrent earthquakes in Japan.
Satake, Kenji
2015-10-28
Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres, releasing strains accumulated over decades to centuries of plate motion. Assuming a simple 'characteristic earthquake' model in which similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies of past earthquakes, including geological traces of giant (M∼9) earthquakes, indicate a variety of sizes and recurrence intervals of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that the average recurrence interval of great earthquakes is approximately 100 years, but tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence intervals of a few hundred years and a few thousand years had been recognized, but studies show that the recent three Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for long-term forecasting, and several approaches, such as the use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone, are being pursued by government committees. © 2015 The Author(s).
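The 'characteristic earthquake' forecasts discussed here are typically conditional probabilities from a renewal model; a minimal sketch using a lognormal recurrence distribution, where the distribution choice and the aperiodicity value are assumptions for illustration only:

```python
import math

def lognormal_cdf(t, mean_interval, cov):
    """CDF of a lognormal recurrence-time model, parameterized by the mean
    interval and the coefficient of variation (aperiodicity) cov."""
    sigma2 = math.log(1.0 + cov * cov)
    mu = math.log(mean_interval) - 0.5 * sigma2
    z = (math.log(t) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_probability(elapsed, horizon, mean_interval, cov=0.4):
    """P(event within `horizon` years | no event for `elapsed` years)."""
    f_now = lognormal_cdf(elapsed, mean_interval, cov)
    f_later = lognormal_cdf(elapsed + horizon, mean_interval, cov)
    return (f_later - f_now) / (1.0 - f_now)

# A ~100-year cycle, observed 90 years after the last event: 30-year probability.
p30 = conditional_probability(elapsed=90.0, horizon=30.0, mean_interval=100.0)
```

The abstract's point is precisely that the inputs to such a calculation (a single mean interval and size) are poorly constrained when recurrence is irregular, so the output probability inherits that uncertainty.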
van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime
2016-01-01
This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier,
Nijholt, Antinus
1980-01-01
Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars, called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular
International Nuclear Information System (INIS)
Muir, M.D.
1975-01-01
The design and design philosophy of a high-performance, extremely versatile transient analyzer are described. This sub-system was designed to be controlled through the data acquisition computer system, which allows hands-off operation. Thus it may be placed on the experiment side of the high-voltage safety break between the experimental device and the control room. This analyzer provides control features which are extremely useful for data acquisition from PPPL diagnostics. These include dynamic sample-rate changing, which may be intermixed with multiple post-trigger operations with variable-length blocks using normal, peak-to-peak, or integrate modes. The discussion includes general remarks on the advantages of adding intelligence to transient analyzers, a detailed description of the characteristics of the PPPL transient analyzer, a description of the hardware, firmware, control language and operation of the PPPL transient analyzer, and general remarks on future trends in this type of instrumentation, both at PPPL and in general
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
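A few core constructs from the kind of syntax such a reference catalogs, shown in Python's re module as one of the covered flavors:

```python
import re

# Word boundaries, quantified character classes, and capture groups with backreferences:
assert re.search(r"\bcat\b", "concatenate a cat") is not None
assert re.findall(r"\d+", "rooms 12 and 345") == ["12", "345"]
assert re.sub(r"(\w+)@(\w+)", r"\2.\1", "user@host") == "host.user"

# Greedy vs non-greedy quantifiers, a distinction every flavor in the book shares:
greedy = re.search(r"<.*>", "<a><b>").group()   # matches as much as possible
lazy = re.search(r"<.*?>", "<a><b>").group()    # matches as little as possible
```

The greedy pattern consumes the whole string `<a><b>`, while the lazy one stops at the first `>`, yielding `<a>`.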
Connecting slow earthquakes to huge earthquakes.
Obara, Kazushige; Kato, Aitaro
2016-07-15
Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.
Stein, R. S.
2012-12-01
The 2004 M=9.2 Sumatra earthquake claimed what seemed an unfathomable 228,000 lives, although because of its size, we could at least assure ourselves that it was an extremely rare event. But in the short space of 8 years, the Sumatra quake no longer looks like an anomaly, and it is no longer even the worst disaster of the century: 80,000 deaths in the 2005 M=7.6 Pakistan quake; 88,000 deaths in the 2008 M=7.9 Wenchuan, China quake; 316,000 deaths in the M=7.0 Haiti quake. In each case, poor design and construction were unable to withstand the ferocity of the shaken earth, compounded by inadequate rescue, medical care, and shelter. How could the toll continue to mount despite the advances in our understanding of quake risk? The world's population is flowing into megacities, and many of these migration magnets lie astride plate boundaries. Caught between these opposing demographic and seismic forces are 50 cities of at least 3 million people threatened by large earthquakes, the targets of chance. What we know for certain is that no one will take protective measures unless they are convinced they are at risk. Furnishing that knowledge is the animating principle of the Global Earthquake Model, launched in 2009. At the very least, everyone should be able to learn what his or her risk is; at the very least, our community owes the world an estimate of that risk. So, first and foremost, GEM seeks to raise quake-risk awareness. We have no illusions that maps or models raise awareness; earthquakes do. But when a quake strikes, people need a credible place to go to answer the question: how vulnerable am I, and what can I do about it? The Global Earthquake Model is being built with GEM's new open-source engine, OpenQuake. GEM is also assembling the global data sets without which we will never improve our understanding of where, how large, and how frequently earthquakes will strike, what impacts they will have, and how those impacts can be lessened by
Earthquake Early Warning Systems
Pei-Yang Lin
2011-01-01
Because of Taiwan’s unique geographical environment, earthquake disasters occur frequently in Taiwan. The Central Weather Bureau collated earthquake data from between 1901 and 2006 (Central Weather Bureau, 2007) and found that 97 earthquakes had occurred, of which, 52 resulted in casualties. The 921 Chichi Earthquake had the most profound impact. Because earthquakes have instant destructive power and current scientific technologies cannot provide precise early warnings in advance, earthquake ...
Regularization by External Variables
DEFF Research Database (Denmark)
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well-known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
Regularities of Multifractal Measures
Indian Academy of Sciences (India)
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately and recombine them without affecting density properties. Next, we ...
Stochastic analytic regularization
International Nuclear Information System (INIS)
Alfaro, J.
1984-07-01
Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)
Efficient multidimensional regularization for Volterra series estimation
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time-domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time-invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd-degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with that of white-box (physical) models.
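The linear special case that this method extends, regularized impulse-response (FIR) estimation, can be sketched as ridge regression; a plain Tikhonov penalty is used here, not the structured smoothness priors of the paper:

```python
import numpy as np

def regularized_fir(u, y, n_taps, lam):
    """Estimate a FIR model y[k] = sum_i h[i] * u[k-i] by ridge regression:
    h = argmin ||y - Phi h||^2 + lam * ||h||^2."""
    n = len(u)
    phi = np.zeros((n, n_taps))
    for i in range(n_taps):
        phi[i:, i] = u[: n - i]          # delayed copies of the input signal
    return np.linalg.solve(phi.T @ phi + lam * np.eye(n_taps), phi.T @ y)

# Identify a known 3-tap system from noisy input/output data.
rng = np.random.default_rng(1)
h_true = np.array([0.5, 0.3, -0.2])
u = rng.standard_normal(400)
y = np.convolve(u, h_true)[: len(u)] + 0.01 * rng.standard_normal(400)
h_hat = regularized_fir(u, y, n_taps=3, lam=1e-3)
```

For a Volterra series the regressor matrix additionally contains products of delayed inputs, which is where the memory and regularization-design issues the abstract addresses arise.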
Improvements in GRACE Gravity Fields Using Regularization
Save, H.; Bettadpur, S.; Tapley, B. D.
2008-12-01
The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solution shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events, such as the Great Sumatra-Andaman Earthquake of 2004, are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins such as the Indus and Nile are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
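The parameter-selection idea behind the "L-ribbon" can be illustrated with ordinary Tikhonov regularization on a small ill-conditioned toy problem, tracing the residual-norm versus solution-norm trade-off; this is a generic sketch, not the GRACE estimation itself:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Ill-conditioned toy problem: a Hilbert matrix amplifies small data noise,
# mimicking how the gravity inverse problem amplifies observation errors.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(2).standard_normal(n)

# Sweep lambda; each point (residual norm, solution norm) lies on the L-curve,
# whose corner marks a good compromise between fitting and damping.
lams = [10.0 ** k for k in range(-5, 1)]
curve = [(np.linalg.norm(A @ tikhonov(A, b, l) - b),
          np.linalg.norm(tikhonov(A, b, l))) for l in lams]
```

Stronger damping always trades a larger residual for a smaller solution norm; the abstract's criterion of "no attenuation of the signal" corresponds to staying near the corner rather than over-damping.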
Sparse structure regularized ranking
Wang, Jim Jing-Yan; Sun, Yijun; Gao, Xin
2014-01-01
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse
Regular expression containment
DEFF Research Database (Denmark)
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We
Supersymmetric dimensional regularization
International Nuclear Information System (INIS)
Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.
1980-01-01
There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher-order loops are also discussed
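The momentum-integral step in rule (2) can be made concrete on a standard one-loop integral (a textbook Euclidean-signature result quoted for illustration, not a formula from this paper):

```latex
\int \frac{d^{D}k}{(2\pi)^{D}}\,\frac{1}{\left(k^{2}+m^{2}\right)^{2}}
  \;=\; \frac{\Gamma\!\left(2-\tfrac{D}{2}\right)}{(4\pi)^{D/2}}\,
        \left(m^{2}\right)^{D/2-2}.
```

As D → 4 the gamma function develops the pole Γ(2 − D/2) ≈ 2/(4 − D) − γ_E, which isolates the ultraviolet divergence after the D = 4 algebra of rule (1) has been carried out.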
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function that considers the parameter regularization and the MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
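The robustness that motivates the MCC can be sketched numerically; a minimal illustration in which the Gaussian kernel width `sigma` and the L2 penalty form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def correntropy(y_pred, y_true, sigma=1.0):
    """Sample correntropy: mean Gaussian kernel of the residuals.
    Each sample's contribution is bounded by 1, so outlying labels
    cannot dominate the criterion."""
    r = y_pred - y_true
    return np.mean(np.exp(-r**2 / (2 * sigma**2)))

def mcc_objective(w, X, y, lam=0.1, sigma=1.0):
    """Regularized MCC objective (illustrative form): maximize the
    correntropy minus an L2 penalty on the predictor parameters."""
    return correntropy(X @ w, y, sigma) - lam * np.dot(w, w)

y_true  = np.array([1.0, -1.0, 1.0, -1.0])
good    = np.array([0.9, -1.1, 1.0, -1.0])
outlier = np.array([0.9, -1.1, 1.0,  9.0])   # one grossly wrong label

# The bad label saturates its Gaussian kernel instead of dominating,
# unlike the squared error, which it blows up by orders of magnitude.
assert correntropy(good, y_true) > correntropy(outlier, y_true)
mse = lambda a, b: np.mean((a - b)**2)
assert mse(outlier, y_true) / mse(good, y_true) > 100
```

The bounded per-sample contribution is exactly why a correntropy-based criterion tolerates label noise that a quadratic loss does not.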
Tectonic feedback and the earthquake cycle
Lomnitz, Cinna
1985-09-01
The occurrence of cyclical instabilities along plate boundaries at regular intervals suggests that the process of earthquake causation differs in some respects from the elastic rebound model in its simplest forms. The tectonic feedback model modifies this original concept by providing a physical interaction between the loading rate and the state of strain on the fault. Two examples are developed: (a) central Chile and (b) Mexico. The predictions of earthquake hazards for the two types of models are compared.
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2018-04-01
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
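The unsupervised manifold regularization step can be sketched as Laplacian eigenmaps on a sampled state space; the chain-of-states graph below is a toy assumption, not the paper's benchmark tasks:

```python
import numpy as np

# Smooth, geometry-adapted basis functions for value-function
# approximation: low-order eigenvectors of the graph Laplacian built
# from sampled states (here, a simple chain of 20 states).
n = 20
A = np.zeros((n, n))
for i in range(n - 1):            # adjacency of the state chain
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A         # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)    # eigenvalues in ascending order

# The Laplacian is positive semi-definite with a constant null vector.
assert vals[0] < 1e-9 and np.all(vals >= -1e-9)

# Smoothness: the second eigenvector varies slowly along the chain,
# so it is a good feature for value functions that respect the
# geometry of the state space.
phi = vecs[:, 1]
assert np.max(np.abs(np.diff(phi))) < 0.5
```

A value function would then be approximated as a linear combination of the first few columns of `vecs`, which is the sense in which the features are data-driven and adapted to the state-space geometry.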
Implications of fault constitutive properties for earthquake prediction.
Dieterich, J H; Kilgore, B
1996-04-30
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip-rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from the sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes in earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. With the second mechanism, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
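The rate- and state-dependent formulation can be illustrated at steady state; a minimal sketch in which the Dieterich-law form and the parameter values a, b, v0 are typical laboratory figures, not values taken from this paper:

```python
import numpy as np

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction:
        mu_ss(v) = mu0 + (a - b) * ln(v / v0)
    With a - b < 0 the fault is velocity weakening: friction drops as
    slip accelerates, a prerequisite for the unstable, accelerating
    slip that accompanies earthquake nucleation."""
    return mu0 + (a - b) * np.log(v / v0)

# With a < b (velocity weakening), faster sliding means lower friction.
slow = steady_state_friction(1e-6)   # reference slip rate (m/s)
fast = steady_state_friction(1e-3)   # 1000x faster slip
assert fast < slow
```

The sign of a - b is what separates stable creep (a > b) from the stick-slip instability that the nucleation modeling in the abstract builds on.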
Earthquakes: hydrogeochemical precursors
Ingebritsen, Steven E.; Manga, Michael
2014-01-01
Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.
Energy Technology Data Exchange (ETDEWEB)
Ts' ai, T H
1977-11-01
Chinese folk wisdom has long seen a relationship between ground water and earthquakes. Before an earthquake there is often an unusual change in the ground water level and volume of flow. Changes in the amount of particulate matter in ground water, as well as changes in color, bubbling, gas emission, noises and geysers, are also often observed before earthquakes. Analysis of these features can help predict earthquakes. Other factors unrelated to earthquakes can cause some of these changes, too. As a first step it is necessary to find sites which are sensitive to changes in ground stress to be used as sensor points for predicting earthquakes. The necessary features are described. Recording of seismic waves from earthquake aftershocks is also an important part of earthquake prediction.
Ionospheric earthquake precursors
International Nuclear Information System (INIS)
Bulachenko, A.L.; Oraevskij, V.N.; Pokhotelov, O.A.; Sorokin, V.N.; Strakhov, V.N.; Chmyrev, V.M.
1996-01-01
Results of experimental studies of ionospheric earthquake precursors, the development of programs on processes in the earthquake focus, and the physical mechanisms by which various types of precursors form are considered. The composition of an experimental space system for monitoring earthquake precursors is determined. 36 refs., 5 figs.
Children's Ideas about Earthquakes
Simsek, Canan Lacin
2007-01-01
Earthquakes, a natural disaster, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles in compliance with this, the damage they suffer will be reduced to that extent. In particular, good training regarding earthquakes received in primary schools is considered…
Diverse Regular Employees and Non-regular Employment (Japanese)
MORISHIMA Motohiro
2011-01-01
Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm that explores the sparse structure and uses it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object can be represented as a sparse linear combination of all other objects, and the combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed over both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
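The sparse-combination step can be sketched with a plain proximal-gradient (ISTA) lasso solver; the toy data and the graph-smoothness comment are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def lasso_ista(D, x, lam=0.1, iters=500):
    """Sparse coding of x as a combination of the columns of D via
    ISTA (proximal gradient descent on the lasso objective)."""
    step = 1.0 / np.linalg.norm(D, 2)**2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        w = w - step * (D.T @ (D @ w - x))                         # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)   # soft threshold
    return w

def lasso_obj(D, x, w, lam=0.1):
    return 0.5 * np.sum((D @ w - x)**2) + lam * np.sum(np.abs(w))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))         # 8 toy multimedia objects, 5 features each
D = np.delete(X, 0, axis=1)         # dictionary: all objects except object 0
w = lasso_ista(D, X[:, 0])          # sparse coefficients for object 0

# |w_j| acts as a similarity between object 0 and object j; ranking scores
# f would then be regularized by the smoothness term sum_j |w_j|(f_0 - f_j)^2.
# ISTA with step <= 1/L is a descent method, so the objective cannot exceed
# its value at the zero initialization.
assert lasso_obj(D, X[:, 0], w) <= lasso_obj(D, X[:, 0], np.zeros(D.shape[1]))
```

The paper alternates between this coding step and a ranking-score update; the sketch shows only the sparse-coding half.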
'Regular' and 'emergency' repair
International Nuclear Information System (INIS)
Luchnik, N.V.
1975-01-01
Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)
Regularization of divergent integrals
Felder, Giovanni; Kazhdan, David
2016-01-01
We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.
Regularizing portfolio optimization
International Nuclear Information System (INIS)
Still, Susanne; Kondor, Imre
2010-01-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
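The role of the L2 regularizer as a diversification pressure can be sketched in closed form for a minimum-variance stand-in objective (expected shortfall, as used in the paper, requires scenario-based optimization and is omitted here):

```python
import numpy as np

def regularized_min_variance(Sigma, lam):
    """Ridge-regularized minimum-variance weights under the budget
    constraint 1'w = 1 (an illustrative stand-in for the paper's
    expected-shortfall objective):
        w = argmin w' Sigma w + lam ||w||^2   s.t.  sum(w) = 1
    Closed form: w is proportional to (Sigma + lam I)^{-1} 1."""
    n = Sigma.shape[0]
    raw = np.linalg.solve(Sigma + lam * np.eye(n), np.ones(n))
    return raw / raw.sum()

# A sample covariance with one spuriously "low-risk" asset, the kind of
# estimation artifact that destabilizes unregularized optimization.
Sigma = np.diag([0.02, 0.5, 0.5, 0.5])
w_weak   = regularized_min_variance(Sigma, lam=0.0)
w_strong = regularized_min_variance(Sigma, lam=10.0)

assert abs(w_weak.sum() - 1) < 1e-9 and abs(w_strong.sum() - 1) < 1e-9
# Diversification pressure: stronger regularization spreads the weights
# out (a smaller ||w||^2 means weights closer to equal).
assert np.sum(w_strong**2) < np.sum(w_weak**2)
```

The unregularized solution piles weight onto the apparently low-risk asset; the penalty pulls the portfolio back toward equal weights, which is exactly the stability-improving diversification the abstract describes.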
Regular Single Valued Neutrosophic Hypergraphs
Directory of Open Access Journals (Sweden)
Muhammad Aslam Malik
2016-12-01
In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on the completeness of single valued neutrosophic hypergraphs.
The geometry of continuum regularization
International Nuclear Information System (INIS)
Halpern, M.B.
1987-03-01
This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations
Synchronizing noisy nonidentical oscillators by transient uncoupling
Energy Technology Data Exchange (ETDEWEB)
Tandon, Aditya, E-mail: adityat@iitk.ac.in; Mannattil, Manu, E-mail: mmanu@iitk.ac.in [Department of Physics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh 208016 (India); Schröder, Malte, E-mail: malte@nld.ds.mpg.de [Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany); Timme, Marc, E-mail: timme@nld.ds.mpg.de [Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany); Department of Physics, Technical University of Darmstadt, 64289 Darmstadt (Germany); Chakraborty, Sagar, E-mail: sagarc@iitk.ac.in [Department of Physics, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh 208016 (India); Mechanics and Applied Mathematics Group, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh 208016 (India)
2016-09-15
Synchronization is the process of achieving identical dynamics among coupled identical units. If the units are different from each other, their dynamics cannot become identical; yet, after transients, there may emerge a functional relationship between them—a phenomenon termed “generalized synchronization.” Here, we show that the concept of transient uncoupling, recently introduced for synchronizing identical units, also supports generalized synchronization among nonidentical chaotic units. Generalized synchronization can be achieved by transient uncoupling even when it is impossible by regular coupling. We furthermore demonstrate that transient uncoupling stabilizes synchronization in the presence of common noise. Transient uncoupling works best if the units stay uncoupled whenever the driven orbit visits regions that are locally diverging in its phase space. Thus, to select a favorable uncoupling region, we propose an intuitive method that measures the local divergence at the phase points of the driven unit's trajectory by linearizing the flow and subsequently suppresses the divergence by uncoupling.
Crowdsourced earthquake early warning
Minson, Sarah E.; Brooks, Benjamin A.; Glennie, Craig L.; Murray, Jessica R.; Langbein, John O.; Owen, Susan E.; Heaton, Thomas H.; Iannucci, Robert A.; Hauser, Darren L.
2015-01-01
Earthquake early warning (EEW) can reduce harm to people and infrastructure from earthquakes and tsunamis, but it has not been implemented in most high earthquake-risk regions because of prohibitive cost. Common consumer devices such as smartphones contain low-cost versions of the sensors used in EEW. Although less accurate than scientific-grade instruments, these sensors are globally ubiquitous. Through controlled tests of consumer devices, simulation of an Mw (moment magnitude) 7 earthquake on California’s Hayward fault, and real data from the Mw 9 Tohoku-oki earthquake, we demonstrate that EEW could be achieved via crowdsourcing.
The 2008 Wenchuan Earthquake and the Rise and Fall of Earthquake Prediction in China
Chen, Q.; Wang, K.
2009-12-01
Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck the Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in the densely populated region of the country and the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976 that killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save life in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less
Transient pseudohypoaldosteronism
Directory of Open Access Journals (Sweden)
Stajić Nataša
2011-01-01
Introduction. Infants with urinary tract malformations (UTM) presenting with urinary tract infection (UTI) are prone to develop transient type 1 pseudohypoaldosteronism (THPA1). Objective. Report on a patient series with characteristics of THPA1, UTM and/or UTI, with suggestions for diagnosis and therapy. Methods. Patients underwent blood and urine electrolyte and acid-base analysis and measurement of serum aldosterone levels and plasma renin activity; urinalysis, urine culture and renal ultrasound were done, and medical and/or surgical therapy was instituted. Results. Hyponatraemia (120.9±5.8 mmol/L), hyperkalaemia (6.9±0.9 mmol/L), metabolic acidosis (plasma bicarbonate 11±1.4 mmol/L) and a rise in serum creatinine levels (145±101 μmol/L) were associated with inappropriately high urinary sodium (51.3±17.5 mmol/L) and low potassium (14.1±5.9 mmol/L) excretion. Elevated plasma aldosterone concentrations (170.4±100.5 ng/dL) and very high plasma aldosterone to potassium ratios (25.2±15.6), together with diminished urinary K/Na values (0.31±0.19), indicated tubular resistance to aldosterone. After institution of appropriate medical and/or surgical therapy, serum electrolytes, creatinine and acid-base balance were normalized. Imaging studies showed ureteropyelic or ureterovesical junction obstruction in 3 and 2 patients, respectively, posterior urethral valves in 3, and a normal urinary tract in 1 patient. To our knowledge, this is the first report on THPA1 in the Serbian literature. Conclusion. Male infants with hyponatraemia, hyperkalaemia and metabolic acidosis must have their urine examined and renal ultrasound done in order to avoid both underdiagnosis of THPA1 and inappropriate medication.
Earthquake forecasting and warning
Energy Technology Data Exchange (ETDEWEB)
Rikitake, T.
1983-01-01
This review briefly describes two other books on the same subject either written or partially written by Rikitake. In this book, the status of earthquake prediction efforts in Japan, China, the Soviet Union, and the United States is updated. An overview of some of the organizational, legal, and societal aspects of earthquake prediction in these countries is presented, and scientific findings on precursory phenomena are included. A summary of the circumstances surrounding the 1975 Haicheng earthquake, the 1976 Tangshan earthquake, and the 1976 Songpan-Pingwu earthquake (all with magnitudes ≥ 7.0) in China, and the 1978 Izu-Oshima earthquake in Japan, is presented. This book fails to comprehensively summarize recent advances in earthquake prediction research.
Annotation of Regular Polysemy
DEFF Research Database (Denmark)
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words… and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empiric evidence for the underspecified sense, even…
Regularity of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht
2010-01-01
"Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t
Regularities of radiation heredity
International Nuclear Information System (INIS)
Skakov, M.K.; Melikhov, V.D.
2001-01-01
Regularities of radiation heredity in metals and alloys are analyzed. It is concluded that irradiation causes thermodynamically irreversible changes in the structure of materials. Possible ways in which radiation effects are inherited through high-temperature transformations in the materials are proposed. The phenomenon of radiation heredity may be put to practical use to control the structure of liquid metal and, consequently, the structure of the ingot, via preliminary radiation treatment of the charge. Concentration microheterogeneities in the material defect structure induced by preliminary irradiation represent the genetic factor of radiation heredity.
Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computing a temporally local frequency: (1) a stabilized instantaneous frequency using the theory of the analytic signal, and (2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979), as modified by Fomel (2007), and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or the Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
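Both estimators can be sketched with numpy alone on a clean test tone; the stabilization/regularization step that the abstract emphasizes is omitted here, since a noise-free synthetic signal needs none:

```python
import numpy as np

fs = 1000.0                              # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)           # 50 Hz test tone, whole cycles

# Analytic signal via the FFT (a numpy-only stand-in for scipy.signal.hilbert):
# zero the negative frequencies, double the positive ones.
X = np.fft.fft(x)
h = np.zeros(len(x))
h[0] = 1; h[1:len(x) // 2] = 2; h[len(x) // 2] = 1
z = np.fft.ifft(X * h)

# Method 1: instantaneous frequency from the derivative of the phase.
phase = np.unwrap(np.angle(z))
f_inst = np.diff(phase) * fs / (2 * np.pi)

# Method 2: power-centroid (dominant) frequency of the spectrum.
freqs = np.fft.rfftfreq(len(x), 1 / fs)
P = np.abs(np.fft.rfft(x))**2
f_cent = np.sum(freqs * P) / np.sum(P)

assert abs(np.median(f_inst) - 50) < 1.0
assert abs(f_cent - 50) < 1.0
```

On real seismic traces the division by the (near-singular) envelope or spectral power is what requires the regularization the authors study; on this clean tone both estimators agree at 50 Hz.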
Global observation of Omori-law decay in the rate of triggered earthquakes
Parsons, T.
2001-12-01
Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts between ~7-11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
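The modified Omori law that the triggered events obey can be written down directly; the parameter values below are illustrative, not fitted to the CMT catalog:

```python
import numpy as np

def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Modified Omori law for the aftershock/triggered-event rate:
        n(t) = K / (c + t)^p
    with t the time since the main shock (days here) and K, c, p
    illustrative parameter values."""
    return K / (c + t)**p

t = np.array([0.0, 1.0, 10.0, 100.0])
rates = omori_rate(t)

# The rate decays monotonically toward the background level...
assert np.all(np.diff(rates) < 0)
# ...and for p = 1 it falls roughly tenfold per decade once t >> c.
assert 9 < omori_rate(10.0) / omori_rate(100.0) < 11
```

Folding this transient decay into probability calculations, as the abstract suggests, amounts to adding n(t) on top of the background rate for ~7-11 years after each large event.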
Encyclopedia of earthquake engineering
Kougioumtzoglou, Ioannis; Patelli, Edoardo; Au, Siu-Kui
2015-01-01
The Encyclopedia of Earthquake Engineering is designed to be the authoritative and comprehensive reference covering all major aspects of the science of earthquake engineering, specifically focusing on the interaction between earthquakes and infrastructure. The encyclopedia comprises approximately 265 contributions. Since earthquake engineering deals with the interaction between earthquake disturbances and the built infrastructure, the emphasis is on basic design processes important to both non-specialists and engineers so that readers become suitably well-informed without needing to deal with the details of specialist understanding. The content of this encyclopedia provides technically inclined and informed readers about the ways in which earthquakes can affect our infrastructure and how engineers would go about designing against, mitigating and remediating these effects. The coverage ranges from buildings, foundations, underground construction, lifelines and bridges, roads, embankments and slopes. The encycl...
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
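The convexity-preserving idea can be sketched in the scalar case with the minimax-concave (MC) penalty; the condition a ≤ 1/λ and the firm-threshold solution are standard for this penalty, though the thesis's exact parameterization may differ:

```python
import numpy as np

def mc_penalty(x, a):
    """Minimax-concave (MC) penalty: a parameterized non-convex
    sparsity regularizer; a >= 0 controls the degree of non-convexity
    (a -> 0 recovers |x|)."""
    x = np.abs(x)
    return np.where(x <= 1 / a, x - 0.5 * a * x**2, 0.5 / a)

def scalar_cost(x, y, lam, a):
    """Scalar denoising cost 0.5 (y - x)^2 + lam * phi(x; a),
    which stays convex as long as a <= 1/lam."""
    return 0.5 * (y - x)**2 + lam * mc_penalty(x, a)

def firm(y, lam, a):
    """Minimizer of the scalar cost (the 'firm' threshold)."""
    s, ay = np.sign(y), np.abs(y)
    if ay <= lam:
        return 0.0
    if ay <= 1 / a:
        return float(s * (ay - lam) / (1 - lam * a))
    return float(y)

lam, a = 1.0, 0.9            # a <= 1/lam keeps the total cost convex
xs = np.linspace(-5, 5, 2001)
f = scalar_cost(xs, 2.0, lam, a)
# Numerical convexity check: second differences are non-negative.
assert np.all(np.diff(f, 2) > -1e-9)

# Unlike the soft threshold (which returns y - lam), the firm threshold
# leaves large values unbiased.
assert firm(5.0, lam, a) == 5.0
```

This is the scalar kernel of the thesis's program: a non-convex regularizer whose non-convexity is capped so the overall objective remains convex, giving sparse estimates without the ℓ1 amplitude bias.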
Miller, G. J.
1976-01-01
The earthquake that struck the island of Guam on November 1, 1975, at 11:17 a.m. had many unique aspects, not the least of which was the experience of an earthquake of 6.25 Richter magnitude while at 40 feet. My wife Bonnie, a fellow diver, Greg Guzman, and I were diving at Gabgab Beach in the outer harbor of Apra Harbor, engaged in underwater photography, when the earthquake struck.
Earthquakes and economic growth
Fisker, Peter Simonsen
2012-01-01
This study explores the economic consequences of earthquakes. In particular, it is investigated how exposure to earthquakes affects economic growth both across and within countries. The key result of the empirical analysis is that while there are no observable effects at the country level, earthquake exposure significantly decreases 5-year economic growth at the local level. Areas at lower stages of economic development suffer harder in terms of economic growth than richer areas. In addition,...
Effective field theory dimensional regularization
International Nuclear Information System (INIS)
Lehmann, Dirk; Prezeau, Gary
2002-01-01
A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.
Current interruption transients calculation
Peelo, David F
2014-01-01
Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. The book provides an original, detailed and practical description of current interruption transients, their origins, the circuits involved, and how they can be calculated.
2010-12-07
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...
Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity
Thomas, Abey E.
2018-05-01
Bridges with unequal column heights exhibit one of the main irregularities in bridge design, particularly when negotiating steep valleys, making the bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they perform in a regular fashion, i.e. the capacity of each column is utilized evenly. This type of behaviour is often missing when the column heights are unequal along the length of the bridge, forcing the short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those reported in the literature, is assessed for bridges designed as per the Indian Standards.
Selection of regularization parameter for l1-regularized damage detection
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
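A minimal numerical sketch of the discrepancy-principle strategy described above (the data, noise level, and basic ISTA solver here are invented for illustration, not the paper's beam or frame examples): scan candidate regularization parameters and keep the one whose residual variance best matches the known measurement-noise variance.

```python
import numpy as np

def ista(A, y, lam, n_iter=300):
    # Basic iterative soft-thresholding for the l1-regularized problem.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[5, 20]] = [1.0, -2.0]
sigma = 0.1                                   # known measurement-noise level
y = A @ x_true + sigma * rng.standard_normal(80)

# Discrepancy principle: pick the lambda whose residual variance is
# closest to the noise variance sigma**2.
lams = np.logspace(-3, 1, 30)
resid_var = [np.mean((A @ ista(A, y, l) - y) ** 2) for l in lams]
lam_star = lams[int(np.argmin(np.abs(np.array(resid_var) - sigma ** 2)))]
```

In practice, as the abstract notes, a whole range of parameters near `lam_star` identifies the damage equally well, so the scan doubles as a sensitivity check.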
OMG Earthquake! Can Twitter improve earthquake response?
Earle, P. S.; Guy, M.; Ostrum, C.; Horvath, S.; Buckmaster, R. A.
2009-12-01
The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
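The burst behavior described above (a background below one keyword tweet per hour jumping to roughly 150 per minute) suggests a simple sliding-window rate trigger. The sketch below is a hypothetical illustration, not the USGS detection algorithm:

```python
from collections import deque

def spike_detector(timestamps, window_s=60.0, background_per_hour=1.0, factor=10.0):
    # Return the first time at which the trailing-window tweet count
    # exceeds `factor` times the expected background count (at least 1).
    expected = background_per_hour * window_s / 3600.0
    threshold = max(1.0, factor * expected)
    recent = deque()
    for t in timestamps:                 # seconds, assumed sorted
        recent.append(t)
        while recent[0] < t - window_s:  # drop tweets older than the window
            recent.popleft()
        if len(recent) > threshold:
            return t
    return None

# Sparse background chatter, then a burst right after a quake at t = 3600 s.
times = [0.0, 1000.0, 2000.0] + [3600.0 + 0.5 * i for i in range(20)]
first_alert = spike_detector(times)
```

With the described contrast between background and burst rates, even this crude trigger fires within the first second of the burst; the real difficulty, as the abstract notes, lies in the lack of quantitative information in the tweets themselves.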
Campbell, M. R.; Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.
2017-12-01
Much recent media attention focuses on Cascadia's earthquake hazard. A widely cited magazine article starts "An earthquake will destroy a sizable portion of the coastal Northwest. The question is when." Stories include statements like "a massive earthquake is overdue," "in the next 50 years, there is a 1-in-10 chance a 'really big one' will erupt," or "the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three." These lead students to ask where the quoted probabilities come from and what they mean. These probability estimates involve two primary choices: what data are used to describe when past earthquakes happened, and what models are used to forecast when future earthquakes will happen. The data come from a 10,000-year record of large paleoearthquakes compiled from subsidence data on land and turbidites, offshore deposits recording submarine slope failure. Earthquakes seem to have happened in clusters of four or five events, separated by gaps. Earthquakes within a cluster occur more frequently and regularly than in the full record. Hence the next earthquake is more likely if we assume that we are in the recent cluster that started about 1700 years ago than if we assume the cluster is over. Students can explore how changing assumptions drastically changes probability estimates using easy-to-write spreadsheets. Insight can also come from baseball analogies. The cluster issue is like deciding whether to assume that a hitter's performance in the next game is better described by his lifetime record or by the past few games, since he may be hitting unusually well or in a slump. The other big choice is whether to assume that the probability of an earthquake is constant with time, or is small immediately after one occurs and then grows with time. This is like whether to assume that a player's performance is the same from year to year, or changes over their career. Thus saying "the chance of
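The spreadsheet exercise described above can be mimicked in a few lines. The recurrence intervals below are invented placeholders, not the paper's paleoseismic estimates, but they show how the cluster assumption moves a 50-year probability under a time-independent model:

```python
import math

def prob_within(t_years, mean_recurrence):
    # Time-independent (Poisson) model: P(event within t) = 1 - exp(-t/mu).
    return 1.0 - math.exp(-t_years / mean_recurrence)

full_record_mu = 500.0   # assumed mean spacing over the whole record
in_cluster_mu = 300.0    # assumed mean spacing inside a cluster

p_full = prob_within(50.0, full_record_mu)      # roughly 1 in 10
p_cluster = prob_within(50.0, in_cluster_mu)    # noticeably higher
```

Swapping in a time-dependent (e.g. renewal) model in place of the Poisson assumption is the second choice the abstract describes, and changes the answer again.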
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
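At its core, EMR replaces a single graph Laplacian with a convex combination of candidate Laplacians built under different hyperparameters. The toy sketch below (uniform simplex weights and invented data) shows only that composition step, not the joint learning of weights and learner described in the abstract:

```python
import numpy as np

def knn_laplacian(X, k):
    # Unnormalized graph Laplacian of a symmetrized k-nearest-neighbor graph.
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:   # skip self (distance 0)
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 2))
candidates = [knn_laplacian(X, k) for k in (3, 5, 8)]   # hyperparameter grid
mu = np.full(len(candidates), 1.0 / len(candidates))    # simplex weights
L_comp = sum(m * L for m, L in zip(mu, candidates))     # composite manifold
```

Because each candidate is a valid graph Laplacian and the weights lie on the simplex, the composite `L_comp` keeps the Laplacian properties (symmetry, zero row sums) that the SSL regularizer relies on.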
National Clearinghouse for Educational Facilities, 2008
2008-01-01
Earthquakes are low-probability, high-consequence events. Though they may occur only once in the life of a school, they can have devastating, irreversible consequences. Moderate earthquakes can cause serious damage to building contents and non-structural building systems, serious injury to students and staff, and disruption of building operations.…
2004-01-01
Following their request for help from members of international organisations, the Permanent Mission of the Islamic Republic of Iran has given the following bank account number, where you can donate money to help the victims of the Bam earthquake. Re: Bam earthquake 235 - UBS 311264.35L Bubenberg Platz 3001 BERN
Tradable Earthquake Certificates
Woerdman, Edwin; Dulleman, Minne
2018-01-01
This article presents a market-based idea to compensate for earthquake damage caused by the extraction of natural gas and applies it to the case of Groningen in the Netherlands. Earthquake certificates give homeowners a right to yearly compensation for both property damage and degradation of living
Historic Eastern Canadian earthquakes
International Nuclear Information System (INIS)
Asmis, G.J.K.; Atchinson, R.J.
1981-01-01
Nuclear power plants licensed in Canada have been designed to resist earthquakes: not all plants, however, have been explicitly designed to the same level of earthquake induced forces. Understanding the nature of strong ground motion near the source of the earthquake is still very tentative. This paper reviews historical and scientific accounts of the three strongest earthquakes - St. Lawrence (1925), Temiskaming (1935), Cornwall (1944) - that have occurred in Canada in 'modern' times, field studies of near-field strong ground motion records and their resultant damage or non-damage to industrial facilities, and numerical modelling of earthquake sources and resultant wave propagation to produce accelerograms consistent with the above historical record and field studies. It is concluded that for future construction of NPP's near-field strong motion must be explicitly considered in design
Learning Earthquake Design and Construction–23. Why are ...
Indian Academy of Sciences (India)
RC shafts around the elevator core of buildings also act as shear walls, and should be taken advantage of to resist earthquake forces. Reinforcement Bars in RC Walls: Steel reinforcing bars are to be provided in walls in regularly spaced vertical and ... (Resonance, November 2005)
Turkish Compulsory Earthquake Insurance and "Istanbul Earthquake
Durukal, E.; Sesetyan, K.; Erdik, M.
2009-04-01
The city of Istanbul will likely experience substantial direct and indirect losses as a result of a future large (M=7+) earthquake with an annual probability of occurrence of about 2%. This paper dwells on the expected building losses in terms of probable maximum and average annualized losses and discusses the results from the perspective of the compulsory earthquake insurance scheme operational in the country. The TCIP system is essentially designed to operate in Turkey with sufficient penetration to enable the accumulation of funds in the pool. Today, with only 20% national penetration, and approximately one-half of all policies in highly earthquake-prone areas (one-third in Istanbul), the system exhibits signs of adverse selection, inadequate premium structure and insufficient funding. Our findings indicate that the national compulsory earthquake insurance pool in Turkey will face difficulties in covering incurred building losses in Istanbul after a large earthquake. The annualized earthquake losses in Istanbul are between 140-300 million. Even if we assume that the deductible is raised to 15%, the earthquake losses that need to be paid after a large earthquake in Istanbul will be about 2.5 billion, somewhat above the current capacity of the TCIP. Thus, a modification of the system for the insured in Istanbul (or the Marmara region) is necessary. This may mean an increase in the premium and deductible rates, purchase of larger re-insurance covers and development of a claim processing system. Also, to avoid adverse selection, the penetration rates elsewhere in Turkey need to be increased substantially. A better model would be the introduction of parametric insurance for Istanbul. Under such a model the losses would not be indemnified, but would be calculated directly on the basis of indexed ground motion levels and damages. The immediate improvement of a parametric insurance model over the existing one will be the elimination of the claim processing
Earthquake recurrence models fail when earthquakes fail to reset the stress field
Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.
2012-01-01
Parkfield's regularly occurring M6 mainshocks, about every 25 years, have over two decades stoked seismologists' hopes to successfully predict an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yields a substantial change in b-values.
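For context, b-values like those mapped along the Parkfield segment are typically estimated from microseismicity with the Aki/Utsu maximum-likelihood formula; the catalog below is made up purely to show the calculation:

```python
import math

def b_value(mags, mc, dm=0.1):
    # Aki/Utsu maximum-likelihood estimate of the Gutenberg-Richter b-value
    # for magnitudes at or above completeness mc, with bin width dm:
    #   b = log10(e) / (mean(M) - (mc - dm/2))
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Invented sample catalog above a completeness magnitude of 1.0.
mags = [1.2, 1.5, 1.8, 2.1, 1.3, 1.6, 2.4, 1.1, 1.9, 1.4]
b = b_value(mags, mc=1.0)
```

Spatial stability of such estimates through the seismic cycle is what the study uses as a proxy for an unchanging stress field.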
Adaptive Regularization of Neural Classifiers
DEFF Research Database (Denmark)
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method.
Transient drainage summary report
International Nuclear Information System (INIS)
1996-09-01
This report summarizes the history of transient drainage issues on the Uranium Mill Tailings Remedial Action (UMTRA) Project. It defines and describes the UMTRA Project disposal cell transient drainage process and chronicles UMTRA Project treatment of the transient drainage phenomenon. Section 4.0 includes a conceptual cross section of each UMTRA Project disposal site and summarizes design and construction information, the ground water protection strategy, and the potential for transient drainage
Earthquakes, November-December 1977
Person, W.J.
1978-01-01
Two major earthquakes occurred in the last 2 months of the year. A magnitude 7.0 earthquake struck San Juan Province, Argentina, on November 23, causing fatalities and damage. The second major earthquake was a magnitude 7.0 in the Bonin Islands region, an unpopulated area. On December 19, Iran experienced a destructive earthquake, which killed over 500.
Earthquakes, September-October 1986
Person, W.J.
1987-01-01
There was one great earthquake (8.0 and above) during this reporting period, in the South Pacific in the Kermadec Islands. There were no major earthquakes (7.0-7.9), but earthquake-related deaths were reported in Greece and in El Salvador. There were no destructive earthquakes in the United States.
PSH Transient Simulation Modeling
Energy Technology Data Exchange (ETDEWEB)
Muljadi, Eduard [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-12-21
PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.
Earthquake hazard assessment and small earthquakes
International Nuclear Information System (INIS)
Reiter, L.
1987-01-01
The significance of small earthquakes and their treatment in nuclear power plant seismic hazard assessment is an issue which has received increased attention over the past few years. In probabilistic studies, sensitivity studies showed that the choice of the lower bound magnitude used in hazard calculations can have a larger than expected effect on the calculated hazard. Of particular interest is the fact that some of the difference in seismic hazard calculations between the Lawrence Livermore National Laboratory (LLNL) and Electric Power Research Institute (EPRI) studies can be attributed to this choice. The LLNL study assumed a lower bound magnitude of 3.75 while the EPRI study assumed a lower bound magnitude of 5.0. The magnitudes used were assumed to be body wave magnitudes or their equivalents. In deterministic studies recent ground motion recordings of small to moderate earthquakes at or near nuclear power plants have shown that the high frequencies of design response spectra may be exceeded. These exceedances became important issues in the licensing of the Summer and Perry nuclear power plants. At various times in the past particular concerns have been raised with respect to the hazard and damage potential of small to moderate earthquakes occurring at very shallow depths. In this paper a closer look is taken at these issues. Emphasis is given to the impact of lower bound magnitude on probabilistic hazard calculations and the historical record of damage from small to moderate earthquakes. Limited recommendations are made as to how these issues should be viewed
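The sensitivity to the lower-bound magnitude can be seen directly from the Gutenberg-Richter relation. With illustrative a and b values (assumed here, not those of the LLNL or EPRI studies), moving the cutoff from 5.0 down to 3.75 admits more than an order of magnitude more events into the hazard integral:

```python
def annual_rate(m_min, a=4.0, b=1.0):
    # Gutenberg-Richter recurrence: log10 N(M >= m) = a - b*m
    # (a and b are assumed illustrative values).
    return 10.0 ** (a - b * m_min)

rate_llnl = annual_rate(3.75)   # lower-bound magnitude used in the LLNL study
rate_epri = annual_rate(5.0)    # lower-bound magnitude used in the EPRI study
ratio = rate_llnl / rate_epri   # factor by which the event count grows
```

With b = 1, the ratio is 10**(5.0 - 3.75) ≈ 17.8, which is why the choice of lower bound can have a larger than expected effect on the calculated hazard even though the added events are individually small.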
The Challenge of Centennial Earthquakes to Improve Modern Earthquake Engineering
International Nuclear Information System (INIS)
Saragoni, G. Rodolfo
2008-01-01
The recent commemoration of the centennial of the San Francisco and Valparaiso 1906 earthquakes has provided an opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus Messina-Reggio Calabria 1908, had a strong impact on the birth and development of earthquake engineering. The study of the seismic performance of some buildings still standing today that survived centennial earthquakes represents a challenge to better understand the limitations of the earthquake design methods in use. Of the three centennial earthquakes considered, only the Valparaiso 1906 earthquake has been repeated, as the Central Chile 1985, Ms = 7.8 earthquake. In this paper a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is carried out in the neighborhood of Valparaiso harbor. In this study, the only three centennial buildings of 3 stories that survived both earthquakes almost undamaged were identified. Since accelerograms for the 1985 earthquake were recorded at El Almendral soil conditions as well as on rock, the vulnerability analysis of these buildings is done considering instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. It is therefore recommended that more suitable instrumental parameters, such as the destructiveness potential factor, be used in the future to describe earthquake demand.
Kolvankar, V. G.
2013-12-01
During a study conducted to find the effect of Earth tides on the occurrence of earthquakes, for small areas [typically 1000 km × 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of earthquake location - longitude of the Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences of magnitude 2.0 and above. This study conclusively proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]: The left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30, Long. +95.98, Mb 9.0, EQ count 376). The right-hand figure provides an earthquake plot of (EMD+SEM) vs GMT timings for the same data. All 376 events including the main event faithfully follow the straight-line curve.
Dynamic strains for earthquake source characterization
Barbour, Andrew J.; Crowell, Brendan W
2017-01-01
Strainmeters measure elastodynamic deformation associated with earthquakes over a broad frequency band, with detection characteristics that complement traditional instrumentation, but they are commonly used to study slow transient deformation along active faults and at subduction zones, for example. Here, we analyze dynamic strains at Plate Boundary Observatory (PBO) borehole strainmeters (BSM) associated with 146 local and regional earthquakes from 2004–2014, with magnitudes from M 4.5 to 7.2. We find that peak values in seismic strain can be predicted from a general regression against distance and magnitude, with improvements in accuracy gained by accounting for biases associated with site–station effects and source–path effects, the latter exhibiting the strongest influence on the regression coefficients. To account for the influence of these biases in a general way, we include crustal‐type classifications from the CRUST1.0 global velocity model, which demonstrates that high‐frequency strain data from the PBO BSM network carry information on crustal structure and fault mechanics: earthquakes nucleating offshore on the Blanco fracture zone, for example, generate consistently lower dynamic strains than earthquakes around the Sierra Nevada microplate and in the Salton trough. Finally, we test our dynamic strain prediction equations on the 2011 M 9 Tohoku‐Oki earthquake, specifically continuous strain records derived from triangulation of 137 high‐rate Global Navigation Satellite System Earth Observation Network stations in Japan. Moment magnitudes inferred from these data and the strain model are in agreement when Global Positioning System subnetworks are unaffected by spatial aliasing.
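The "general regression against distance and magnitude" can be sketched as an ordinary least-squares fit of log peak strain on magnitude and log distance. The functional form, coefficients, and data below are synthetic stand-ins, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.uniform(4.5, 7.2, 200)            # magnitudes, as in the PBO sample
R = rng.uniform(10.0, 500.0, 200)         # distance in km (invented)
true = [-9.0, 1.0, -1.5]                  # assumed regression coefficients
log_strain = (true[0] + true[1] * M + true[2] * np.log10(R)
              + 0.1 * rng.standard_normal(200))   # synthetic observations

# Regress log10 peak strain on magnitude and log distance.
G = np.column_stack([np.ones_like(M), M, np.log10(R)])
coef, *_ = np.linalg.lstsq(G, log_strain, rcond=None)
```

The site-station and source-path biases the abstract describes would enter this framework as additional per-station and per-path terms in the design matrix `G`.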
The 2012 Mw5.6 earthquake in Sofia seismogenic zone - is it a slow earthquake
Raykova, Plamena; Solakov, Dimcho; Slavcheva, Krasimira; Simeonova, Stela; Aleksandrova, Irena
2017-04-01
Recently our understanding of tectonic faulting has been shaken by the discoveries of seismic tremor, low-frequency earthquakes, slow slip events, and other models of fault slip. These phenomena represent models of failure that were thought to be non-existent and theoretically impossible only a few years ago. Slow earthquakes are seismic phenomena in which the rupture of geological faults in the earth's crust occurs gradually without creating strong tremors. Despite the growing number of observations of slow earthquakes, their origin remains unresolved. Studies show that the duration of slow earthquakes ranges from a few seconds to a few hundred seconds. Whereas the regular earthquakes with which most people are familiar release a burst of built-up stress in seconds, slow earthquakes release energy in ways that do little damage. This study focuses on the characteristics of the Mw5.6 earthquake that occurred in the Sofia seismic zone on May 22nd, 2012. The Sofia area is the most populated, industrial and cultural region of Bulgaria, and faces considerable earthquake risk. The Sofia seismic zone is located in south-western Bulgaria, an area with pronounced tectonic activity and proven crustal movement. In the 19th century the city of Sofia (situated in the centre of the Sofia seismic zone) experienced two strong earthquakes with epicentral intensity of 10 MSK. During the 20th century the strongest event in the vicinity of the city of Sofia was the 1917 earthquake with MS=5.3 (I0=7-8 MSK64). The 2012 quake occurred in an area characterized by a long quiescence (of 95 years) for moderate events. Moreover, a reduced number of small earthquakes have been registered in the recent past. The Mw5.6 earthquake was largely felt on the territory of Bulgaria and in neighbouring countries. No casualties or severe injuries have been reported. Mostly moderate damage was observed in the cities of Pernik and Sofia and their surroundings. These observations could be assumed indicative for a
di Giovambattista, R.; Tyupkin, Yu
The cyclic migration of weak earthquakes (M 2.2) which occurred during the year prior to the October 15, 1996 (M = 4.9) Reggio Emilia earthquake is discussed in this paper. The onset of this migration was associated with the occurrence of the October 10, 1995 (M = 4.8) Lunigiana earthquake about 90 km southwest from the epicenter of the Reggio Emilia earthquake. At least three series of earthquakes migrating from the epicentral area of the Lunigiana earthquake in the northeast direction were observed. The migration of earthquakes of the first series terminated at a distance of about 30 km from the epicenter of the Reggio Emilia earthquake. The earthquake migration of the other two series halted at about 10 km from the Reggio Emilia epicenter. The average rate of earthquake migration was about 200-300 km/year, while the time of recurrence of the observed cycles varied from 68 to 178 days. Weak earthquakes migrated along the transversal fault zones and sometimes jumped from one fault to another. A correlation between the migrating earthquakes and tidal variations is analysed. We discuss the hypothesis that the analyzed area is in a state of stress approaching the limit of the long-term durability of crustal rocks and that the observed cyclic migration is a result of a combination of a more or less regular evolution of tectonic and tidal variations.
2010-09-02
... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
Fighting and preventing post-earthquake fires in nuclear power plant
International Nuclear Information System (INIS)
Lu Xuefeng; Zhang Xin
2011-01-01
Nuclear power plant post-earthquake fires cause not only personnel injury and severe economic loss, but also serious environmental pollution. At present, nuclear power is undergoing rapid development in China. Considering the earthquake-prone character of the country, it is of great engineering importance to investigate nuclear power plant post-earthquake fires. This article analyzes the causes, influential factors and development characteristics of nuclear power plant post-earthquake fires in detail, and summarizes three principles that should be followed in fighting and preventing such fires: solving problems in order of importance and urgency, isolation prior to prevention, and immediate repair with regular patrol. Three aspects that deserve particular attention in fighting and preventing post-earthquake fires are also pointed out. (authors)
Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes
International Nuclear Information System (INIS)
Bergman, W.; Elliott, J.; Wilson, K.
1995-01-01
The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed that 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% ± 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system.
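The significance claim can be checked with a quick z-score against the 1980-1993 baseline the abstract reports (a rough normal approximation for illustration, not the authors' statistical test):

```python
def z_score(observed_pct, mean_pct=3.3, sd_pct=1.7):
    # How many standard deviations the post-earthquake leak rate sits
    # above the 1980-1993 baseline (values taken from the abstract).
    return (observed_pct - mean_pct) / sd_pct

z_1980 = z_score(8.0)   # leak rate in the year after the 1980 earthquake
z_1989 = z_score(4.1)   # leak rate in the year after the 1989 earthquake
```

The 1980 rate sits nearly three standard deviations above the baseline while the 1989 rate is well within it, matching the abstract's conclusion.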
Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes
Energy Technology Data Exchange (ETDEWEB)
Bergman, W.; Elliott, J.; Wilson, K. [Lawrence Livermore National Laboratory, CA (United States)
1995-02-01
The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A magnitude 5.9 earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A magnitude 7.0 earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results of the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine whether the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed that 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% +/- 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system.
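The leak-rate comparison in the abstract above can be sanity-checked with a quick standard-score calculation. This is an illustrative sketch only: the underlying filter counts are not given, so the 1980-1993 average and its quoted spread are treated as a normal baseline, which is an assumption.

```python
# Rough significance check of the post-earthquake HEPA leak rates quoted in
# the abstract. The 1980-1993 mean and +/- spread are treated as a normal
# baseline (an assumption; the raw filter counts are not reported).
baseline_mean = 3.3   # % of filters developing leaks, 1980-1993 average
baseline_sd = 1.7     # % quoted spread

def z_score(observed_pct):
    """Standard score of an annual leak percentage against the baseline."""
    return (observed_pct - baseline_mean) / baseline_sd

z_1980 = z_score(8.0)  # year following the 1980 earthquake
z_1989 = z_score(4.1)  # year following the 1989 earthquake
print(round(z_1980, 2), round(z_1989, 2))  # prints: 2.76 0.47
```

Consistent with the abstract's conclusion, the 1980 rate sits well over two standard deviations above the baseline, while the 1989 rate is well within it.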
Earthquake Ground Motion Selection
2012-05-01
Nonlinear analyses of soils, structures, and soil-structure systems offer the potential for more accurate characterization of geotechnical and structural response under strong earthquake shaking. The increasing use of advanced performance-based desig...
1988 Spitak Earthquake Database
National Oceanic and Atmospheric Administration, Department of Commerce — The 1988 Spitak Earthquake database is an extensive collection of geophysical and geological data, maps, charts, images and descriptive text pertaining to the...
Electromagnetic Manifestation of Earthquakes
Uvarov Vladimir
2017-01-01
A joint analysis of recordings of the electrical component of the Earth's natural electromagnetic field and the catalog of earthquakes in Kamchatka in 2013 identified earthquake-associated unipolar pulses of constant amplitude, whose activity is closely correlated with the energy of the electromagnetic field. As an explanation, a hypothesis about the cooperative character of these pulses is proposed.
Electromagnetic Manifestation of Earthquakes
Directory of Open Access Journals (Sweden)
Uvarov Vladimir
2017-01-01
A joint analysis of recordings of the electrical component of the Earth's natural electromagnetic field and the catalog of earthquakes in Kamchatka in 2013 identified earthquake-associated unipolar pulses of constant amplitude, whose activity is closely correlated with the energy of the electromagnetic field. As an explanation, a hypothesis about the cooperative character of these pulses is proposed.
Charles Darwin's earthquake reports
Galiev, Shamil
2010-05-01
As 2009 marked the 200th anniversary of Darwin's birth, it also marked 170 years since the publication of his book Journal of Researches. During the voyage Darwin landed at Valdivia and Concepcion, Chile, just before, during, and after a great earthquake, which demolished hundreds of buildings, killing and injuring many people. Land heaved, lifted, and cracked, volcanoes awoke, and giant ocean waves attacked the coast. Darwin was the first geologist to observe and describe the effects of a great earthquake during and immediately after the event. These effects have sometimes recurred during severe earthquakes; but great earthquakes, like Chile 1835, and giant earthquakes, like Chile 1960, are rare and remain completely unpredictable. This is one of the few areas of science where experts remain largely in the dark. Darwin suggested that the effects were a result of ‘ …the rending of strata, at a point not very deep below the surface of the earth…' and ‘…when the crust yields to the tension, caused by its gradual elevation, there is a jar at the moment of rupture, and a greater movement...'. Darwin formulated big ideas about the Earth's evolution and dynamics. These ideas set the tone for the plate tectonic theory to come. However, plate tectonics does not completely explain why earthquakes occur within plates. Darwin emphasised that there are different kinds of earthquakes: ‘...I confine the foregoing observations to the earthquakes on the coast of South America, or to similar ones, which seem generally to have been accompanied by elevation of the land. But, as we know that subsidence has gone on in other quarters of the world, fissures must there have been formed, and therefore earthquakes...' (we cite Darwin's sentences following researchspace.auckland.ac.nz/handle/2292/4474). These thoughts agree with the results of recent publications (see Nature 461, 870-872; 636-639 and 462, 42-43; 87-89). About 200 years ago Darwin gave oneself airs by the
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
Detecting aseismic strain transients from seismicity data
Llenos, A.L.; McGuire, J.J.
2011-01-01
Aseismic deformation transients such as fluid flow, magma migration, and slow slip can trigger changes in seismicity rate. We present a method that can detect these seismicity rate variations and utilize the anomalies to constrain the underlying variations in stressing rate. Because ordinary aftershock sequences often obscure changes in the background seismicity caused by aseismic processes, we combine the stochastic Epidemic Type Aftershock Sequence (ETAS) model, which describes aftershock sequences well, with the physically based rate- and state-dependent friction seismicity model into a single seismicity rate model that captures both aftershock activity and changes in background seismicity rate. We implement this model in a data assimilation algorithm that inverts seismicity catalogs to estimate space-time variations in stressing rate. We evaluate the method using a synthetic catalog, and then apply it to a catalog of M ≥ 1.5 events that occurred in the Salton Trough from 1990 to 2009. We validate our stressing rate estimates by comparing them to estimates from a geodetically derived slip model for a large creep event on the Obsidian Buttes fault. The results demonstrate that our approach can identify large aseismic deformation transients in a multidecade-long earthquake catalog and roughly constrain the absolute magnitude of the stressing rate transients. Our method can therefore provide a way to detect aseismic transients in regions where geodetic resolution in space or time is poor. Copyright 2011 by the American Geophysical Union.
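The combined model described above joins an Epidemic Type Aftershock Sequence cascade with a background rate that the data-assimilation step ties to aseismic stressing. A minimal sketch of such a conditional intensity is shown below; the parameter values are illustrative placeholders, not the fitted values from the study.

```python
import math

# Minimal ETAS-style conditional intensity: a background rate mu carries the
# aseismic (stressing-rate) signal, and an Omori-Utsu cascade over past
# events carries the aftershock activity. All parameter values here are
# illustrative assumptions, not fitted to any catalog.
K, alpha, c, p, m_c = 0.05, 1.0, 0.01, 1.2, 1.5

def etas_intensity(t, catalog, mu=0.2):
    """lambda(t) = mu + sum of Omori-Utsu kernels over past events.

    catalog: list of (t_i, M_i) tuples with event times and magnitudes;
    mu: background rate, the quantity an inversion would track in time.
    """
    rate = mu
    for t_i, M_i in catalog:
        if t_i < t:
            rate += K * math.exp(alpha * (M_i - m_c)) * (t - t_i + c) ** (-p)
    return rate
```

With an empty catalog the intensity reduces to the background rate, which is exactly the component the method above attributes to aseismic stressing-rate transients.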
Continuum-regularized quantum gravity
International Nuclear Information System (INIS)
Chan Huesum; Halpern, M.B.
1987-01-01
The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)
Hayes, G. P.; Earle, P. S.; Benz, H.; Wald, D. J.; Yeck, W. L.
2017-12-01
The U.S. Geological Survey's National Earthquake Information Center (NEIC) responds to about 160 magnitude 6.0 and larger earthquakes every year and is regularly inundated with information requests following earthquakes that cause significant impact. These requests often start within minutes after the shaking occurs and come from a wide user base including the general public, media, emergency managers, and government officials. Over the past several years, the NEIC's earthquake response has evolved its communications strategy to meet the changing needs of users and the evolving media landscape. The NEIC produces a cascade of products, starting with basic hypocentral parameters and culminating with estimates of fatalities and economic loss. We speed the delivery of content by prepositioning and automatically generating products such as aftershock plots, regional tectonic summaries, maps of historical seismicity, and event summary posters. Our goal is to have information immediately available so we can quickly address the response needs of a particular event or sequence. This information is distributed to hundreds of thousands of users through social media, email alerts, programmatic data feeds, and webpages. Many of our products are included in event summary posters that can be downloaded and printed for local display. After significant earthquakes, keeping up with direct inquiries and interview requests from TV, radio, and print reporters is always challenging. The NEIC works with the USGS Office of Communications and the USGS Science Information Services to organize and respond to these requests. Written executive summary reports are produced and distributed to USGS personnel and collaborators throughout the country. These reports are updated during the response to keep our message consistent and our information up to date. This presentation will focus on communications during NEIC's rapid earthquake response but will also touch on the broader USGS traditional and
Nowcasting Earthquakes and Tsunamis
Rundle, J. B.; Turcotte, D. L.
2017-12-01
The term "nowcasting" refers to the estimation of the current uncertain state of a dynamical system, whereas "forecasting" is a calculation of probabilities of future state(s). Nowcasting is a term that originated in economics and finance, referring to the process of determining the uncertain state of the economy or market indicators such as GDP at the current time by indirect means. We have applied this idea to seismically active regions, where the goal is to determine the current state of a system of faults, and its current level of progress through the earthquake cycle (http://onlinelibrary.wiley.com/doi/10.1002/2016EA000185/full). Advantages of our nowcasting method over forecasting models include: 1) Nowcasting is simply data analysis and does not involve a model having parameters that must be fit to data; 2) We use only earthquake catalog data which generally has known errors and characteristics; and 3) We use area-based analysis rather than fault-based analysis, meaning that the methods work equally well on land and in subduction zones. To use the nowcast method to estimate how far the fault system has progressed through the "cycle" of large recurring earthquakes, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. We select a "small" region in which the nowcast is to be made, and compute the statistics of a much larger region around the small region. The statistics of the large region are then applied to the small region. For an application, we can define a small region around major global cities, for example a "small" circle of radius 150 km and a depth of 100 km, as well as a "large" earthquake magnitude, for example M6.0. The region of influence of such earthquakes is roughly 150 km radius x 100 km depth, which is the reason these values were selected. We can then compute and rank the seismic risk of the world's major cities in terms of their relative seismic risk
TRANSIENT ELECTRONICS CATEGORIZATION
2017-08-24
AFRL-RY-WP-TR-2017-0169, Transient Electronics Categorization, Dr. Burhan Bayraktaroglu, Devices for Sensing Branch, Aerospace Components & Subsystems. In-house report; cleared for public release, 88ABW-2017-3747, clearance date 31 July 2017. Abstract: Transient electronics is an emerging technology area that lacks proper
International Nuclear Information System (INIS)
Saghatelyan, E.; Petrosyan, L.; Aghbalyan, Yu.; Baburyan, M.; Araratyan, L.
2004-01-01
For the first time, on the basis of the experience of the Spitak earthquake (Armenia, December 1988), it is found that an earthquake causes intensive and prolonged radon splashes which, rapidly dispersing in the open near-surface atmosphere, appear with high contrast in enclosed premises (dwellings, schools, kindergartens) even at considerable distance from the earthquake epicenter, and this multiplies the radiation influence on the population. The interval of splashes spans the period from the first foreshock to the last aftershock, i.e., several months. The area affected by radiation is larger than Armenia's territory. The scale of this impact on the population is 12 times higher than the number of people injured in Spitak, Leninakan and other settlements (toll of injured: 25,000 people; radiation-induced diseases: over 300,000 people). The influence of radiation correlates directly with the earthquake force. This conclusion is underpinned by indoor radon monitoring data for Yerevan (120 km from the epicenter) since 1987, comprising 5450 measurements, and by multivariate analysis identifying cause-and-effect linkages between the geodynamics of indoor radon under stable conditions of the Earth's crust, the behavior of radon in different geological media during earthquakes, indoor radon concentration levels, the effective equivalent dose of radiation, the impact of radiation dose on health, and statistical data on public health provided by the Ministry of Health. The following hitherto unexplained facts can be considered consequences of prolonged radiation influence on the human organism: the long-lasting state of apathy and indifference typical of the population of Armenia for more than a year after the earthquake, the prevalence of malignant cancer forms in disaster zones, dominated by lung cancer, and so on. All urban territories of seismically active regions are exposed to the threat of natural earthquake-provoked radiation influence
New regular black hole solutions
International Nuclear Information System (INIS)
Lemos, Jose P. S.; Zanchin, Vilson T.
2011-01-01
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Regular variation on measure chains
Czech Academy of Sciences Publication Activity Database
Řehák, Pavel; Vitovec, J.
2010-01-01
Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475
Manifold Regularized Correlation Object Tracking
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2017-01-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...
On geodesics in low regularity
Sämann, Clemens; Steinbauer, Roland
2018-02-01
We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
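A crude sketch of the eigenvalue-shrinkage idea behind condition-number regularization follows. Note the truncation level here is fixed heuristically from the largest eigenvalue, whereas the paper chooses it by maximum likelihood, so this is a simplified stand-in rather than the proposed estimator.

```python
import numpy as np

# Simplified condition-number regularization: shrink the sample covariance
# eigenvalues into [u, kappa_max * u] so the estimator's condition number
# cannot exceed kappa_max. The crude choice u = lambda_max / kappa_max is an
# assumption; the paper's estimator selects u by maximum likelihood.
def cond_regularized_cov(S, kappa_max=10.0):
    """S: symmetric sample covariance; returns a well-conditioned estimate."""
    vals, vecs = np.linalg.eigh(S)
    u = vals.max() / kappa_max            # lower truncation level
    clipped = np.clip(vals, u, vals.max())  # raise small eigenvalues to u
    return vecs @ np.diag(clipped) @ vecs.T
```

By construction the ratio of largest to smallest eigenvalue of the output is at most `kappa_max`, so the estimate is invertible and well-conditioned even when the number of samples is smaller than the dimension.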
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
Geodetic Finite-Fault-based Earthquake Early Warning Performance for Great Earthquakes Worldwide
Ruhl, C. J.; Melgar, D.; Grapenthin, R.; Allen, R. M.
2017-12-01
GNSS-based earthquake early warning (EEW) algorithms estimate fault finiteness and unsaturated moment magnitude for the largest, most damaging earthquakes. Because large events are infrequent, these algorithms are rarely exercised and are insufficiently tested on the few available datasets. The Geodetic Alarm System (G-larmS) is a GNSS-based finite-fault algorithm developed as part of the ShakeAlert EEW system in the western US. Performance evaluations using synthetic earthquakes offshore Cascadia showed that G-larmS satisfactorily recovers magnitude and fault length, providing useful alerts 30-40 s after origin time and timely warnings of ground motion for onshore urban areas. An end-to-end test of the ShakeAlert system demonstrated the need for GNSS data to accurately estimate ground motions in real time. We replay real data from several subduction-zone earthquakes worldwide to demonstrate the value of GNSS-based EEW for the largest, most damaging events. We compare peak ground acceleration (PGA) predicted from first-alert solutions with that recorded in major urban areas. In addition, where applicable, we compare observed tsunami heights to those predicted from the G-larmS solutions. We show that finite-fault inversion based on GNSS data is essential to achieving the goals of EEW.
Earthquake number forecasts testing
Kagan, Yan Y.
2017-10-01
We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
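The Poisson-versus-NBD question above can be illustrated with a method-of-moments check: for a Poisson process the dispersion index (variance over mean) is near 1, while clustered seismicity is overdispersed and matches an NBD with variance m + m^2/r. The interval counts below are invented toy numbers, not values from the catalogues studied.

```python
from statistics import mean, pvariance

# Method-of-moments comparison of Poisson vs negative-binomial (NBD) for
# per-interval earthquake counts. For Poisson, var/mean ~ 1; overdispersed
# (clustered) counts fit an NBD with var = m + m^2 / r.
def nbd_moment_fit(counts):
    """Return (mean, variance, dispersion index, NBD shape parameter r)."""
    m, v = mean(counts), pvariance(counts)
    dispersion = v / m
    # Solve var = m + m^2 / r for r; only meaningful when var > mean.
    r = m * m / (v - m) if v > m else float("inf")
    return m, v, dispersion, r

toy_counts = [3, 0, 7, 1, 12, 2, 0, 9, 4, 2]  # invented interval counts
m, v, d, r = nbd_moment_fit(toy_counts)
print(d > 1.0)  # prints: True (overdispersed, so Poisson is a poor fit)
```

Here the dispersion index is 3.7, far above the Poisson value of 1, which is the same qualitative signature the abstract reports for real catalogues with low magnitude thresholds.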
OGLE-IV Transient Search summary of season 2015b
Wyrzykowski, L.; Kostrzewa-Rutkowska, Z.; Klencki, J.; Sitek, M.; Mroz, P.; Udalski, A.; Kozlowski, S.; Skowron, J.; Poleski, R.; Szymanski, M. K.; Pietrzynski, G.; Soszynski, I.; Ulaczyk, K.; Pietrukowicz, P.
2015-12-01
The OGLE-IV Transient Detection System (Wyrzykowski et al. 2014, AcA, 64, 197; Kozlowski et al. 2013) has been operating in dual mode during the 2015b transient observing season (from August): a regular mode, as in previous years (detections every couple of days, based on at least two positive detections), and a rapid mode (automated detections within 15 min of a single frame being taken; details in Klencki et al., in prep.).
Rupture, waves and earthquakes.
Uenishi, Koji
2017-01-01
Normally, an earthquake is considered a phenomenon of wave energy radiation caused by rupture (fracture) of the solid Earth. However, the physics of the dynamic processes around seismic sources, which may play a crucial role in the occurrence of earthquakes and the generation of strong waves, is not yet fully understood. Instead, much previous investigation in seismology has evaluated earthquake characteristics in terms of kinematics, which does not directly treat such dynamic aspects and usually excludes the influence of high-frequency wave components above 1 Hz. Countless valuable research outcomes have been obtained through this kinematics-based approach, but "extraordinary" phenomena that are difficult to explain with this conventional description have been found, for instance on the occasion of the 1995 Hyogo-ken Nanbu, Japan, earthquake. More detailed study of rupture and wave dynamics, namely of the possible mechanical characteristics of (1) rupture development around seismic sources, (2) earthquake-induced structural failures, and (3) the wave interaction that connects rupture (1) and failures (2), would therefore be indispensable.
Cascading elastic perturbation in Japan due to the 2012 Mw 8.6 Indian Ocean earthquake
Delorey, Andrew A.; Chao, Kevin; Obara, Kazushige; Johnson, Paul A.
2015-01-01
Since the discovery of extensive earthquake triggering occurring in response to the 1992 Mw (moment magnitude) 7.3 Landers earthquake, it is now well established that seismic waves from earthquakes can trigger other earthquakes, tremor, slow slip, and pore pressure changes. Our contention is that earthquake triggering is one manifestation of a more widespread elastic disturbance that reveals information about Earth’s stress state. Earth’s stress state is central to our understanding of both natural and anthropogenic-induced crustal processes. We show that seismic waves from distant earthquakes may perturb stresses and frictional properties on faults and elastic moduli of the crust in cascading fashion. Transient dynamic stresses place crustal material into a metastable state during which the material recovers through a process termed slow dynamics. This observation of widespread, dynamically induced elastic perturbation, including systematic migration of offshore seismicity, strain transients, and velocity transients, presents a new characterization of Earth’s elastic system that will advance our understanding of plate tectonics, seismicity, and seismic hazards. PMID:26601289
Earthquakes and Earthquake Engineering. LC Science Tracer Bullet.
Buydos, John F., Comp.
An earthquake is a shaking of the ground resulting from a disturbance in the earth's interior. Seismology is the (1) study of earthquakes; (2) origin, propagation, and energy of seismic phenomena; (3) prediction of these phenomena; and (4) investigation of the structure of the earth. Earthquake engineering or engineering seismology includes the…
Testing earthquake source inversion methodologies
Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data
Person, W.J.
1982-01-01
There were four major earthquakes (7.0-7.9) during this reporting period: two struck in Mexico, one in El Salvador, and one in the Kuril Islands. Mexico, El Salvador, and China experienced fatalities from earthquakes.
Geometric continuum regularization of quantum field theory
International Nuclear Information System (INIS)
Halpern, M.B.
1989-01-01
An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs
Permeability, storage and hydraulic diffusivity controlled by earthquakes
Brodsky, E. E.; Fulton, P. M.; Xue, L.
2016-12-01
Earthquakes can increase permeability in fractured rocks. In the farfield, such permeability increases are attributed to seismic waves and can last for months after the initial earthquake. Laboratory studies suggest that unclogging of fractures by the transient flow driven by seismic waves is a viable mechanism. These dynamic permeability increases may contribute to permeability enhancement in the seismic clouds accompanying hydraulic fracking. Permeability enhancement by seismic waves could potentially be engineered and the experiments suggest the process will be most effective at a preferred frequency. We have recently observed similar processes inside active fault zones after major earthquakes. A borehole observatory in the fault that generated the M9.0 2011 Tohoku earthquake reveals a sequence of temperature pulses during the secondary aftershock sequence of an M7.3 aftershock. The pulses are attributed to fluid advection by a flow through a zone of transiently increased permeability. Directly after the M7.3 earthquake, the newly damaged fault zone is highly susceptible to further permeability enhancement, but ultimately heals within a month and becomes no longer as sensitive. The observation suggests that the newly damaged fault zone is more prone to fluid pulsing than would be expected based on the long-term permeability structure. Even longer term healing is seen inside the fault zone of the 2008 M7.9 Wenchuan earthquake. The competition between damage and healing (or clogging and unclogging) results in dynamically controlled permeability, storage and hydraulic diffusivity. Recent measurements of in situ fault zone architecture at the 1-10 meter scale suggest that active fault zones often have hydraulic diffusivities near 10-2 m2/s. This uniformity is true even within the damage zone of the San Andreas fault where permeability and storage increases balance each other to achieve this value of diffusivity over a 400 m wide region. We speculate that fault zones
Areas prone to slow slip events impede earthquake rupture propagation and promote afterslip
Rolandone, Frederique; Nocquet, Jean-Mathieu; Mothes, Patricia A.; Jarrin, Paul; Vallée, Martin; Cubas, Nadaya; Hernandez, Stephen; Plain, Morgan; Vaca, Sandro; Font, Yvonne
2018-01-01
At subduction zones, transient aseismic slip occurs either as afterslip following a large earthquake or as episodic slow slip events during the interseismic period. Afterslip and slow slip events are usually considered as distinct processes occurring on separate fault areas governed by different frictional properties. Continuous GPS (Global Positioning System) measurements following the 2016 Mw (moment magnitude) 7.8 Ecuador earthquake reveal that large and rapid afterslip developed at discrete areas of the megathrust that had previously hosted slow slip events. Regardless of whether they were locked or not before the earthquake, these areas appear to persistently release stress by aseismic slip throughout the earthquake cycle and outline the seismic rupture, an observation potentially leading to a better anticipation of future large earthquakes. PMID:29404404
Spectroscopic classification of transients
DEFF Research Database (Denmark)
Stritzinger, M. D.; Fraser, M.; Hummelmose, N. N.
2017-01-01
We report the spectroscopic classification of several transients based on observations taken with the Nordic Optical Telescope (NOT) equipped with ALFOSC, over the nights 23-25 August 2017.
Bichisao, Marta; Stallone, Angela
2017-04-01
Making science visual plays a crucial role in the process of building knowledge. In this view, art can considerably facilitate the representation of scientific content by offering a different perspective on how a specific problem could be approached. Here we explore the possibility of presenting the earthquake process through visual dance. From a choreographer's point of view, the focus is always on the dynamic relationships between moving objects. The observed spatial patterns (coincidences, repetitions, double and rhythmic configurations) suggest how objects organize themselves in the environment and what principles underlie that organization. The identified set of rules is then implemented as a basis for the creation of a complex rhythmic and visual dance system. Recently, scientists have turned seismic waves into sound and animations, introducing the possibility of "feeling" earthquakes. We try to implement these results in a choreographic model with the aim of converting earthquake sound to a visual dance system, which could return a transmedia representation of the earthquake process. In particular, we focus on a possible method to translate and transfer the metric language of seismic sound and animations into body language. The objective is to involve the audience in a multisensory exploration of the earthquake phenomenon, through the stimulation of hearing, eyesight and the perception of movement (the neuromotor system). In essence, the main goal of this work is to develop a method for the simultaneous visual and auditory representation of a seismic event by means of a structured choreographic model. This artistic representation could provide an original entryway into the physics of earthquakes.
Turkish Children's Ideas about Earthquakes
Simsek, Canan Lacin
2007-01-01
Earthquakes, a natural disaster, are among the fundamental problems of many countries. If people know how to protect themselves from earthquakes and arrange their lifestyles accordingly, the damage they suffer will be reduced to that extent. In particular, good training regarding earthquakes, received in primary schools, is considered…
Person, W.J.
1992-01-01
One major earthquake occurred during this reporting period. This was a magnitude 7.1 earthquake in Indonesia (Minahassa Peninsula) on June 20. Earthquake-related deaths were reported in the Western Caucasus (Georgia, USSR) on May 3 and June 15. One earthquake-related death was also reported in El Salvador on June 21.
Organizational changes at Earthquakes & Volcanoes
Gordon, David W.
1992-01-01
Primary responsibility for the preparation of Earthquakes & Volcanoes within the Geological Survey has shifted from the Office of Scientific Publications to the Office of Earthquakes, Volcanoes, and Engineering (OEVE). As a consequence of this reorganization, Henry Spall has stepped down as Science Editor for Earthquakes & Volcanoes (E&V).
Fang, Wang
1979-01-01
The Tangshan earthquake of 1976 was one of the largest earthquakes in recent years. It occurred on July 28 at 3:42 a.m. Beijing (Peking) local time, and had magnitude 7.8, a focal depth of 15 kilometers, and an epicentral intensity of XI on the New Chinese Seismic Intensity Scale; it caused serious damage and loss of life in this densely populated industrial city. Now, with the help of people from all over China, the city of Tangshan is being rebuilt.
de Ville de Goyet, C
2001-02-01
The Pan American Health Organization (PAHO) has 25 years of experience dealing with major natural disasters. This piece provides a preliminary review of the events taking place in the weeks following the major earthquakes in El Salvador on 13 January and 13 February 2001. It also describes the lessons that have been learned over the last 25 years and the impact that the El Salvador earthquakes and other disasters have had on the health of the affected populations. Topics covered include mass-casualties management, communicable diseases, water supply, managing donations and international assistance, damages to the health-facilities infrastructure, mental health, and PAHO's role in disasters.
Metric regularity and subdifferential calculus
International Nuclear Information System (INIS)
Ioffe, A D
2000-01-01
The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces
Manifold Regularized Correlation Object Tracking.
Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling
2018-05-01
In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
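The block-circulant structure mentioned in the abstract is what allows correlation filters to be trained in closed form in the Fourier domain. The following minimal sketch shows only that basic building block, a plain ridge-regression correlation filter over all circular shifts of one 1-D sample; it omits the paper's manifold regularization and semisupervised extensions, and all names and parameter values are our illustrative assumptions.

```python
import numpy as np

def train_correlation_filter(x, y, lam=1e-2):
    """Closed-form ridge regression over all circular shifts of x
    (the block-circulant structure), solved element-wise in the
    Fourier domain: w_hat = conj(X) * Y / (|X|^2 + lam)."""
    X = np.fft.fft(x)
    Y = np.fft.fft(y)
    return np.conj(X) * Y / (np.abs(X) ** 2 + lam)

def respond(w_hat, z):
    """Correlation response of the trained filter on a new sample z."""
    return np.real(np.fft.ifft(w_hat * np.fft.fft(z)))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = np.zeros(64)
y[0] = 1.0                      # peaked regression label at shift 0
w_hat = train_correlation_filter(x, y)
r = respond(w_hat, x)
print(int(np.argmax(r)))        # response peaks at the training shift
```

In a tracker, the argmax of the response on the next frame's search window gives the estimated target translation; the 2-D case replaces `fft`/`ifft` with their 2-D counterparts.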
Ibrion, Mihaela
2018-01-01
This book chapter brings to attention the dramatic impact of large earthquake disasters on local communities and society and highlights the necessity of building and enhancing an earthquake culture. Iran was considered as a research case study, and fifteen large earthquake disasters in Iran were investigated and analyzed over a period of more than a century. It was found that the earthquake culture in Iran was and is still conditioned by many factors or parameters which are not integrated and...
Lu, Kunquan; Cao, Zexian; Hou, Meiying; Jiang, Zehui; Shen, Rong; Wang, Qiang; Sun, Gang; Liu, Jixing
2018-03-01
The physical mechanism of earthquakes remains a challenging issue to be clarified. Seismologists used to attribute shallow earthquakes to the elastic rebound of crustal rocks. The seismic energy calculated following the elastic rebound theory and with the data of experimental results on rocks, however, shows a large discrepancy with measurement — a fact that has been dubbed “the heat flow paradox”. For intermediate-focus and deep-focus earthquakes, both occurring in the region of the mantle, there is no reasonable explanation either. This paper discusses the physical mechanism of earthquakes from a new perspective, starting from the fact that both the crust and the mantle are discrete collective systems of matter with slow dynamics, as well as from the basic principles of physics, especially some new concepts of condensed matter physics that have emerged in recent years. (1) Stress distribution in the Earth's crust: Without taking the tectonic force into account, according to the rheological principle of “everything flows”, the normal stress and transverse stress must be balanced due to the effect of gravitational pressure over a long period of time; thus no differential stress in the original crustal rocks is to be expected. The tectonic force is successively transferred and accumulated via stick-slip motions of rock blocks to squeeze the fault gouge and is then exerted upon other rock blocks. The superposition of such additional lateral tectonic force and the original stress gives rise to the real-time stress in crustal rocks. The mechanical characteristics of fault gouge differ from those of rocks, as it consists of granular matter. The elastic moduli of the fault gouge are much smaller than those of rocks, and they become larger with increasing pressure. This peculiarity of the fault gouge leads to a tectonic force increasing with depth in a nonlinear fashion. The distribution and variation of the tectonic stress in the crust are specified. (2) The
Dimensional regularization in configuration space
International Nuclear Information System (INIS)
Bollini, C.G.; Giambiagi, J.J.
1995-09-01
Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
Quantitative Earthquake Prediction on Global and Regional Scales
International Nuclear Information System (INIS)
Kossobokov, Vladimir G.
2006-01-01
The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and
Quantitative Earthquake Prediction on Global and Regional Scales
Kossobokov, Vladimir G.
2006-03-01
The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and
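To make the quoted spatial accuracy ("5-10 times the anticipated source dimension") concrete, one needs a source dimension for a given magnitude. The sketch below uses the published Wells and Coppersmith (1994) all-slip-type regression for subsurface rupture length, log10(L) = -2.44 + 0.59 M; this is our illustrative choice of scaling relation, not necessarily the one used by the author.

```python
def rupture_length_km(mag):
    """Subsurface rupture length vs. moment magnitude, from the
    Wells & Coppersmith (1994) all-slip-type regression:
    log10(L) = -2.44 + 0.59 * M (illustrative choice)."""
    return 10 ** (-2.44 + 0.59 * mag)

L = rupture_length_km(8.0)
print(f"M8.0 source dimension ~{L:.0f} km; "
      f"5-10x alarm range ~{5 * L:.0f}-{10 * L:.0f} km")
```

On this scaling, an M8.0 alarm zone of 5-10 source dimensions spans roughly a thousand to two thousand kilometers, which conveys why reducing the uncertainty to 1-3 source dimensions matters for practical use.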
Nucleation speed limit on remote fluid induced earthquakes
Parsons, Thomas E.; Akinci, Aybige; Malagnini, Luca
2017-01-01
Earthquakes triggered by other remote seismic events are explained as a response to long-traveling seismic waves that temporarily stress the crust. However, delays of hours or days after seismic waves pass through are reported by several studies, which are difficult to reconcile with the transient stresses imparted by seismic waves. We show that these delays are proportional to magnitude and that nucleation times are best fit to a fluid diffusion process if the governing rupture process involves unlocking a magnitude-dependent critical nucleation zone. It is well established that distant earthquakes can strongly affect the pressure and distribution of crustal pore fluids. Earth’s crust contains hydraulically isolated, pressurized compartments in which fluids are contained within low-permeability walls. We know that strong shaking induced by seismic waves from large earthquakes can change the permeability of rocks. Thus, the boundary of a pressurized compartment may see its permeability rise. Previously confined, overpressurized pore fluids may then diffuse away, infiltrate faults, decrease their strength, and induce earthquakes. Magnitude-dependent delays and critical nucleation zone conclusions can also be applied to human-induced earthquakes.
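The magnitude-dependent delay can be illustrated with a toy diffusion estimate: if triggering requires pore fluid to diffuse across a magnitude-dependent critical nucleation zone of radius r, the delay scales as t ~ r^2/(4D). Both the radius scaling and the diffusivity below are made-up round numbers chosen only to show the qualitative trend; they are not the paper's fitted values.

```python
def nucleation_delay_hours(mag, diffusivity=1.0):
    """Toy triggering-delay estimate: time for pore fluid to diffuse
    across a magnitude-dependent nucleation zone.  The radius scaling
    r = 10**(0.5*mag - 2.0) km and D = 1 m^2/s are illustrative
    assumptions, not values from the paper."""
    r_m = 1000.0 * 10 ** (0.5 * mag - 2.0)   # nucleation radius, metres
    t_s = r_m ** 2 / (4.0 * diffusivity)     # diffusion time t ~ r^2 / 4D
    return t_s / 3600.0

for m in (3.0, 5.0):
    print(f"M{m}: ~{nucleation_delay_hours(m):.1f} h")
```

Under these assumptions the delay grows by two orders of magnitude between M3 and M5, reproducing qualitatively the reported magnitude-proportional delays of hours to days.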
Nucleation speed limit on remote fluid-induced earthquakes
Parsons, Tom; Malagnini, Luca; Akinci, Aybige
2017-01-01
Earthquakes triggered by other remote seismic events are explained as a response to long-traveling seismic waves that temporarily stress the crust. However, delays of hours or days after seismic waves pass through are reported by several studies, which are difficult to reconcile with the transient stresses imparted by seismic waves. We show that these delays are proportional to magnitude and that nucleation times are best fit to a fluid diffusion process if the governing rupture process involves unlocking a magnitude-dependent critical nucleation zone. It is well established that distant earthquakes can strongly affect the pressure and distribution of crustal pore fluids. Earth’s crust contains hydraulically isolated, pressurized compartments in which fluids are contained within low-permeability walls. We know that strong shaking induced by seismic waves from large earthquakes can change the permeability of rocks. Thus, the boundary of a pressurized compartment may see its permeability rise. Previously confined, overpressurized pore fluids may then diffuse away, infiltrate faults, decrease their strength, and induce earthquakes. Magnitude-dependent delays and critical nucleation zone conclusions can also be applied to human-induced earthquakes. PMID:28845448
Electrical streaming potential precursors to catastrophic earthquakes in China
Directory of Open Access Journals (Sweden)
F. Qian
1997-06-01
The majority of anomalies in self-potential at 7 stations within 160 km of the epicentre showed a similar pattern of rapid onset and slow decay during and before the M 7.8 Tangshan earthquake of 1976. Considering that some of these anomalies were associated with episodic spouting from boreholes or with increases in pore pressure in wells, the observed anomalies are streaming potentials generated by local events of sudden movement and the diffusion of high-pressure fluid in parallel faults. These transient events, triggered by tidal forces, exhibited a periodic nature and a statistical tendency to migrate towards the epicentre about one month before the earthquake. As a result of these events, the pore pressure reached a final equilibrium state higher than the initial state in a large enough section of the fault region. Consequently, the local effective shear strength of the material in the fault zone decreased, and finally the catastrophic earthquake was induced. Similar phenomena also occurred one month before the M 7.3 Haicheng earthquake of 1975. Therefore, short-term earthquake prediction can be made by electrical measurements, which are the kind of geophysical measurements most closely related to the pore fluid behavior of the deep crust.
Transient performance of S-prism
International Nuclear Information System (INIS)
Dubberley, A.E.; Boardman, C.E.; Gamble, R.E.; Hiu, M.M.; Lipps, A.J.; Wu, T.
2001-01-01
S-PRISM is an advanced fast reactor plant design that utilizes compact, modular, pool-type reactors sized to enable factory fabrication and an affordable prototype test of a single Nuclear Steam Supply System (NSSS) for design certification at minimum cost and risk. Based on the success of the previous DOE-sponsored Advanced Liquid Metal Reactor (ALMR) program, GE has continued to develop and assess the technical viability and economic potential of an up-rated plant called SuperPRISM (S-PRISM). This paper presents the results of transient analyses performed to assess the ability of S-PRISM to accommodate severe accident initiator events. A unique safety capability of S-PRISM is accommodation of the ''higher probability'' accident initiators that led to core melt accidents in prior large LMRs. These events, the Anticipated Transients Without Scram (ATWS) events, are thus the focus of the passive safety confirmation analyses. The events included in this assessment are: Unprotected Loss of Flow, Unprotected Loss of Heat Sink, Unprotected Loss of Flow and Heat Sink, Unprotected Transient Overpower, and Unprotected Safe Shutdown Earthquake. (author)
Leveraging geodetic data to reduce losses from earthquakes
Murray, Jessica R.; Roeloffs, Evelyn A.; Brooks, Benjamin A.; Langbein, John O.; Leith, William S.; Minson, Sarah E.; Svarc, Jerry L.; Thatcher, Wayne R.
2018-04-23
event response products and by expanded use of geodetic imaging data to assess fault rupture and source parameters. Uncertainties in the NSHM, and in regional earthquake models, are reduced by fully incorporating geodetic data into earthquake probability calculations. Geodetic networks and data are integrated into the operations and earthquake information products of the Advanced National Seismic System (ANSS). Earthquake early warnings are improved by more rapidly assessing ground displacement and the dynamic faulting process for the largest earthquakes using real-time geodetic data. Methodology for probabilistic earthquake forecasting is refined by including geodetic data when calculating evolving moment release during aftershock sequences and by better understanding the implications of transient deformation for earthquake likelihood. A geodesy program that encompasses a balanced mix of activities to sustain mission-critical capabilities, grows new competencies through the continuum of fundamental to applied research, and ensures sufficient resources for these endeavors provides a foundation by which the EHP can be a leader in the application of geodesy to earthquake science.
With this in mind, the following objectives provide a framework to guide EHP efforts: Fully utilize geodetic information to improve key products, such as the NSHM and EEW, and to address new ventures like the USGS Subduction Zone Science Plan. Expand the variety, accuracy, and timeliness of post-earthquake information products, such as PAGER (Prompt Assessment of Global Earthquakes for Response), through incorporation of geodetic observations. Determine if geodetic measurements of transient deformation can significantly improve estimates of earthquake probability. Maintain an observational strategy aligned with the target outcomes of this document that includes continuous monitoring, recording of ephemeral observations, focused data collection for use in research, and application-driven data processing and
International Nuclear Information System (INIS)
Saha, P.
1984-01-01
This chapter reviews the papers on the pressurized water reactor (PWR) and boiling water reactor (BWR) transient analyses given at the American Nuclear Society Topical Meeting on Anticipated and Abnormal Plant Transients in Light Water Reactors. Most of the papers were based on the systems calculations performed using the TRAC-PWR, RELAP5 and RETRAN codes. The status of the nuclear industry in the code applications area is discussed. It is concluded that even though comprehensive computer codes are available for plant transient analysis, there is still a need to exercise engineering judgment, simpler tools and even hand calculations to supplement these codes
Jones, K. B., II; Saxton, P. T.
2013-12-01
Many attempts have been made to determine a sound forecasting method for earthquakes and, in turn, to warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the direction of the EM investigation. After the 1989 Loma Prieta earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate with an identifiable precursor. Secondly, there is neither a networked array for finding any epicentral locations, nor have there been any attempts to find even one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances have achieved. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and the laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two
Regularization of Nonmonotone Variational Inequalities
International Nuclear Information System (INIS)
Konnov, Igor V.; Ali, M.S.S.; Mazurkevich, E.O.
2006-01-01
In this paper we extend the Tikhonov-Browder regularization scheme from monotone to a rather general class of nonmonotone multivalued variational inequalities. We show that the convergence conditions hold for some classes of perfectly and nonperfectly competitive economic equilibrium problems.
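The Tikhonov-Browder idea can be sketched concretely: solve a sequence of regularized problems VI(F + εI, C) with ε decreasing to zero, warm-starting each from the previous solution. The sketch below uses a simple projected-gradient inner solver on a toy one-dimensional problem; the function names, step size, and ε schedule are our illustrative assumptions, not the paper's scheme for nonmonotone multivalued maps.

```python
import numpy as np

def solve_vi_tikhonov(F, proj, x0, eps_seq=(1.0, 0.1, 0.01, 0.001),
                      step=0.1, iters=2000):
    """Tikhonov regularization sketch: for each eps, approximately solve
    VI(F + eps*I, C) by projected-gradient iterations, passing the result
    on as the warm start for the next, smaller eps."""
    x = np.asarray(x0, dtype=float)
    for eps in eps_seq:
        for _ in range(iters):
            x = proj(x - step * (F(x) + eps * x))
    return x

# Toy problem: F(x) = x - 2 on C = [0, 1]; the VI solution is x = 1.
F = lambda x: x - 2.0
proj = lambda x: np.clip(x, 0.0, 1.0)
print(solve_vi_tikhonov(F, proj, np.array([0.0])))  # converges to x = 1
```

Each regularized map F + εI is strongly monotone even when F is not, which is what makes the inner problems well-posed; the substance of the paper is identifying nonmonotone classes for which the ε → 0 limit still recovers a solution.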
Lattice regularized chiral perturbation theory
International Nuclear Information System (INIS)
Borasoy, Bugra; Lewis, Randy; Ouimet, Pierre-Philippe A.
2004-01-01
Chiral perturbation theory can be defined and regularized on a spacetime lattice. A few motivations are discussed here, and an explicit lattice Lagrangian is reviewed. A particular aspect of the connection between lattice chiral perturbation theory and lattice QCD is explored through a study of the Wess-Zumino-Witten term
2011-01-20
... Meeting SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held at the offices of the Farm... meeting of the Board will be open to the
Forcing absoluteness and regularity properties
Ikegami, D.
2010-01-01
For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.
Globals of Completely Regular Monoids
Institute of Scientific and Technical Information of China (English)
Wu Qian-qian; Gan Ai-ping; Du Xian-kun
2015-01-01
An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.
Fluid queues and regular variation
Boxma, O.J.
1996-01-01
This paper considers a fluid queueing system, fed by N independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index ζ. We show that its fat tail gives rise to an even
Fluid queues and regular variation
O.J. Boxma (Onno)
1996-01-01
This paper considers a fluid queueing system, fed by $N$ independent sources that alternate between silence and activity periods. We assume that the distribution of the activity periods of one or more sources is a regularly varying function of index $\zeta$. We show that its fat tail
Empirical laws, regularity and necessity
Koningsveld, H.
1973-01-01
In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.
I am referring especially to two well-known views, viz. the regularity and
Interval matrices: Regularity generates singularity
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Shary, S.P.
2018-01-01
Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords: interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016
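For tiny interval matrices, regularity (every matrix in the interval being nonsingular) can be checked by brute force: the determinant is multilinear in the entries, so over the box of matrices it attains its extremes at vertex matrices, and the interval matrix is regular exactly when all vertex determinants share one nonzero sign. The sketch below (function name ours) is exponential in n^2 and intended only as an illustration, not as the efficient criteria studied in the regularity literature.

```python
import itertools
import numpy as np

def interval_matrix_regular(A_c, Delta):
    """Brute-force regularity test for the interval matrix
    [A_c - Delta, A_c + Delta]: regular iff the determinants of all
    2**(n*n) vertex matrices are nonzero and share the same sign.
    Exponential cost, so for illustration on tiny n only."""
    n = A_c.shape[0]
    signs = set()
    for s in itertools.product((-1.0, 1.0), repeat=n * n):
        vertex = A_c + Delta * np.array(s).reshape(n, n)
        d = np.linalg.det(vertex)
        if d == 0.0:
            return False
        signs.add(d > 0.0)
    return len(signs) == 1

A_c = np.array([[2.0, 0.0], [0.0, 2.0]])
print(interval_matrix_regular(A_c, np.full((2, 2), 0.5)))   # True
print(interval_matrix_regular(A_c, np.full((2, 2), 2.0)))   # False: contains the zero matrix
```

The second call is singular because widening the radii to 2 puts the zero matrix inside the interval, and the vertex determinants indeed take both signs.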
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Simulated earthquake ground motions
International Nuclear Information System (INIS)
Vanmarcke, E.H.; Gasparini, D.A.
1977-01-01
The paper reviews current methods for generating synthetic earthquake ground motions. Emphasis is on the special requirements demanded of procedures to generate motions for use in nuclear power plant seismic response analysis. Specifically, very close agreement is usually sought between the response spectra of the simulated motions and prescribed, smooth design response spectra. The features and capabilities of the computer program SIMQKE, which has been widely used in power plant seismic work are described. Problems and pitfalls associated with the use of synthetic ground motions in seismic safety assessment are also pointed out. The limitations and paucity of recorded accelerograms together with the widespread use of time-history dynamic analysis for obtaining structural and secondary systems' response have motivated the development of earthquake simulation capabilities. A common model for synthesizing earthquakes is that of superposing sinusoidal components with random phase angles. The input parameters for such a model are, then, the amplitudes and phase angles of the contributing sinusoids as well as the characteristics of the variation of motion intensity with time, especially the duration of the motion. The amplitudes are determined from estimates of the Fourier spectrum or the spectral density function of the ground motion. These amplitudes may be assumed to be varying in time or constant for the duration of the earthquake. In the nuclear industry, the common procedure is to specify a set of smooth response spectra for use in aseismic design. This development and the need for time histories have generated much practical interest in synthesizing earthquakes whose response spectra 'match', or are compatible with a set of specified smooth response spectra
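The superposition model described above can be sketched in a few lines: sinusoids with random phase angles, scaled by given amplitudes and shaped by an intensity envelope. This is only a schematic of the SIMQKE idea; matching a prescribed smooth response spectrum would require the iterative amplitude adjustment the program actually performs, which is omitted here, and the trapezoidal envelope parameters are our illustrative assumptions.

```python
import numpy as np

def synth_motion(amps, freqs, duration=20.0, dt=0.01, seed=0):
    """SIMQKE-style synthesis sketch: superpose sinusoids with random
    phase angles, then shape the sum with a trapezoidal intensity
    envelope.  Amplitudes are taken as given rather than iterated to
    match a target response spectrum."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, dt)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    motion = sum(a * np.sin(2.0 * np.pi * f * t + p)
                 for a, f, p in zip(amps, freqs, phases))
    # Trapezoidal envelope: 2 s ramp-up, flat middle, 5 s ramp-down.
    env = np.minimum(1.0, np.minimum(t / 2.0, (duration - t) / 5.0))
    return t, env * motion

t, a = synth_motion(amps=[1.0, 0.5, 0.25], freqs=[1.0, 3.0, 7.0])
print(len(t), float(np.max(np.abs(a))))
```

In the full procedure the amplitudes would be derived from an estimate of the target spectral density and then iteratively corrected until the response spectrum of the synthetic motion matches the prescribed design spectrum within tolerance.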
Current status of JRR-3. After the 3.11 earthquake
International Nuclear Information System (INIS)
Arai, Masaji; Murayama, Yoji; Wada, Shigeru
2012-01-01
JRR-3 at the Tokai site of JAEA was in its regular maintenance period when the Great East Japan Earthquake occurred on 11 March 2011. The reactor building, with its solid foundations, and the equipment important to safety survived the earthquake without serious damage, and no radioactive leakage occurred. Recovery work is planned to be completed by the end of this March. At the same time, checks and tests of the integrity of all components, as well as a seismic assessment to demonstrate resistance to the 3.11 earthquake, have been carried out. JRR-3 will restart operation after completing the above-mentioned procedures. (author)
The HayWired Earthquake Scenario—Earthquake Hazards
Detweiler, Shane T.; Wein, Anne M.
2017-04-24
The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth's surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of …
Renata Mota Mamede de Carvallo; Carla Gentile Matas; Isabela de Souza Jardim
2008-01-01
Objective: The aim of the present investigation was to evaluate Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response tests applied together in regular nurseries and Newborn Intensive Care Units (NICU), as well as to describe and compare the results obtained in both groups. Methods: We tested 150 newborns from regular nurseries and 70 from NICU. Results: The newborn hearing screening results using Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem …
Regular and conformal regular cores for static and rotating solutions
Energy Technology Data Exchange (ETDEWEB)
Azreg-Aïnou, Mustapha
2014-03-07
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
Regular and conformal regular cores for static and rotating solutions
International Nuclear Information System (INIS)
Azreg-Aïnou, Mustapha
2014-01-01
Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.
PWR systems transient analysis
International Nuclear Information System (INIS)
Kennedy, M.F.; Peeler, G.B.; Abramson, P.B.
1985-01-01
Analysis of transients in pressurized water reactor (PWR) systems involves the assessment of the response of the total plant, including primary and secondary coolant systems, steam piping and turbine (possibly including the complete feedwater train), and various control and safety systems. Transient analysis is performed as part of the plant safety analysis to insure the adequacy of the reactor design and operating procedures and to verify the applicable plant emergency guidelines. Event sequences which must be examined are developed by considering possible failures or maloperations of plant components. These vary in severity (and calculational difficulty) from a series of normal operational transients, such as minor load changes, reactor trips, valve and pump malfunctions, up to the double-ended guillotine rupture of a primary reactor coolant system pipe known as a Large Break Loss of Coolant Accident (LBLOCA). The focus of this paper is the analysis of all those transients and accidents except loss of coolant accidents
Transients: The regulator's view
International Nuclear Information System (INIS)
Sheron, B.W.; Speis, T.P.
1984-01-01
This chapter attempts to clarify the basis for the regulator's concerns for transient events. Transients are defined as both anticipated operational occurrences and postulated accidents. Recent operational experience, supplemented by improved probabilistic risk analysis methods, has demonstrated that non-LOCA transient events can be significant contributors to overall risk. Topics considered include lessons learned from events and issues, the regulations governing plant transients, multiple failures, different failure frequencies, operator errors, and public pressure. It is concluded that the formation of Owners Groups and Regulatory Response Groups within the owners groups are positive signs of the industry's concern for safety and responsible dealing with the issues affecting both the US NRC and the industry
Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?
Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.
2018-02-01
The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_λ ≥ 7.0 and M_λ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_σ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best-fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_λ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_λ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_λ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_λ ≥ 7.0 in each catalog and …
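The Weibull shape exponent β mentioned above can be estimated by maximum likelihood from a sample of positive interevent counts. The sketch below is a generic Weibull shape-parameter MLE (Newton iteration on the profile likelihood), not the authors' specific fitting code; zero counts are assumed to have been excluded.

```python
import numpy as np

def weibull_shape_mle(x, tol=1e-9, max_iter=200):
    """Maximum-likelihood estimate of the Weibull shape parameter β
    for positive samples x; β = 1 recovers the exponential case."""
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    beta = 1.0                                  # exponential starting guess
    for _ in range(max_iter):
        xb = x ** beta
        # profile-likelihood equation: g(β) = 0 at the MLE
        g = (xb * lx).sum() / xb.sum() - 1.0 / beta - lx.mean()
        # derivative of g for the Newton step (always positive)
        dg = ((xb * lx * lx).sum() * xb.sum() - (xb * lx).sum() ** 2) \
             / xb.sum() ** 2 + 1.0 / beta ** 2
        new_beta = beta - g / dg
        if abs(new_beta - beta) < tol:
            return new_beta
        beta = new_beta
    return beta
```

A β estimate below 1, as the paper reports for the real catalog, indicates temporal clustering relative to a purely random (Poisson) sequence.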
Transient multivariable sensor evaluation
Energy Technology Data Exchange (ETDEWEB)
Vilim, Richard B.; Heifetz, Alexander
2017-02-21
A method and system for performing transient multivariable sensor evaluation. The method and system includes a computer system for identifying a model form, providing training measurement data, generating a basis vector, monitoring system data from sensor, loading the system data in a non-transient memory, performing an estimation to provide desired data and comparing the system data to the desired data and outputting an alarm for a defective sensor.
Sobolev, Stephan V.; Muldashev, Iskander A.
2017-12-01
Subduction is a substantially multiscale process in which stresses are built by long-term tectonic motions, modified by sudden jerky deformations during earthquakes, and then restored by subsequent multiple relaxation processes. Here we develop a cross-scale thermomechanical model aimed at simulating the subduction process from a 1-minute to a million-year time scale. The model employs elasticity, nonlinear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time step algorithm, recreates the deformation process as observed naturally during the seismic cycle and multiple seismic cycles. The model predicts that viscosity in the mantle wedge drops by more than three orders of magnitude during a great earthquake with a magnitude above 9. As a result, the surface velocities just an hour or day after the earthquake are controlled by viscoelastic relaxation in the several hundred km of mantle landward of the trench and not by the afterslip localized at the fault as is currently believed. Our model replicates centuries-long seismic cycles exhibited by the greatest earthquakes and is consistent with the postseismic surface displacements recorded after the Great Tohoku Earthquake. We demonstrate that there is no contradiction between the extremely low mechanical coupling at the subduction megathrust in South Chile inferred from long-term geodynamic models and the occurrence of the largest earthquakes, like the Great Chile 1960 Earthquake.
Historical earthquake research in Austria
Hammerl, Christa
2017-12-01
Austria has moderate seismicity, and on average the population feels 40 earthquakes per year, or approximately three earthquakes per month. A severe earthquake with light building damage is expected roughly every 2 to 3 years in Austria. Severe damage to buildings (I₀ > 8° EMS) occurs significantly less frequently; the average period of recurrence is about 75 years. For this reason, historical earthquake research has been of special importance in Austria. The interest in historical earthquakes in the Austro-Hungarian Empire is outlined, beginning with an initiative of the Austrian Academy of Sciences and the development of historical earthquake research as an independent research field after the 1978 "Zwentendorf plebiscite" on whether the nuclear power plant would start up. The applied methods are introduced briefly along with the most important studies, and, as an example of a recently completed case study, one of the strongest earthquakes in Austria's past, the earthquake of 17 July 1670, is presented. Research into historical earthquakes in Austria concentrates on seismic events of the pre-instrumental period. The investigations are not only of historical interest, but also contribute to the completeness and correctness of the Austrian earthquake catalogue, which is the basis for seismic hazard analysis and as such benefits the public, communities, civil engineers, architects, civil protection, and many others.
Earthquake hazard evaluation for Switzerland
International Nuclear Information System (INIS)
Ruettener, E.
1995-01-01
Earthquake hazard analysis is of considerable importance for Switzerland, a country with moderate seismic activity but high economic values at risk. The evaluation of earthquake hazard, i.e. the determination of return periods versus ground motion parameters, requires a description of earthquake occurrences in space and time. In this study the seismic hazard for major cities in Switzerland is determined. The seismic hazard analysis is based on historic earthquake records as well as instrumental data. The historic earthquake data show considerable uncertainties concerning epicenter location and epicentral intensity. A specific concept is required, therefore, which permits the description of the uncertainties of each individual earthquake. This is achieved by probability distributions for earthquake size and location. Historical considerations, which indicate changes in public earthquake awareness at various times (mainly due to large historical earthquakes), as well as statistical tests have been used to identify time periods of complete earthquake reporting as a function of intensity. As a result, the catalog is judged to be complete since 1878 for all earthquakes with epicentral intensities greater than IV, since 1750 for intensities greater than VI, since 1600 for intensities greater than VIII, and since 1300 for intensities greater than IX. Instrumental data provide accurate information about the depth distribution of earthquakes in Switzerland. In the Alps, focal depths are restricted to the uppermost 15 km of the crust, whereas below the northern Alpine foreland earthquakes are distributed throughout the entire crust (30 km). This depth distribution is considered in the final hazard analysis by probability distributions. (author) figs., tabs., refs
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must verify certain properties, such as invariance with Euclidean transformations or invariance with parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature for planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meet this condition as well as invariance with rotation and parameterization.
Physical model of dimensional regularization
Energy Technology Data Exchange (ETDEWEB)
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
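A much-simplified sketch of the ingredients named in this abstract, under loudly stated assumptions: a least-squares loss stands in for the paper's exact objective, the linear classifier carries an L2 complexity penalty, and the mutual information between classification responses and true labels is computed as a plug-in entropy estimate over the learned responses (measured as a diagnostic here, rather than maximized directly as in the paper).

```python
import numpy as np

def empirical_mi(pred, label):
    """Plug-in mutual information (nats) between binary predictions and
    binary labels, via entropies estimated from empirical frequencies."""
    def H(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    joint = np.histogram2d(pred, label, bins=2)[0]
    joint /= joint.sum()
    return H(joint.sum(axis=0)) + H(joint.sum(axis=1)) - H(joint.ravel())

def train_linear(X, y, lam=0.1, lr=0.1, epochs=200):
    """Least-squares linear classifier with an L2 complexity penalty,
    trained by gradient descent; labels y are in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y) + lam * w
        w -= lr * grad
    return w
```

On well-separated data, the learned classifier's responses should carry nearly all of the label information, i.e. the estimated mutual information approaches its maximum of ln 2 for balanced binary labels.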
Valizadeh Alvan, H.; Mansor, S.; Haydari Azad, F.
2012-12-01
The possibility of earthquake prediction in the frame of several days to a few minutes before occurrence has recently stirred interest among researchers. Scientists believe that new theories and explanations of the mechanism of this natural phenomenon are trustworthy and can be the basis of future prediction efforts. During the last thirty years, experimental research has identified some pre-earthquake events which are now recognized as confirmed warning signs (precursors) of past known earthquakes. With advances in in-situ measurement devices and data analysis capabilities and the emergence of satellite-based data collectors, monitoring the earth's surface is now routine work. Data providers are supplying researchers from all over the world with high-quality, validated imagery and non-imagery data. Surface Latent Heat Flux (SLHF), the amount of energy exchanged in the form of water vapor between the earth's surface and the atmosphere, has been frequently reported as an earthquake precursor in past years. The stress accumulated in the earth's crust during the preparation phase of earthquakes is said to be the main cause of temperature anomalies weeks to days before the main event and subsequent shakes. Chemical and physical interactions in the presence of underground water lead to higher water evaporation prior to inland earthquakes. On the other hand, the leak of radon gas, which occurs as rocks break during earthquake preparation, causes the formation of airborne ions and higher Air Temperature (AT) prior to the main event. Although co-analysis of direct and indirect observations of precursory events is considered a promising method for future successful earthquake prediction, without proper and thorough knowledge of the geological setting, atmospheric factors, and geodynamics of earthquake-prone regions, we will not be able to identify anomalies due to seismic activity in the earth's crust. Active faulting is a key factor in identification of the …
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a …
Regularized strings with extrinsic curvature
International Nuclear Information System (INIS)
Ambjoern, J.; Durhuus, B.
1987-07-01
We analyze models of discretized string theories, where the path integral over world sheet variables is regularized by summing over triangulated surfaces. The inclusion of curvature in the action is a necessity for the scaling of the string tension. We discuss the physical properties of models with extrinsic curvature terms in the action and show that the string tension vanishes at the critical point where the bare extrinsic curvature coupling tends to infinity. Similar results are derived for models with intrinsic curvature. (orig.)
Circuit complexity of regular languages
Czech Academy of Sciences Publication Activity Database
Koucký, Michal
2009-01-01
Vol. 45, No. 4 (2009), pp. 865-879. ISSN 1432-4350. R&D Projects: GA ČR GP201/07/P276; GA MŠk (CZ) 1M0545. Institutional research plan: CEZ:AV0Z10190503. Keywords: regular languages; circuit complexity; upper and lower bounds. Subject RIV: BA - General Mathematics. Impact factor: 0.726, year: 2009
Identified EM Earthquake Precursors
Jones, Kenneth, II; Saxton, Patrick
2014-05-01
Many attempts have been made to determine a sound forecasting method for earthquakes and, in turn, to warn the public. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and the investigating direction of EM. After a number of custom rock experiments, two hypotheses were formed which could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate for creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude, and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field occurring with associated phases using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013. The antennae have mobility, and observations were noted for …
Main regularities of radiolytic transformations of bifunctional organic compounds
International Nuclear Information System (INIS)
Petryaev, E.P.; Shadyro, O.I.
1985-01-01
General regularities of the radiolysis of bifunctional organic compounds (α-diols, ethers of α-diols, amino alcohols, hydroxy aldehydes and hydroxy acids) in aqueous solutions are traced from the early stages of the process to the formation of final products. It is pointed out that the most characteristic course of radiation-chemical transformation of bifunctional compounds in aqueous solutions is the fragmentation process, with monomolecular decomposition of the primary radicals of the initial substances and simultaneous scission of two bonds vicinal to the radical centre via a five-membered cyclic transition state. The data obtained are of importance for molecular radiobiology.
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Geophysical Anomalies and Earthquake Prediction
Jackson, D. D.
2008-12-01
Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical anomalies not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power-law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power-law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require …
Directory of Open Access Journals (Sweden)
Angeletti Chiara
2012-06-01
Introduction: On 6 April 2009, at 03:32 local time, an Mw 6.3 earthquake hit the Abruzzi region of central Italy, causing widespread damage in the city of L'Aquila and its nearby villages. The earthquake caused 308 casualties and over 1,500 injuries, displaced more than 25,000 people, and induced significant damage to more than 10,000 buildings in the L'Aquila region. Objectives: This observational retrospective study evaluated the prevalence and drug treatment of pain in the five weeks following the L'Aquila earthquake (April 6, 2009). Methods: 958 triage documents were analysed for patients' pain severity, pain type, and treatment efficacy. Results: A third of patients reported pain, a prevalence of 34.6%. More than half of pain patients reported severe pain (58.8%). Analgesic agents were limited to available drugs: anti-inflammatory agents, paracetamol, and weak opioids. Reduction in verbal numerical pain scores within the first 24 hours after treatment was achieved with the medications at hand. Pain prevalence and characterization exhibited a biphasic pattern, with acute pain syndromes owing to trauma occurring in the first 15 days after the earthquake; traumatic pain then decreased and resurged at around week five, owing to rebuilding efforts. In the second through fourth week, reports of pain occurred mainly owing to relapses of chronic conditions. Conclusions: This study indicates that pain is prevalent during natural disasters, may exhibit a discernible pattern over the weeks following the event, and current drug treatments in this region may be adequate for emergency situations.
Episodic slow slip events in the Japan subduction zone before the 2011 Tohoku-Oki earthquake
Ito, Yoshihiro; Hino, Ryota; Kido, Motoyuki; Fujimoto, Hiromi; Osada, Yukihito; Inazu, Daisuke; Ohta, Yusaku; Iinuma, Takeshi; Ohzono, Mako; Miura, Satoshi; Mishina, Masaaki; Suzuki, Kensuke; Tsuji, Takeshi; Ashi, Juichiro
2013-07-01
We describe two transient slow slip events that occurred before the 2011 Tohoku-Oki earthquake. The first transient crustal deformation, which occurred over a period of a week in November 2008, was recorded simultaneously using ocean-bottom pressure gauges and an on-shore volumetric strainmeter; this deformation has been interpreted as an M6.8 episodic slow slip event. The second had a duration exceeding 1 month and was observed in February 2011, just before the 2011 Tohoku-Oki earthquake; the moment magnitude of this event reached 7.0. The two events preceded interplate earthquakes of magnitudes M6.1 (December 2008) and M7.3 (March 9, 2011), respectively; the latter is the largest foreshock of the 2011 Tohoku-Oki earthquake. Our findings indicate that these slow slip events induced increases in shear stress, which in turn triggered the interplate earthquakes. The slow slip event source area on the fault is also located within the downdip portion of the huge coseismic-slip area of the 2011 earthquake. This demonstrates episodic slow slip and seismic behavior occurring on the same portions of the megathrust fault, suggesting that fault portions that undergo slip in slow slip events can also rupture seismically.
Fault lubrication during earthquakes.
Di Toro, G; Han, R; Hirose, T; De Paola, N; Nielsen, S; Mizoguchi, K; Ferri, F; Cocco, M; Shimamoto, T
2011-03-24
The determination of rock friction at seismic slip rates (about 1 m s⁻¹) is of paramount importance in earthquake mechanics, as fault friction controls the stress drop, the mechanical work and the frictional heat generated during slip. Given the difficulty of determining friction by seismological methods, constraints are instead derived from experimental studies. Here we review a large set of published and unpublished experiments (∼300) performed in rotary shear apparatus at slip rates of 0.1-2.6 m s⁻¹. The experiments indicate a significant decrease in friction (of up to one order of magnitude), which we term fault lubrication, both for cohesive (silicate-built, quartz-built and carbonate-built) rocks and non-cohesive rocks (clay-rich, anhydrite, gypsum and dolomite gouges) typical of crustal seismogenic sources. The available mechanical work and the associated temperature rise in the slipping zone trigger a number of physicochemical processes (gelification, decarbonation and dehydration reactions, melting and so on) whose products are responsible for fault lubrication. The similarity between (1) experimental and natural fault products and (2) mechanical work measures resulting from these laboratory experiments and seismological estimates suggests that it is reasonable to extrapolate experimental data to conditions typical of earthquake nucleation depths (7-15 km). It seems that faults are lubricated during earthquakes, irrespective of the fault rock composition and of the specific weakening mechanism involved.
Housing Damage Following Earthquake
1989-01-01
An automobile lies crushed under the third story of this apartment building in the Marina District after the Oct. 17, 1989, Loma Prieta earthquake. The ground levels are no longer visible because of structural failure and sinking due to liquefaction. Sand and soil grains have faces that can cause friction as they roll and slide against each other, or even cause sticking and form small voids between grains. This complex behavior can cause soil to behave like a liquid under certain conditions such as earthquakes or when powders are handled in industrial processes. Mechanics of Granular Materials (MGM) experiments aboard the Space Shuttle use the microgravity of space to simulate this behavior under conditions that cannot be achieved in laboratory tests on Earth. MGM is shedding light on the behavior of fine-grain materials under low effective stresses. Applications include earthquake engineering, granular flow technologies (such as powder feed systems for pharmaceuticals and fertilizers), and terrestrial and planetary geology. Nine MGM specimens have flown on two Space Shuttle flights. Another three are scheduled to fly on STS-107. The principal investigator is Stein Sture of the University of Colorado at Boulder. Credit: J.K. Nakata, U.S. Geological Survey.
Do Earthquakes Shake Stock Markets?
Ferreira, Susana; Karali, Berna
2015-01-01
This paper examines how major earthquakes affected the returns and volatility of aggregate stock market indices in thirty-five financial markets over the last twenty years. Results show that global financial markets are resilient to shocks caused by earthquakes even if these are domestic. Our analysis reveals that, in a few instances, some macroeconomic variables and earthquake characteristics (gross domestic product per capita, trade openness, bilateral trade flows, earthquake magnitude, a tsunami indicator, distance to the epicenter, and number of fatalities) mediate the impact of earthquakes on stock market returns, resulting in a zero net effect. However, the influence of these variables is market-specific, indicating no systematic pattern across global capital markets. Results also demonstrate that stock market volatility is unaffected by earthquakes, except for Japan.
Earthquake engineering for nuclear facilities
Kuno, Michiya
2017-01-01
This book is a comprehensive compilation of earthquake- and tsunami-related technologies and knowledge for the design and construction of nuclear facilities. As such, it covers a wide range of fields including civil engineering, architecture, geotechnical engineering, mechanical engineering, and nuclear engineering, for the development of new technologies providing greater resistance against earthquakes and tsunamis. It is crucial both for students of nuclear energy courses and for young engineers in nuclear power generation industries to understand the basics and principles of earthquake- and tsunami-resistant design of nuclear facilities. In Part I, "Seismic Design of Nuclear Power Plants", the design of nuclear power plants to withstand earthquakes and tsunamis is explained, focusing on buildings, equipment, and civil engineering structures. In Part II, "Basics of Earthquake Engineering", fundamental knowledge of earthquakes and tsunamis as well as the dynamic response of structures and foundation ground...
The significance of the structural regularity for the seismic response of buildings
International Nuclear Information System (INIS)
Hampe, E.; Goldbach, R.; Schwarz, J.
1991-01-01
The paper gives a state-of-the-art report on international design practice and presents fundamentals for a systematic approach to the solution of that problem. Different criteria of regularity are presented and discussed with respect to EUROCODE Nr. 8. Remaining open questions and the main topics of future research activities are also identified. Frame structures with or without additional stiffening wall elements are investigated to illustrate the qualitative differences in the vibrational properties and earthquake response of regular and irregular systems. (orig./HP) [de
Desherevskii, A. V.; Sidorin, A. Ya.
2017-12-01
Due to the initiation of the Hellenic Unified Seismic Network (HUSN) in late 2007, the quality of observation had significantly improved by 2011. For example, the representative magnitude level has decreased considerably and the number of annually recorded events has increased. The new observational system greatly expanded the possibilities for studying regularities in seismicity. In view of this, the authors revisited their studies of the diurnal periodicity of representative earthquakes in Greece that was revealed earlier in the earthquake catalog before 2011. We use 18 samples of earthquakes of different magnitudes taken from the catalog of Greek earthquakes from 2011 to June 2016, derive a series of the number of earthquakes for each of them, and calculate its average diurnal course. To increase the reliability of the results, we compared the data for two regions. With a high degree of statistical significance, we find that no diurnal periodicity is present for representative earthquakes. This finding differs from the estimates obtained earlier from an analysis of the catalog of earthquakes in the same area for 1995-2004 and 2005-2010, i.e., before the initiation of the Hellenic Unified Seismic Network. The new results are consistent with the hypothesis of noise discrimination (observational selection), which explains the diurnal variation of earthquakes by the different sensitivity of the seismic network in daytime and nighttime periods.
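The diurnal-course computation described in this abstract amounts to binning catalog origin times by hour of day and testing the counts against a flat expectation. A minimal sketch, using a synthetic uniform catalog in place of the HUSN data (all numbers are illustrative):

```python
import random

def diurnal_course(event_hours, n_bins=24):
    """Count catalog events in each hour-of-day bin."""
    counts = [0] * n_bins
    for h in event_hours:
        counts[int(h) % n_bins] += 1
    return counts

def chi2_uniform(counts):
    """Chi-square statistic of the counts against a flat (no-periodicity) model."""
    expected = sum(counts) / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Synthetic catalog with uniformly distributed origin hours, i.e. the
# no-diurnal-periodicity case reported for the post-2011 HUSN data:
random.seed(0)
hours = [random.uniform(0.0, 24.0) for _ in range(5000)]
counts = diurnal_course(hours)
stat = chi2_uniform(counts)  # compare against the chi-square(23) critical value
```

A catalog with genuine diurnal periodicity (or daytime noise discrimination) would push the statistic well above the 95% critical value of roughly 35.2 for 23 degrees of freedom.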
The failure of earthquake failure models
Gomberg, J.
2001-01-01
In this study I show that simple heuristic models and numerical calculations suggest that an entire class of commonly invoked models of earthquake failure processes cannot explain triggering of seismicity by transient or "dynamic" stress changes, such as stress changes associated with passing seismic waves. The models of this class have the common feature that the physical property characterizing failure increases at an accelerating rate when a fault is loaded (stressed) at a constant rate. Examples include models that invoke rate state friction or subcritical crack growth, in which the properties characterizing failure are slip or crack length, respectively. Failure occurs when the rate at which these grow accelerates to values exceeding some critical threshold. These accelerating failure models do not predict the finite durations of dynamically triggered earthquake sequences (e.g., at aftershock or remote distances). Some of the failure models belonging to this class have been used to explain static stress triggering of aftershocks. This may imply that the physical processes underlying dynamic triggering differ from those underlying static triggering, or that currently applied models of static triggering require modification. If the former is the case, we might appeal to physical mechanisms relying on oscillatory deformations such as compaction of saturated fault gouge leading to pore pressure increase, or cyclic fatigue. However, if dynamic and static triggering mechanisms differ, one still needs to ask why static triggering models that neglect these dynamic mechanisms appear to explain many observations. If the static and dynamic triggering mechanisms are the same, perhaps assumptions about accelerating failure and/or that triggering advances the failure times of a population of inevitable earthquakes are incorrect.
Revealing the cluster of slow transients behind a large slow slip event.
Frank, William B; Rousset, Baptiste; Lasserre, Cécile; Campillo, Michel
2018-05-01
Capable of reaching similar magnitudes to large megathrust earthquakes [ M w (moment magnitude) > 7], slow slip events play a major role in accommodating tectonic motion on plate boundaries through predominantly aseismic rupture. We demonstrate here that large slow slip events are a cluster of short-duration slow transients. Using a dense catalog of low-frequency earthquakes as a guide, we investigate the M w 7.5 slow slip event that occurred in 2006 along the subduction interface 40 km beneath Guerrero, Mexico. We show that while the long-period surface displacement, as recorded by Global Positioning System, suggests a 6-month duration, the motion in the direction of tectonic release only sporadically occurs over 55 days, and its surface signature is attenuated by rapid relocking of the plate interface. Our proposed description of slow slip as a cluster of slow transients forces us to re-evaluate our understanding of the physics and scaling of slow earthquakes.
Tacina, R. R.
1984-01-01
Non-steady combustion problems can result from engine sources such as accelerations, decelerations, nozzle adjustments, augmentor ignition, and air perturbations into and out of the compressor. Non-steady combustion can also be generated internally by combustion instability or self-induced oscillations. A premixed-prevaporized combustor would be particularly sensitive to flow transients because of its susceptibility to flashback-autoignition and blowout. An experimental program, the Transient Flow Combustion Study, is in progress to study the effects of air and fuel flow transients on a premixed-prevaporized combustor. Preliminary tests were performed at an inlet air temperature of 600 K, a reference velocity of 30 m/s, and a pressure of 700 kPa. The airflow was reduced to 1/3 of its original value in a 40 ms ramp before flashback occurred. Ramping the airflow up has shown that blowout is more sensitive than flashback to flow transients. Blowout occurred with a 25 percent increase in airflow (at a constant fuel-air ratio) in a 20 ms ramp. Combustion resonance was found at some conditions and may be important in determining the effects of flow transients.
Earthquake resistant design of structures
International Nuclear Information System (INIS)
Choi, Chang Geun; Kim, Gyu Seok; Lee, Dong Geun
1990-02-01
This book covers the occurrence of earthquakes and earthquake damage analysis, the equivalent static analysis method and its application, dynamic analysis methods such as time-history analysis by mode superposition and by direct integration, and design-spectrum analysis for earthquake-resistant design in Korea, including analysis models and vibration modes, calculation of base shear, calculation of story seismic loads, and combination of analysis results.
,
1997-01-01
The severity of an earthquake can be expressed in terms of both intensity and magnitude. However, the two terms are quite different, and they are often confused. Intensity is based on the observed effects of ground shaking on people, buildings, and natural features. It varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter. Magnitude is related to the amount of seismic energy released at the hypocenter of the earthquake. It is based on the amplitude of the earthquake waves recorded on instruments
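The magnitude-energy connection mentioned above is commonly expressed through the Gutenberg-Richter energy relation log10(E) = 1.5*M + 4.8 (E in joules), under which one magnitude unit corresponds to roughly a 31.6-fold increase in radiated energy. A small sketch:

```python
def seismic_energy_joules(magnitude):
    """Gutenberg-Richter energy relation: log10(E) = 1.5 * M + 4.8, E in joules."""
    return 10.0 ** (1.5 * magnitude + 4.8)

ratio = seismic_energy_joules(7.0) / seismic_energy_joules(6.0)
# One magnitude unit corresponds to about a 31.6-fold increase in radiated energy.
```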
Self-organization of spatio-temporal earthquake clusters
Directory of Open Access Journals (Sweden)
S. Hainzl
2000-01-01
Cellular automaton versions of the Burridge-Knopoff model have been shown to reproduce the power law distribution of event sizes, that is, the Gutenberg-Richter law. However, they have failed to reproduce the occurrence of foreshock and aftershock sequences correlated with large earthquakes. We show that in the case of partial stress recovery due to transient creep occurring subsequently to earthquakes in the crust, such spring-block systems self-organize into a statistically stationary state characterized by a power law distribution of fracture sizes as well as by foreshocks and aftershocks accompanying large events. In particular, the increase of foreshock and the decrease of aftershock activity can be described by the same Omori law, apart from a prefactor. The exponent of the Omori law depends on the relaxation time and on the spatial scale of transient creep. Further investigations concerning the number of aftershocks, the temporal variation of aftershock magnitudes, and the waiting time distribution support the conclusion that this model, even if "more realistic" physics is missing, captures in some ways the origin of the size distribution as well as the spatio-temporal clustering of earthquakes.
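The Omori law invoked in this abstract describes the power-law decay of aftershock rate with time since the mainshock; a minimal sketch of the modified Omori form n(t) = K/(c + t)^p, with illustrative parameter values:

```python
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Modified Omori law: event rate K / (c + t)**p at time t after a
    mainshock (parameter values here are illustrative). The foreshock
    version discussed in the abstract uses the same form with time
    measured backwards from the mainshock."""
    return K / (c + t_days) ** p

r1, r10 = omori_rate(1.0), omori_rate(10.0)  # the rate decays roughly tenfold per decade in time
```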
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool for tackling inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based more on convention than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors (1/5) by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors (4/5) by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN (Bldg. 222-R-001 - Filtration Plant)
International Nuclear Information System (INIS)
Dawes, W.R. Jr.; Fischer, T.A.; Huang, C.C.C.; Meyer, W.J.; Smith, C.S.; Blanchard, R.A.; Fortier, T.J.
1986-01-01
N-channel power FETs offer significant advantages in power conditioning circuits. As with all MOS technologies, power FET devices are vulnerable to ionizing radiation, and are particularly susceptible to burn-out in high-dose-rate irradiations (>1E10 rads(Si)/sec.), which precludes their use in many military environments. This paper summarizes the physical mechanisms responsible for burn-out and discusses various fabrication techniques designed to improve the transient hardness of power FETs. Power FET devices were fabricated with several of these techniques, and data are presented which demonstrate that transient hardness levels in excess of 1E12 rads(Si)/sec. are easily achievable
International Nuclear Information System (INIS)
Cooke, C.M.; Frick, G.; Roumie, M.
1993-01-01
Electrical measurements are presented for the construction of a model for the study of transients in the Vivitron. Observation of the transmission of electrical pulses in the porticos clearly shows transmission-line behaviour. Measurements of the vector impedance of the outer porticos show the same transmission-line properties, but also give a description of the modification from a pure transmission line due to the circular electrodes. The results of this investigation should allow the construction of a computer model which predicts the evolution of the transients in the case of a spark in the Vivitron. (orig.)
Satellite Infrared Radiation Measurements Prior to the Major Earthquakes
Ouzounov, Dimitar; Pulintes, S.; Bryant, N.; Taylor, Patrick; Freund, F.
2005-01-01
This work describes our search for a relationship between tectonic stresses and increases in mid-infrared (IR) flux as part of a possible ensemble of electromagnetic (EM) phenomena that may be related to earthquake activity. We present and discuss observed variations in thermal transients and radiation fields prior to the earthquakes of Jan 22, 2003 Colima (M6.7) Mexico, Sept. 28, 2004 near Parkfield (M6.0) in California, and Dec. 26, 2004 Northern Sumatra (M8.5). Previous analysis of earthquake events has indicated the presence of an IR anomaly, where temperatures increased or did not return to their usual nighttime values. Our procedures analyze nighttime satellite data that record the general condition of the ground after sunset. We have found from the MODIS instrument data that five days before the Colima earthquake the IR land surface nighttime temperature rose up to +4 degrees C in a 100 km radius around the epicenter. The IR transient field recorded by MODIS in the vicinity of Parkfield, also with a cloud-free environment, was around +1 degree C and is significantly smaller than the IR anomaly around the Colima epicenter. Ground surface temperatures near the Parkfield epicenter four days prior to the earthquake show a steady increase. However, on the night preceding the quake, a significant drop in relative humidity was indicated, a process similar to that registered prior to the Colima event. Recent analyses of continuous outgoing long-wavelength Earth radiation (OLR) indicate significant and anomalous variability prior to some earthquakes. The cause of these anomalies is not well understood but could be the result of triggering by an interaction between the lithosphere-hydrosphere and atmosphere related to changes in the near-surface electrical field and/or gas composition prior to the earthquake. The OLR anomaly usually covers large areas surrounding the main epicenter. We have found strong anomalous signals (two sigma) in the epicentral area on Dec 21
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
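A minimal sketch of the idea behind RES on a least-squares objective with noiseless synthetic data: stochastic gradients drive both the descent direction and the curvature update, the gradient difference is taken on the same minibatch, and a delta*I term keeps the Hessian approximation safely invertible. This simplified update omits the paper's full regularized BFGS correction, so it illustrates the approach rather than reproducing the published algorithm:

```python
import numpy as np

def res_sketch(X, y, n_iter=300, eps=0.1, delta=1e-3, batch=20, seed=0):
    """Regularized stochastic BFGS sketch for min_w ||Xw - y||^2 / (2n)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    B = np.eye(d)                          # Hessian approximation

    def grad(wv, idx):
        Xi, yi = X[idx], y[idx]
        return Xi.T @ (Xi @ wv - yi) / len(idx)

    for _ in range(n_iter):
        idx = rng.integers(0, n, size=batch)
        g = grad(w, idx)
        step = np.linalg.solve(B + delta * np.eye(d), g)   # regularized quasi-Newton step
        w_new = w - eps * step
        s = w_new - w
        yv = grad(w_new, idx) - g          # same minibatch for the gradient difference
        sy = s @ yv
        if sy > 1e-10:                     # curvature condition
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(yv, yv) / sy
        w = w_new
    return w

# Noiseless synthetic least-squares problem (illustrative):
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = res_sketch(X, y)
```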
Regularized Label Relaxation Linear Regression.
Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu
2018-04-01
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, based respectively on two different norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
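The label relaxation construction described above (relaxing the strict target matrix with a nonnegative slack matrix) can be sketched as follows. For brevity the class-compactness-graph regularizer is replaced by a plain Frobenius-norm (ridge) penalty, so this is a simplified illustration, not the paper's full method:

```python
import numpy as np

def label_relax_lr(X, labels, lam=0.1, n_iter=10):
    """Label-relaxation linear regression sketch.

    The strict +1/-1 target matrix T is relaxed to T + B * M, where B holds
    the dragging directions and M >= 0 is the nonnegative slack matrix; the
    weights W and the slack M are updated alternately, each in closed form."""
    n, d = X.shape
    classes = sorted(set(labels))
    k = len(classes)
    T = -np.ones((n, k))
    for i, c in enumerate(labels):
        T[i, classes.index(c)] = 1.0
    Bsign = np.where(T > 0, 1.0, -1.0)                   # dragging directions
    M = np.zeros((n, k))
    A = np.linalg.inv(X.T @ X + lam * np.eye(d)) @ X.T   # ridge projector
    for _ in range(n_iter):
        R = T + Bsign * M                                # relaxed targets
        W = A @ R                                        # closed-form regression step
        M = np.maximum(Bsign * (X @ W - T), 0.0)         # nonnegative slack step
    return W, classes

def predict(W, classes, X):
    return [classes[int(j)] for j in np.argmax(X @ W, axis=1)]

# Two well-separated Gaussian blobs (illustrative data):
rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((50, 2)) + 2.0,
               rng.standard_normal((50, 2)) - 2.0])
labels = [0] * 50 + [1] * 50
W, classes = label_relax_lr(X, labels)
acc = np.mean([p == t for p, t in zip(predict(W, classes, X), labels)])
```

The slack update only pushes targets outward (correct-class scores above +1, wrong-class scores below -1), which is what enlarges the between-class margins.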
Cascading elastic perturbation in Japan due to the 2012 M w 8.6 Indian Ocean earthquake.
Delorey, Andrew A; Chao, Kevin; Obara, Kazushige; Johnson, Paul A
2015-10-01
Since the discovery of extensive earthquake triggering occurring in response to the 1992 M w (moment magnitude) 7.3 Landers earthquake, it is now well established that seismic waves from earthquakes can trigger other earthquakes, tremor, slow slip, and pore pressure changes. Our contention is that earthquake triggering is one manifestation of a more widespread elastic disturbance that reveals information about Earth's stress state. Earth's stress state is central to our understanding of both natural and anthropogenic-induced crustal processes. We show that seismic waves from distant earthquakes may perturb stresses and frictional properties on faults and elastic moduli of the crust in cascading fashion. Transient dynamic stresses place crustal material into a metastable state during which the material recovers through a process termed slow dynamics. This observation of widespread, dynamically induced elastic perturbation, including systematic migration of offshore seismicity, strain transients, and velocity transients, presents a new characterization of Earth's elastic system that will advance our understanding of plate tectonics, seismicity, and seismic hazards.
Foreshocks, aftershocks, and earthquake probabilities: Accounting for the landers earthquake
Jones, Lucile M.
1994-01-01
The equation to determine the probability that an earthquake occurring near a major fault will be a foreshock to a mainshock on that fault is modified to include the case of aftershocks to a previous earthquake occurring near the fault. The addition of aftershocks to the background seismicity makes it less probable that an earthquake will be a foreshock, because nonforeshocks have become more common. As the aftershocks decay with time, the probability that an earthquake will be a foreshock increases. However, fault interactions between the first mainshock and the major fault can increase the long-term probability of a characteristic earthquake on that fault, which will, in turn, increase the probability that an event is a foreshock, compensating for the decrease caused by the aftershocks.
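The modification described in this abstract can be illustrated by adding an Omori-decaying aftershock rate to the background seismicity in the foreshock-probability ratio. All rates and parameter values below are illustrative placeholders, not the paper's values:

```python
def foreshock_probability(t_days, rate_fore=0.01, rate_back=0.05,
                          K=10.0, c=0.1, p=1.1):
    """Chance that a new event near the fault is a foreshock when aftershocks
    of a previous mainshock (Omori decay K / (c + t)**p) temporarily inflate
    the non-foreshock seismicity."""
    rate_after = K / (c + t_days) ** p
    return rate_fore / (rate_fore + rate_back + rate_after)

p_early = foreshock_probability(1.0)     # soon after the previous mainshock
p_late = foreshock_probability(1000.0)   # after the aftershocks have decayed
# p_early < p_late: added aftershocks make a foreshock interpretation less likely.
```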
Gambling scores for earthquake predictions and forecasts
Zhuang, Jiancang
2010-04-01
This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. The method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
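Under the fair rule sketched above, betting one reputation point on an event to which the reference model assigns probability p_ref should return (1 - p_ref)/p_ref on success, so that the expected gain is zero when the reference model is true. A minimal sketch of this bookkeeping (the continuous point-process extension is omitted):

```python
def gambling_score(bets):
    """Gambling score with a fair rule against a reference model.

    Each bet is (p_ref, occurred): the reference model's probability for the
    target event and whether it happened. Staking one reputation point, the
    forecaster gains (1 - p_ref) / p_ref on success and loses the point on
    failure, so the expected score is zero if the reference model is true."""
    return sum((1.0 - p) / p if occurred else -1.0 for p, occurred in bets)

# Correctly calling two rare events (p_ref = 0.1) and missing a third
# yields 9 + 9 - 1 = 17 reputation points:
s = gambling_score([(0.1, True), (0.1, True), (0.1, False)])
```

Because the payoff grows as p_ref shrinks, successfully predicting events the reference model considers unlikely is rewarded far more than predicting near-certain ones.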
Navigating Earthquake Physics with High-Resolution Array Back-Projection
Meng, Lingsen
Understanding earthquake source dynamics is a fundamental goal of geophysics. Progress toward this goal has been slow due to the gap between state-of-the-art earthquake simulations and the limited source imaging techniques based on conventional low-frequency finite fault inversions. Seismic array processing is an alternative source imaging technique that employs the higher frequency content of the earthquakes and provides finer detail of the source process with few prior assumptions. While back-projection provides key observations of previous large earthquakes, the standard beamforming back-projection suffers from low resolution and severe artifacts. This thesis introduces the MUSIC technique, a high-resolution array processing method that aims to narrow the gap between the seismic observations and earthquake simulations. MUSIC is a high-resolution method taking advantage of higher-order signal statistics. The method has not been widely used in seismology yet because of the nonstationary and incoherent nature of the seismic signal. We adapt MUSIC to transient seismic signals by incorporating multitaper cross-spectrum estimates. We also adopt a "reference window" strategy that mitigates the "swimming artifact," a systematic drift effect in back-projection. The improved MUSIC back-projections allow the imaging of recent large earthquakes in finer detail, which gives rise to new perspectives on dynamic simulations. In the 2011 Tohoku-Oki earthquake, we observe frequency-dependent rupture behaviors which relate to the material variation along the dip of the subduction interface. In the 2012 off-Sumatra earthquake, we image the complicated ruptures involving an orthogonal fault system and an unusual branching direction. This result, along with our complementary dynamic simulations, probes the pressure-insensitive strength of the deep oceanic lithosphere. In another example, back-projection is applied to the 2010 M7 Haiti earthquake recorded at regional distance. The
The search for Infrared radiation prior to major earthquakes
Ouzounov, D.; Taylor, P.; Pulinets, S.
2004-12-01
This work describes our search for a relationship between tectonic stresses, electro-chemical and thermodynamic processes in the Earth, and increases in mid-IR flux as part of a possible ensemble of electromagnetic (EM) phenomena that may be related to earthquake activity. Recent analysis of continuous outgoing long-wavelength Earth radiation (OLR) indicates significant and anomalous variability prior to some earthquakes. The cause of these anomalies is not well understood but could be the result of triggering by an interaction between the lithosphere-hydrosphere and atmosphere related to changes in the near-surface electrical field and gas composition prior to the earthquake. The OLR anomaly covers large areas surrounding the main epicenter. We have used the NOAA IR data to differentiate between the global and seasonal variability and these transient local anomalies. Indeed, on the basis of a temporal and spatial distribution analysis, an anomaly pattern is found to occur several days prior to some major earthquakes. The significance of these observations was explored using data sets of some recent worldwide events.
On the agreement between small-world-like OFC model and real earthquakes
Energy Technology Data Exchange (ETDEWEB)
Ferreira, Douglas S.R., E-mail: douglas.ferreira@ifrj.edu.br [Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro, Paracambi, RJ (Brazil); Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Papa, Andrés R.R., E-mail: papa@on.br [Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Instituto de Física, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, RJ (Brazil); Menezes, Ronaldo, E-mail: rmenezes@cs.fit.edu [BioComplex Laboratory, Computer Sciences, Florida Institute of Technology, Melbourne (United States)
2015-03-20
In this article we implemented simulations of the OFC model for earthquakes for two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we have studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we have also characterized the influence that the probability p produces on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each one. In our results, distributions arise belonging to a family of non-traditional distribution functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth is in a critical self-organized state and furthermore point towards temporal and spatial correlations between earthquakes in different places. - Highlights: • OFC model simulations for regular and small-world topologies. • For the small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a critical self-organized state for the Earth's crust. • Point towards temporal and spatial correlations between earthquakes in distant places.
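A compact sketch of the simulation setup described here: the Olami-Feder-Christensen (OFC) stick-slip automaton on a square lattice whose links are rewired with probability p to obtain a small-world topology. Parameters (lattice size, conservation level alpha, rewiring probability) are illustrative:

```python
import random

def build_neighbors(L, p_rewire, rng):
    """4-neighbour square lattice with periodic boundaries; each link end is
    rewired to a random site with probability p_rewire (small-world variant)."""
    N = L * L
    nbrs = []
    for i in range(N):
        x, y = i % L, i // L
        cand = [(x + 1) % L + y * L, (x - 1) % L + y * L,
                x + ((y + 1) % L) * L, x + ((y - 1) % L) * L]
        nbrs.append([rng.randrange(N) if rng.random() < p_rewire else j
                     for j in cand])
    return nbrs

def ofc_avalanches(L=16, alpha=0.2, p_rewire=0.1, n_events=500, seed=0):
    """OFC automaton: drive all sites uniformly until one reaches the
    threshold 1.0, then topple overloaded sites, passing alpha * stress to
    each of the 4 neighbours (alpha < 0.25 makes the dynamics dissipative).
    Returns the avalanche sizes (number of topplings per event)."""
    rng = random.Random(seed)
    nbrs = build_neighbors(L, p_rewire, rng)
    N = L * L
    stress = [rng.random() for _ in range(N)]
    sizes = []
    for _ in range(n_events):
        m = max(stress)
        k = stress.index(m)
        stress = [s + (1.0 - m) for s in stress]
        stress[k] = 1.0              # guard against floating-point undershoot
        size = 0
        while True:
            over = [i for i, s in enumerate(stress) if s >= 1.0]
            if not over:
                break
            for i in over:
                s = stress[i]
                stress[i] = 0.0
                size += 1
                for j in nbrs[i]:
                    stress[j] += alpha * s
            # neighbours pushed over the threshold topple in the next sweep
        sizes.append(size)
    return sizes

sizes = ofc_avalanches()
```

A histogram of `sizes` approximates the event-size distribution; varying `p_rewire` between 0 (regular lattice) and 1 (random graph) reproduces the topology comparison made in the article.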
On the agreement between small-world-like OFC model and real earthquakes
International Nuclear Information System (INIS)
Ferreira, Douglas S.R.; Papa, Andrés R.R.; Menezes, Ronaldo
2015-01-01
In this article we implemented simulations of the OFC earthquake model for two different topologies, regular and small-world, where in the latter the links are randomly rewired with probability p. In both topologies, we studied the distribution of time intervals between consecutive earthquakes and the border effects present in each one. In addition, we characterized the influence that the probability p has on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each one. In our results, distributions arise belonging to a family of non-traditional distribution functions, which agrees with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth's crust is in a self-organized critical state and furthermore point towards temporal and spatial correlations between earthquakes in distant places. - Highlights: • OFC model simulations for regular and small-world topologies. • For the small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a self-organized critical state for the Earth's crust. • Point towards temporal and spatial correlations between earthquakes in distant places.
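The OFC dynamics described above (uniform drive of the most loaded site to threshold, redistribution of load to neighbours, links rewired with probability p) can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation; the lattice size, dissipation level and rewiring probability below are assumed for demonstration.

```python
import random

def small_world_lattice(L, p, seed=0):
    """4-neighbour L x L square lattice with open borders; each bond is
    rewired to a random site with probability p (Watts-Strogatz-style)."""
    rng = random.Random(seed)
    n = L * L
    nbrs = [set() for _ in range(n)]
    bonds = []
    for x in range(L):
        for y in range(L):
            if x + 1 < L:
                bonds.append((x * L + y, (x + 1) * L + y))
            if y + 1 < L:
                bonds.append((x * L + y, x * L + y + 1))
    for a, b in bonds:
        if rng.random() < p:
            b = rng.randrange(n)          # rewire one end at random
        if a != b:
            nbrs[a].add(b)
            nbrs[b].add(a)
    return nbrs

def ofc_avalanche(F, nbrs, a=0.8, f_th=1.0):
    """One OFC event: drive the most loaded site to threshold, then relax.
    Each toppling site passes a fraction a < 1 of its load to its neighbours
    (a/degree each), keeping the dynamics dissipative at any degree."""
    imax = max(range(len(F)), key=F.__getitem__)
    drive = f_th - F[imax]
    for i in range(len(F)):
        F[i] += drive
    F[imax] = f_th                        # guard against float rounding
    active = [imax]
    size = 0
    while active:
        touched = set()
        for i in active:
            size += 1
            share = a * F[i] / len(nbrs[i]) if nbrs[i] else 0.0
            F[i] = 0.0
            for j in nbrs[i]:
                F[j] += share
                touched.add(j)
        active = [j for j in touched if F[j] >= f_th]
    return size

rng = random.Random(1)
nbrs = small_world_lattice(12, p=0.1)
F = [0.9 * rng.random() for _ in range(144)]
sizes = [ofc_avalanche(F, nbrs) for _ in range(1500)]
# sizes holds the avalanche-size sequence; the inter-event statistics studied
# in the article would be built from sequences like this one.
```

The avalanche-size and epicenter sequences produced this way are the raw material for the interval distributions and epicenter networks analyzed in the article.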
DEFF Research Database (Denmark)
Rode, Carsten
1998-01-01
Analytical theory of transient heat conduction. Fourier's law. General heat conduction equation. Thermal diffusivity. Biot and Fourier numbers. Lumped analysis and time constant. Semi-infinite body: fixed surface temperature, convective heat transfer at the surface, or constant surface heat flux...
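The lumped-analysis chain listed above (Biot number check, time constant, exponential decay) can be illustrated numerically. The material values below are assumed, steel-like numbers chosen only so the Biot criterion comes out small:

```python
import math

def biot_number(h, L_c, k):
    """Bi = h * L_c / k; lumped analysis is conventionally trusted for Bi < 0.1."""
    return h * L_c / k

def lumped_temperature(t, T0, T_inf, tau):
    """Lumped-capacitance solution T(t) = T_inf + (T0 - T_inf) * exp(-t / tau)."""
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

# Illustrative (assumed) numbers: a small steel-like sphere quenched in oil.
r = 0.005                                  # sphere radius, m
V, A = 4 / 3 * math.pi * r**3, 4 * math.pi * r**2
k, rho, c = 40.0, 7800.0, 470.0            # W/(m K), kg/m^3, J/(kg K)
h = 400.0                                  # convective coefficient, W/(m^2 K)

Bi = biot_number(h, V / A, k)              # characteristic length L_c = V/A = r/3
tau = rho * c * V / (h * A)                # time constant, s
T_at_tau = lumped_temperature(tau, 300.0, 50.0, tau)
# At t = tau the temperature excess has fallen to 1/e of its initial value.
```

When Bi exceeds about 0.1, internal gradients matter and the semi-infinite-body or full conduction-equation solutions listed above take over from the lumped model.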
Transient cavitation in pipelines
Kranenburg, C.
1974-01-01
The aim of the present study is to set up a one-dimensional mathematical model, which describes the transient flow in pipelines, taking into account the influence of cavitation and free gas. The flow will be conceived of as a three-phase flow of the liquid, its vapour and non-condensible gas. The
Defence against earthquakes: a red thread of history
International Nuclear Information System (INIS)
Guidoboni, Emanuela
2015-01-01
This note gives a short overview from the ancient world down to the end of the eighteenth century (before engineering began as a science, that is) of the idea of "housing safety" and earthquakes. The idea varies, but persists throughout the cultural and economic contexts of history's changing societies, and in relation to class and lifestyle. Historical research into earthquakes in Italy from the ancient world to the twentieth century has shown how variable the idea actually is, as emerges from theoretical treatises, practical wisdom and projects drawn up in the wake of destructive events. In the seventeenth century the theoretical interpretation of earthquakes began to swing towards a mechanistic view of the Earth, affecting how the effects and propagation of earthquakes were observed. Strong earthquakes continued to occur and cause damage, and after yet another seismic disaster – Umbria 1751 – new building techniques were advocated. The attempt was to make house walls bind more solidly by special linking of the wooden structure of floors and roof beams. Following the massive seismic crisis of February-March 1783, which left central and southern Calabria in ruins, a new type of house was proposed, called 'baraccata': a wooden structure filled in with light materials. This was actually already to be found in the ancient Mediterranean basin (including Pompeii); but only at that time was it perfected, proposed by engineers and circulated as an important building innovation. At the end of the eighteenth century town planners came to the fore in the search for safe housing. They suggested new regular shapes, broad grid-plan streets with a specific view to achieving housing safety and ensuring an escape route in case of earthquake. Such rules and regulations were then abandoned or lost, proving that it is not enough to try out
Timing of transients: quantifying reaching times and transient behavior in complex systems
Kittel, Tim; Heitzig, Jobst; Webster, Kevin; Kurths, Jürgen
2017-08-01
In dynamical systems, one may ask how long it takes for a trajectory to reach the attractor, i.e. how long it spends in the transient phase. Although for a single trajectory the mathematically precise answer may be infinity, it still makes sense to compare different trajectories and quantify which of them approaches the attractor earlier. In this article, we categorize several problems of quantifying such transient times. To treat them, we propose two metrics, area under distance curve and regularized reaching time, that capture two complementary aspects of transient dynamics. The first, area under distance curve, is the distance of the trajectory to the attractor integrated over time. It measures which trajectories are ‘reluctant’, i.e. stay distant from the attractor for long, or ‘eager’ to approach it right away. Regularized reaching time, on the other hand, quantifies the additional time (positive or negative) that a trajectory starting at a chosen initial condition needs to approach the attractor as compared to some reference trajectory. A positive or negative value means that it approaches the attractor by this much ‘earlier’ or ‘later’ than the reference, respectively. We demonstrate their substantial potential for applications with multiple paradigmatic examples, uncovering new features.
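For a linear contraction dx/dt = -λx toward the attractor x* = 0, both metrics have closed forms, which makes a compact illustration (the rate λ and the reference trajectory below are assumed):

```python
import math

LAM = 0.5                                  # decay rate of dx/dt = -LAM * x, attractor x* = 0

def area_under_distance(x0):
    """Area under the distance curve: integral of |x(t)| dt = |x0| / LAM.
    Large values mark 'reluctant' trajectories, small values 'eager' ones."""
    return abs(x0) / LAM

def reaching_time(x0, eps):
    """First time |x(t)| <= eps for x(t) = x0 * exp(-LAM * t)."""
    return max(0.0, math.log(abs(x0) / eps) / LAM)

def regularized_reaching_time(x0, x_ref, eps=1e-9):
    """t_eps(x0) - t_eps(x_ref): how much later (>0) or earlier (<0) the
    trajectory approaches the attractor than the reference. The raw reaching
    times diverge as eps -> 0, but their difference converges, which is the
    point of the regularization."""
    return reaching_time(x0, eps) - reaching_time(x_ref, eps)
```

For this example regularized_reaching_time(2.0, 1.0) equals ln(2)/λ for any sufficiently small eps, showing how the eps-dependence cancels between trajectory and reference.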
Generation of earthquake signals
International Nuclear Information System (INIS)
Kjell, G.
1994-01-01
Seismic verification can be performed either as a full scale test on a shaker table or as numerical calculations. In both cases it is necessary to have an earthquake acceleration time history. This report describes generation of such time histories by filtering white noise. Analogue and digital filtering methods are compared. Different methods of predicting the response spectrum of a white noise signal filtered by a band-pass filter are discussed. Prediction of both the average response level and the statistical variation around this level are considered. Examples with both the IEEE 301 standard response spectrum and a ground spectrum suggested for Swedish nuclear power stations are included in the report
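A minimal sketch of the approach described: Gaussian white noise shaped by a crude digital band-pass (here the difference of two first-order low-pass stages) and a strong-motion envelope. The corner frequencies and envelope times below are assumed; a real application would further iterate the signal to match a target response spectrum such as the IEEE 301 or site-specific ground spectrum mentioned above.

```python
import math
import random

def bandpass_white_noise(n, dt, f_lo, f_hi, seed=0):
    """Gaussian white noise through a crude band-pass: the difference of two
    first-order low-pass stages (cutoffs f_hi and f_lo) keeps energy roughly
    between f_lo and f_hi."""
    rng = random.Random(seed)
    a_hi = math.exp(-2 * math.pi * f_hi * dt)
    a_lo = math.exp(-2 * math.pi * f_lo * dt)
    y_hi = y_lo = 0.0
    out = []
    for _ in range(n):
        w = rng.gauss(0.0, 1.0)
        y_hi = a_hi * y_hi + (1 - a_hi) * w   # low-pass at f_hi
        y_lo = a_lo * y_lo + (1 - a_lo) * w   # low-pass at f_lo
        out.append(y_hi - y_lo)               # band-pass residue
    return out

def envelope(t, t_rise=2.0, t_flat=8.0, t_decay=10.0):
    """Trapezoidal strong-motion envelope: linear rise, plateau, linear decay."""
    if t < t_rise:
        return t / t_rise
    if t < t_rise + t_flat:
        return 1.0
    return max(0.0, 1.0 - (t - t_rise - t_flat) / t_decay)

dt, dur = 0.01, 20.0
n = round(dur / dt)
acc = [envelope(i * dt) * x
       for i, x in enumerate(bandpass_white_noise(n, dt, f_lo=1.0, f_hi=10.0))]
# acc is a synthetic acceleration history; checking its response spectrum
# against the target, and the statistical spread over seeds, is the step the
# report analyzes.
```

Averaging the response spectra of many such realizations (different seeds) is what allows the report's comparison of the mean response level with its statistical variation.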
Dynamic triggering of low magnitude earthquakes in the Middle American Subduction Zone
Escudero, C. R.; Velasco, A. A.
2010-12-01
We analyze global and Middle American Subduction Zone (MASZ) seismicity from 1998 to 2008 to quantify transient stress effects at teleseismic distances. We use the Bulletin of the International Seismological Centre Catalog (ISCCD) published by the Incorporated Research Institutions for Seismology (IRIS). To identify MASZ seismicity changes due to distant, large (Mw >7) earthquakes, we first identify local earthquakes that occurred before and after the mainshocks. We then group the local earthquakes within a cluster radius of between 75 and 200 km. We obtain statistics based on characteristics of both the mainshocks and the local earthquake clusters, such as local cluster-mainshock azimuth, mainshock focal mechanism, and the position of local earthquake clusters within the MASZ. Due to lateral variations of the dip along the subducted oceanic plate, we divide the Mexican subduction zone into four segments. We then apply the Paired Samples Statistical Test (PSST) to the sorted data to identify an increase, a decrease, or neither in the local seismicity associated with distant large earthquakes. We identify dynamic triggering in all MASZ segments produced by large earthquakes arriving from specific azimuths, as well as a decrease in some cases. We find no dependence of seismicity changes on the mainshock focal mechanism.
Earthquakes Threaten Many American Schools
Bailey, Nancy E.
2010-01-01
Millions of U.S. children attend schools that are not safe from earthquakes, even though they are in earthquake-prone zones. Several cities and states have worked to identify and repair unsafe buildings, but many others have done little or nothing to fix the problem. The reasons for ignoring the problem include political and financial ones, but…
Make an Earthquake: Ground Shaking!
Savasci, Funda
2011-01-01
The main purposes of this activity are to help students explore possible factors affecting the extent of the damage of earthquakes and learn the ways to reduce earthquake damages. In these inquiry-based activities, students have opportunities to develop science process skills and to build an understanding of the relationship among science,…
Sun, Qilin
2017-04-01
High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Nowadays, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns to later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm which is able to denoise at the same time for measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high resolution image with only one single sensor, while for an array it suffices to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and our algorithm works well while the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.
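The thesis's Poisson-aware reconstruction algorithm is not reproduced here, but the underlying compressive-sensing step (random binary DMD-like patterns, sparse recovery) can be sketched with plain ISTA. Problem sizes, the sensing matrix and the sparsity level below are assumed for illustration only.

```python
import math
import random

def ista(A, y, lam=0.05, iters=400):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
    The step size 1/L uses the Frobenius bound L >= ||A^T A||, so every
    iteration decreases the objective (monotone proximal gradient)."""
    m, n = len(A), len(A[0])
    L = sum(a * a for row in A for a in row)
    t = 1.0 / L
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        z = [xj - t * gj for xj, gj in zip(x, g)]
        x = [math.copysign(max(abs(zj) - t * lam, 0.0), zj) for zj in z]
    return x

# Assumed toy sizes: a k-sparse scene sampled by random binary (DMD-like) patterns.
random.seed(3)
n, m, k = 40, 20, 3
x_true = [0.0] * n
for idx in random.sample(range(n), k):
    x_true[idx] = random.choice([-1.0, 1.0])
A = [[random.choice([0.0, 1.0]) / math.sqrt(m) for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]
x_hat = ista(A, y)
```

The thesis replaces the quadratic data term with one suited to Poisson statistics and works patch-wise over the SPAD array; the sketch above only shows the generic sparse-recovery core.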
Kázmér, Miklós; Major, Balázs; Hariyadi, Agus; Pramumijoyo, Subagyo; Ditto Haryana, Yohanes
2010-05-01
outermost layer was treated this way, the core of the shrines was made of simple rectangular blocks. The system resisted both in-plane and out-of-plane shaking quite well, as proven by survival of many shrines for more than a millennium, and by fracturing of blocks instead of displacement during the 2006 Yogyakarta earthquake. Systematic use or disuse of known earthquake-resistant techniques in any one society depends on the perception of earthquake risk and on available financial resources. Earthquake-resistant construction practice is significantly more expensive than regular construction. Perception is influenced mostly by short individual and longer social memory. If earthquake recurrence time is longer than the preservation of social memory, if damaging quakes fade into the past, societies commit the same construction mistakes again and again. Length of the memory is possibly about a generation's lifetime. Events occurring less frequently than 25-30 years can be readily forgotten, and the risk of recurrence considered as negligible, not worth the costs of safe construction practices. (Example of recurring flash floods in Hungary.) Frequent earthquakes maintain safe construction practices, like the Java masonry technique throughout at least two centuries, and like the Fachwerk tradition on Modern Aegean Samos throughout 500 years of political and technological development. (OTKA K67583)
Earthquake Catalogue of the Caucasus
Godoladze, T.; Gok, R.; Tvaradze, N.; Tumanova, N.; Gunia, I.; Onur, T.
2016-12-01
The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283 (Ms˜7.0, Io=9); the Lechkhumi-Svaneti earthquake of 1350 (Ms˜7.0, Io=9); and the Alaverdi earthquake of 1742 (Ms˜6.8, Io=9). Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088 (Ms˜6.5, Io=9) and the Akhalkalaki earthquake of 1899 (Ms˜6.3, Io=8-9). Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the Racha earthquake of 1991 (Ms=7.0), the largest event ever recorded in the region; the Barisakho earthquake of 1992 (M=6.5); and the Spitak earthquake of 1988 (Ms=6.9, 100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of the various national networks (Georgia (˜25 stations), Azerbaijan (˜35 stations), Armenia (˜14 stations)). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. In order to improve seismic data quality, a catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences/NSMC, Ilia State University) in the framework of the regional joint project (Armenia, Azerbaijan, Georgia, Turkey, USA) "Probabilistic Seismic Hazard Assessment (PSHA) in the Caucasus". The catalogue consists of more than 80,000 events. First arrivals of each earthquake of Mw>=4.0 have been carefully examined. To reduce calculation errors, we corrected arrivals from the seismic records. We improved locations of the events and recalculated moment magnitudes in order to obtain unified magnitude
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
From inactive to regular jogger
DEFF Research Database (Denmark)
Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup
Title From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers Authors Pernille Lund-Cramer & Vibeke Brinkmann Løite Purpose Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...... study was conducted using individual semi-structured interviews on how a successful long-term behavior change had been achieved. Ten informants were purposely selected from participants in the DANO-RUN research project (7 men, 3 women, average age 41.5). Interviews were performed on the basis of Theory...... of Planned Behavior (TPB) and The Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results TPB: During the behavior change process, the intention to jog shifted from a focus on weight loss and improved fitness to both physical health, psychological......
Tessellating the Sphere with Regular Polygons
Soto-Johnson, Hortensia; Bechthold, Dawn
2004-01-01
Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tessellate the sphere are spherical triangles, squares and pentagons.
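The classification can be checked mechanically: a candidate regular tessellation {p, q} (q regular p-gons meeting at each vertex) fits the sphere exactly when the flat vertex angles total less than a full turn, which reduces to (p - 2)(q - 2) < 4. A short enumeration over p, q >= 3 recovers the result quoted above:

```python
def tessellates_sphere(p, q):
    """True when q regular p-gons can meet at every vertex on the sphere:
    q * ((p - 2) / p) * 180 < 360 degrees, i.e. (p - 2) * (q - 2) < 4."""
    return (p - 2) * (q - 2) < 4

# Enumerate Schlaefli symbols {p, q} with p, q >= 3 (upper bound 7 suffices,
# since (p - 2)(q - 2) >= 4 for everything beyond it):
solutions = [(p, q) for p in range(3, 8) for q in range(3, 8)
             if tessellates_sphere(p, q)]
faces = sorted({p for p, _ in solutions})
# solutions are the five Platonic solids; faces == [3, 4, 5]: only triangles,
# squares and pentagons tile the sphere regularly.
```

Equality (p - 2)(q - 2) = 4 gives the three Euclidean tilings instead, and larger values give hyperbolic ones, which is the plane/sphere contrast the article reviews.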
On the equivalence of different regularization methods
International Nuclear Information System (INIS)
Brzezowski, S.
1985-01-01
The R̂-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)
The uniqueness of the regularization procedure
International Nuclear Information System (INIS)
Brzezowski, S.
1981-01-01
On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)
The CATDAT damaging earthquakes database
Directory of Open Access Journals (Sweden)
J. E. Daniell
2011-08-01
The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes.
Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon.
Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured).
Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto ($214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>$300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons.
This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.
The CATDAT damaging earthquakes database
Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.
2011-08-01
The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies, and expand greatly upon existing global databases; and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. Lack of consistency and errors in other earthquake loss databases frequently cited and used in analyses was a major shortcoming in the view of the authors which needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto (214 billion USD damage; 2011 HNDECI-adjusted dollars) compared to the 2011 Tohoku (>300 billion USD at time of writing), 2008 Sichuan and 1995 Kobe earthquakes show the increasing concern for economic loss in urban areas as the trend should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.
First measurements by the DEMETER satellite of ionospheric perturbations associated with earthquakes
International Nuclear Information System (INIS)
Blecki, J.; Slominski, J.; Wronowski, R.; Parrot, M.; Lagoutte, D.; Brochot, J.-Y.
2005-01-01
DEMETER is a French low-altitude microsatellite project. Its main scientific goals are to study the ionospheric perturbations related to seismic and volcanic activity and the Earth's electromagnetic environment. The payload of the DEMETER microsatellite allows measurement of waves and also of some important plasma parameters (ion composition, electron density and temperature, energetic particles). The satellite was launched by the Ukrainian rocket Dnepr from Baikonur on June 29, 2004. Regular measurements started in the middle of July. Since the beginning of data gathering, some earthquakes with magnitude M>6 have been registered. The data have been analyzed for selected passes of DEMETER over the epicenters. The results of the measurements for two earthquakes, one during a pass 5 days before the Japanese earthquake (23.10.2004) and the other just 3 minutes after the Mexico earthquake (9.09.04), are shown. (author)
Directory of Open Access Journals (Sweden)
Someya Toshiyuki
2006-09-01
Background An earthquake measuring 6.8 on the Richter scale struck the Niigata-Chuetsu region of Japan at 5:56 P.M. on the 23rd of October, 2004. The earthquake was followed by sustained occurrence of numerous aftershocks, which delayed reconstruction of community lifelines. Even one year after the earthquake, 9,160 people were living in temporary housing. Such a devastating earthquake and life after the earthquake in an unfamiliar environment would be expected to cause psychological distress, especially among the elderly. Methods Psychological distress was measured using the 12-item General Health Questionnaire (GHQ-12) in 2,083 subjects (69% response rate) who were living in transient housing five months after the earthquake. The GHQ-12 was scored using the original method, Likert scoring and the corrected method. The subjects were asked to assess their psychological status before the earthquake, at the most stressful time after the earthquake and at five months after the earthquake. Exploratory and confirmatory factor analysis was used to reveal the factor structure of the GHQ-12. Multiple regression analysis was performed to analyze the relationship between various background factors and the GHQ-12 score and its subscales. Results GHQ-12 scores were significantly elevated at the most stressful time and remained significantly high even at five months after the earthquake. Factor analysis revealed that a model consisting of two factors (social dysfunction and dysphoria) using corrected GHQ scoring showed a high level of goodness-of-fit. Multiple regression analysis revealed that the age of subjects affected GHQ-12 scores: the GHQ-12 score as well as its 'social dysfunction' factor scale increased with increasing age of subjects at five months after the earthquake. Conclusion Impaired psychological recovery was observed even at five months after the Niigata-Chuetsu Earthquake in the elderly. The elderly were more
Diurnal changes of earthquake activity and geomagnetic Sq-variations
Directory of Open Access Journals (Sweden)
G. Duma
2003-01-01
Statistical analyses demonstrate that the probability of earthquake occurrence in many earthquake regions strongly depends on the time of day, that is on Local Time (e.g. Conrad, 1909, 1932; Shimshoni, 1971; Duma, 1997; Duma and Vilardo, 1998). This also applies to strong earthquake activity. Moreover, recent observations reveal an involvement of the regular diurnal variations of the Earth's magnetic field, commonly known as Sq-variations, in this geodynamic process of changing earthquake activity with the time of day (Duma, 1996, 1999). This article attempts to quantify the forces which result from the interaction between the induced Sq-variation currents in the Earth's lithosphere and the regional Earth's magnetic field, in order to assess the influence on the tectonic stress field and on seismic activity. A reliable model is obtained, which indicates that a large amount of energy is involved in this process. The effect of Sq-induction is compared with the results of the large scale electromagnetic experiment "Khibiny" (Velikhov, 1989), where a giant artificial current loop was activated in the Barents Sea.
Application of Turchin's method of statistical regularization
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
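A minimal sketch of the deconvolution step: for a fixed regularization parameter, Turchin's Bayesian estimate with a Gaussian smoothness prior reduces to a Tikhonov-type linear solve. The apparatus kernel, prior strength and toy signal below are assumed for illustration, not taken from the article.

```python
import math

def solve(M, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def regularized_unfold(K, f, alpha):
    """MAP/Tikhonov estimate (K^T K + alpha * S^T S) phi = K^T f, with S the
    second-difference matrix encoding the smoothness prior."""
    m, n = len(K), len(K[0])
    S = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        S[i][i], S[i][i + 1], S[i][i + 2] = 1.0, -2.0, 1.0
    def gram(A):                          # A^T A for a list-of-rows matrix
        cols = len(A[0])
        return [[sum(row[i] * row[j] for row in A) for j in range(cols)]
                for i in range(cols)]
    M = gram(K)
    StS = gram(S)
    for i in range(n):
        for j in range(n):
            M[i][j] += alpha * StS[i][j]
    rhs = [sum(K[r][i] * f[r] for r in range(m)) for i in range(n)]
    return solve(M, rhs)

# Assumed toy problem: a Gaussian apparatus function smearing a two-peak signal.
n = 16
K = [[math.exp(-0.5 * (i - j) ** 2) for j in range(n)] for i in range(n)]
phi_true = [math.exp(-0.5 * (i - 5) ** 2) + 0.7 * math.exp(-0.5 * (i - 11) ** 2)
            for i in range(n)]
f = [sum(K[i][j] * phi_true[j] for j in range(n)) for i in range(n)]
phi_hat = regularized_unfold(K, f, alpha=1e-6)
```

Turchin's method goes further than this fixed-alpha solve: it treats alpha as a hyperparameter with its own posterior, which is the statistical part of the strategy the article implements.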
Regular extensions of some classes of grammars
Nijholt, Antinus
Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammars, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular
Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault
Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.
2010-01-01
It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
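One standard summary behind statistical tests like those described above is the coefficient of variation (CoV) of inter-event intervals: near 1 for a Poisson (random) process, well below 1 for quasi-periodic recurrence, above 1 for clustering. The event dates below are hypothetical illustrations, not the Wrightwood record.

```python
import math

def interval_cov(event_times):
    """Coefficient of variation of inter-event intervals: ~1 for a Poisson
    process, < 1 for quasi-periodic recurrence, > 1 for clustered recurrence."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / (len(gaps) - 1)
    return math.sqrt(var) / mean

# Hypothetical event dates (yr) with a ~100 yr mean interval, for illustration:
quasi_periodic = [0, 90, 210, 300, 410, 500]
clustered = [0, 10, 20, 300, 310, 600]
cov_qp = interval_cov(quasi_periodic)   # well below 1: quasi-periodic
cov_cl = interval_cov(clustered)        # above 1: clustered
```

A short record makes the CoV estimate noisy, which is why the study backs such summaries with formal tests and a sensitivity analysis over alternative readings of the geologic record.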
Influence of the Wenchuan earthquake on self-reported irregular menstrual cycles in surviving women.
Li, Xiao-Hong; Qin, Lang; Hu, Han; Luo, Shan; Li, Lei; Fan, Wei; Xiao, Zhun; Li, Ying-Xing; Li, Shang-Wei
2011-09-01
To explore the influence of stress induced by the Wenchuan earthquake on the menstrual cycles of surviving women, self-reports of the menstrual cycles of 473 women who survived the Wenchuan earthquake were analyzed. Menstrual regularity was defined as menses between 21 and 35 days long. The death of a child or the loss of property and social resources was verified for all surviving women. The severity of these losses was assessed and graded as high, little, and none. About 21% of the study participants reported that their menstrual cycles became irregular after the Wenchuan earthquake, a percentage significantly higher than before the earthquake (6%), indicating increased menstrual irregularity after the earthquake. Association analyses showed that some stressors of the Wenchuan earthquake were strongly associated with self-reports of menstrual irregularity, including the loss of children (RR: 1.58; 95% CI: 1.09, 2.28), of large amounts of property (RR: 1.49; 95% CI: 1.03, 2.15), and of social resources (RR: 1.34; 95% CI: 1.00, 1.80), as well as hormonal contraception use (RR: 1.62; 95% CI: 1.21, 1.83). Self-reported menstrual irregularity is common in women who survived the Wenchuan earthquake, especially in those who lost children, large amounts of property and social resources.
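The relative risks quoted above come from 2x2 association tables; the usual computation (log-RR normal approximation for the confidence interval) is easy to reproduce. The counts below are hypothetical, not the study's data.

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk and 95% CI for a 2x2 table:
    exposed group: a with outcome, b without; unexposed: c with, d without.
    Interval uses the standard normal approximation on the log-RR scale."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts for illustration: 30 of 100 exposed women report
# irregularity versus 20 of 100 unexposed.
rr, lo, hi = relative_risk(30, 70, 20, 80)
```

A confidence interval whose lower bound stays above 1, as for the loss-of-children stressor above, is what marks an association as statistically significant at the 5% level.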
Directory of Open Access Journals (Sweden)
Mahesh M Choudhary
2015-01-01
We report a case of transient osteoporosis of the hip (TOH) in a 50-year-old man including the clinical presentation, diagnostic studies, management, and clinical progress. TOH is a rare self-limiting condition that typically affects middle-aged men or, less frequently, women in the third trimester of pregnancy. Affected individuals present clinically with acute hip pain, limping gait, and limited ranges of hip motion. TOH may begin spontaneously or after a minor trauma. Radiographs are typically unremarkable but magnetic resonance (MR) imaging studies yield findings consistent with bone marrow edema. TOH is referred to as regional migratory osteoporosis (RMO) if it travels to other joints or the contralateral hip. TOH often resembles osteonecrosis but the two conditions must be differentiated due to different prognoses and management approaches. The term TOH is often used interchangeably and synonymously with transient bone marrow edema (TBME).
Earthquake Emergency Education in Dushanbe, Tajikistan
Mohadjer, Solmaz; Bendick, Rebecca; Halvorson, Sarah J.; Saydullaev, Umed; Hojiboev, Orifjon; Stickler, Christine; Adam, Zachary R.
2010-01-01
We developed a middle school earthquake science and hazards curriculum to promote earthquake awareness to students in the Central Asian country of Tajikistan. These materials include pre- and post-assessment activities, six science activities describing physical processes related to earthquakes, five activities on earthquake hazards and mitigation…
Determination of Design Basis Earthquake ground motion
International Nuclear Information System (INIS)
Kato, Muneaki
1997-01-01
This paper describes the principle of determining the Design Basis Earthquake following the Examination Guide, with some examples for actual sites, including earthquake sources to be considered, earthquake response spectra and simulated seismic waves. In the appendix of this paper, furthermore, the seismic safety review for NPPs designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)
Determination of Design Basis Earthquake ground motion
Energy Technology Data Exchange (ETDEWEB)
Kato, Muneaki [Japan Atomic Power Co., Tokyo (Japan)
1997-03-01
This paper describes the principle of determining the Design Basis Earthquake following the Examination Guide, with some examples from actual sites, including the earthquake sources to be considered, the earthquake response spectrum, and simulated seismic waves. In the appendix of this paper, furthermore, the seismic safety review for N.P.P.s designed before publication of the Examination Guide is summarized with the Check Basis Earthquake. (J.P.N.)
Stability of Ignition Transients
V.E. Zarko
1991-01-01
The problem of ignition stability arises under the action of intense external heat stimuli when, owing to the cut-off of heating of the solid substance, momentary ignition is followed by extinction. The physical pattern of solid propellant ignition is considered and the ignition criteria available in the literature are discussed. It is shown that the above-mentioned problem amounts to transient burning at a given arbitrary temperature distribution in the condensed phase. A brief survey...
Transient FDTD simulation validation
Jauregui Tellería, Ricardo; Riu Costa, Pere Joan; Silva Martínez, Fernando
2010-01-01
In computational electromagnetic simulations, most validation methods developed to date operate in the frequency domain. However, frequency-domain EMC analysis is often not enough to evaluate the immunity of current communication devices. Based on several studies, in this paper we propose an alternative method for validating transients in the time domain, allowing rapid and objective quantification of the simulation results.
MHD aspects of coronal transients
International Nuclear Information System (INIS)
Anzer, U.
1979-10-01
If one defines coronal transients as events which occur in the solar corona on rapid time scales (< approx. several hours), then one would have to include a large variety of solar phenomena: flares, sprays, erupting prominences, X-ray transients, white light transients, etc. Here we shall focus our attention on the latter two phenomena. (orig.)
Physics of Earthquake Rupture Propagation
Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh
2018-05-01
A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.
Radon observation for earthquake prediction
Energy Technology Data Exchange (ETDEWEB)
Wakita, Hiroshi [Tokyo Univ. (Japan)
1998-12-31
Systematic observation of groundwater radon for the purpose of earthquake prediction began in Japan in late 1973. Continuous observations are conducted at fixed stations using deep wells and springs. During the observation period, significant precursory changes, including those before the 1978 Izu-Oshima-kinkai (M7.0) earthquake, as well as numerous coseismic changes were observed. At the time of the 1995 Kobe (M7.2) earthquake, significant changes in chemical components, including radon dissolved in groundwater, were observed near the epicentral region. Precursory changes are presumably caused by permeability changes due to micro-fracturing in basement rock or migration of water from different sources during the preparation stage of earthquakes. Coseismic changes may be caused by seismic shaking and by changes in regional stress. Significant drops of radon concentration in groundwater have been observed after earthquakes at the KSM site. The occurrence of such drops appears to be time-dependent, and possibly reflects changes in the regional stress state of the observation area. The absence of radon drops seems to be correlated with periods of reduced regional seismic activity. Experience accumulated over the past two decades allows us to reach some conclusions: 1) changes in groundwater radon do occur prior to large earthquakes; 2) some sites are particularly sensitive to earthquake occurrence; and 3) the sensitivity changes over time. (author)
Earthquake prediction by Kiana Method
International Nuclear Information System (INIS)
Kianoosh, H.; Keypour, H.; Naderzadeh, A.; Motlagh, H.F.
2005-01-01
Earthquake prediction has been one of the earliest desires of mankind, and scientists have long worked hard to predict earthquakes. The results of these efforts can generally be divided into two methods of prediction: 1) the statistical method, and 2) the empirical method. In the first method, earthquakes are predicted using statistics and probabilities, while the second method utilizes a variety of precursors for earthquake prediction. The latter method is time-consuming and more costly. However, neither method has yet produced fully satisfactory results. In this paper a new method, entitled the 'Kiana Method', is introduced for earthquake prediction. This method offers more accurate results at lower cost compared to other conventional methods. In the Kiana method the electrical and magnetic precursors are measured in an area. Then the time and magnitude of a future earthquake are calculated using electrical formulas, in particular those for electrical capacitors. In this method, daily measurement of electrical resistance in an area establishes whether or not the area is capable of earthquake occurrence in the future. If the result is positive, the occurrence time and magnitude can then be estimated from the measured quantities. This paper explains the procedure and details of this prediction method. (authors)
Overturning behaviour of nuclear power plant structures during earthquakes
International Nuclear Information System (INIS)
Dalal, J.S.; Perumalswami, P.R.
1977-01-01
Nuclear power plant structures are designed to withstand severe postulated seismic forces. Structures subjected to such forces may be found to ''overturn'', if the factor of safety is computed in the traditional way, treating these forces as static. This study considers the transient nature of the problem and draws distinction between rocking, tipping and overturning. Responses of typical nuclear power plant structures to earthquake motions are used to assess their overturning potential more realistically. Structures founded on both rock and soil are considered. It is demonstrated that the traditional factor of safety, when smaller than unity, indicates only minimal base rotations and not necessarily overturning. (auth.)
Precisely locating the Klamath Falls, Oregon, earthquakes
Qamar, A.; Meagher, K.L.
1993-01-01
The Klamath Falls earthquakes on September 20, 1993, were the largest earthquakes centered in Oregon in more than 50 yrs. Only the magnitude 5.75 Milton-Freewater earthquake in 1936, which was centered near the Oregon-Washington border and felt in an area of about 190,000 sq km, compares in size with the recent Klamath Falls earthquakes. Although the 1993 earthquakes surprised many local residents, geologists have long recognized that strong earthquakes may occur along potentially active faults that pass through the Klamath Falls area. These faults are geologically related to similar faults in Oregon, Idaho, and Nevada that occasionally spawn strong earthquakes.
Ionospheric phenomena before strong earthquakes
Directory of Open Access Journals (Sweden)
A. S. Silina
2001-01-01
A statistical analysis of several ionospheric parameters before earthquakes with magnitude M > 5.5 located less than 500 km from an ionospheric vertical sounding station is performed. Ionospheric effects preceding "deep" (depth h > 33 km) and "crust" (h ≤ 33 km) earthquakes were analysed separately. Data from nighttime measurements of the critical frequencies foF2 and foEs, the frequency fbEs, and Es-spread at the middle-latitude station Dushanbe were used. The frequencies foF2 and fbEs are proportional to the square root of the ionization density at heights of 300 km and 100 km, respectively. It is shown that two days before the earthquakes the values of foF2 averaged over the morning hours (00:00 LT–06:00 LT) and of fbEs averaged over the nighttime hours (18:00 LT–06:00 LT) decrease; the effect is stronger for the "deep" earthquakes. Analysing the coefficient of semitransparency, which characterizes the degree of small-scale turbulence, it was shown that this value increases 1–4 days before "crust" earthquakes and does not change before "deep" earthquakes. Studying Es-spread, which manifests itself as a diffuse Es track on ionograms and characterizes the degree of large-scale turbulence, it was found that the number of Es-spread observations increases 1–3 days before the earthquakes; for "deep" earthquakes the effect is more intensive. Thus it may be concluded that different mechanisms of energy transfer from the region of earthquake preparation to the ionosphere operate for "deep" and "crust" events.
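The stated square-root proportionality between critical frequency and ionization density is the standard plasma-frequency relation, f_p [Hz] ≈ 8.98 √N_e [m⁻³]. As an illustrative aside (not code from the study), the sketch below converts foF2 to peak electron density and shows that a 10% pre-earthquake drop in foF2 corresponds to a ~19% drop in density:

```python
def electron_density_from_fof2(fof2_hz: float) -> float:
    """Peak electron density (m^-3) from the F2-layer critical frequency.

    Uses the plasma-frequency relation f_p [Hz] ~= 8.98 * sqrt(N_e [m^-3]),
    inverted as N_e = (f_p / 8.98)**2.
    """
    return (fof2_hz / 8.98) ** 2

# A 9 MHz critical frequency corresponds to ~1e12 electrons per m^3.
ne = electron_density_from_fof2(9.0e6)

# Density scales with the square of foF2, so a 10% frequency decrease
# implies a (1 - 0.9**2) = 19% density decrease.
ratio = electron_density_from_fof2(0.9 * 9.0e6) / ne
```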
Class of regular bouncing cosmologies
Vasilić, Milovan
2017-06-01
In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.
The Pocatello Valley, Idaho, earthquake
Rogers, A. M.; Langer, C.J.; Bucknam, R.C.
1975-01-01
A Richter magnitude 6.3 earthquake occurred at 8:31 p.m. Mountain Daylight Time on March 27, 1975, near the Utah-Idaho border in Pocatello Valley. The epicenter of the main shock was located at 42.094° N, 112.478° W, at a focal depth of 5.5 km. This earthquake was the largest in the continental United States since the destructive San Fernando earthquake of February 1971. The main shock was preceded by a magnitude 4.5 foreshock on March 26.
The threat of silent earthquakes
Cervelli, Peter
2004-01-01
Not all earthquakes shake the ground. The so-called silent types are forcing scientists to rethink their understanding of the way quake-prone faults behave. In rare instances, silent earthquakes that occur along the flanks of seaside volcanoes may cascade into monstrous landslides that crash into the sea and trigger towering tsunamis. Silent earthquakes that take place within fault zones created by one tectonic plate diving under another may increase the chance of ground-shaking shocks. In other locations, however, silent slip may decrease the likelihood of destructive quakes, because it releases stress along faults that might otherwise seem ready to snap.
Gradual unlocking of plate boundary controlled initiation of the 2014 Iquique earthquake.
Schurr, Bernd; Asch, Günter; Hainzl, Sebastian; Bedford, Jonathan; Hoechner, Andreas; Palo, Mauro; Wang, Rongjiang; Moreno, Marcos; Bartsch, Mitja; Zhang, Yong; Oncken, Onno; Tilmann, Frederik; Dahm, Torsten; Victor, Pia; Barrientos, Sergio; Vilotte, Jean-Pierre
2014-08-21
On 1 April 2014, Northern Chile was struck by a magnitude 8.1 earthquake following a protracted series of foreshocks. The Integrated Plate Boundary Observatory Chile monitored the entire sequence of events, providing unprecedented resolution of the build-up to the main event and its rupture evolution. Here we show that the Iquique earthquake broke a central fraction of the so-called northern Chile seismic gap, the last major segment of the South American plate boundary that had not ruptured in the past century. Since July 2013 three seismic clusters, each lasting a few weeks, hit this part of the plate boundary with earthquakes of increasing peak magnitudes. Starting with the second cluster, geodetic observations show surface displacements that can be associated with slip on the plate interface. These seismic clusters and their slip transients occupied a part of the plate interface that was transitional between a fully locked and a creeping portion. The b value of the foreshocks gradually decreased during the years before the earthquake, reversing its trend a few days before the Iquique mainshock. The mainshock finally nucleated at the northern end of the foreshock area, which skirted a locked patch, and ruptured mainly downdip towards higher locking. Peak slip was attained immediately downdip of the foreshock region and at the margin of the locked patch. We conclude that gradual weakening of the central part of the seismic gap, accentuated by the foreshock activity in a zone of intermediate seismic coupling, was instrumental in causing final failure, distinguishing the Iquique earthquake from most great earthquakes. Finally, only one-third of the gap was broken and the remaining locked segments now pose a significant, increased seismic hazard with the potential to host an earthquake with a magnitude of >8.5.
USGS Earthquake Program GPS Use Case : Earthquake Early Warning
2015-03-12
USGS GPS receiver use case. Item 1 - High Precision User (federal agency with Stafford Act hazard alert responsibilities for earthquakes, volcanoes and landslides nationwide). Item 2 - Description of Associated GPS Application(s): The USGS Eart...
EARTHQUAKE-INDUCED DEFORMATION STRUCTURES AND RELATED TO EARTHQUAKE MAGNITUDES
Directory of Open Access Journals (Sweden)
Savaş TOPAL
2003-02-01
Earthquake-induced deformation structures, which are called seismites, may be helpful in classifying the paleoseismic history of a location and in estimating the magnitudes of potential future earthquakes. In this paper, seismites were investigated according to the types formed in deep and shallow lake sediments. Seismites are observed in the form of sand dikes, intruded and fractured gravels, and pillow structures in shallow lakes, and of pseudonodules, mushroom-like silts protruding into laminites, mixed layers, disturbed varved lamination, and loop bedding in deep lake sediments. Drawing on previous studies, the earthquake-induced deformation structures were ordered according to their formation and the associated earthquake magnitudes. In this ordering, the lowest earthquake magnitude is recorded by loop bedding and the highest by intruded and fractured gravels in lacustrine deposits.
Twitter earthquake detection: Earthquake monitoring in a social world
Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.
2011-01-01
The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
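The short-term-average/long-term-average (STA/LTA) detector described in the abstract can be sketched as a ratio of two running means over a per-minute tweet-count series. This is a minimal illustration; the window lengths and threshold below are arbitrary assumptions, not the USGS tuning:

```python
import numpy as np

def sta_lta_detections(counts, sta_len=2, lta_len=20, threshold=5.0):
    """Return indices where the short-term average of the tweet-count
    series exceeds `threshold` times the long-term average, the basic
    STA/LTA trigger logic described above."""
    counts = np.asarray(counts, dtype=float)
    hits = []
    for i in range(lta_len, len(counts)):
        sta = counts[i - sta_len:i].mean()   # recent activity
        lta = counts[i - lta_len:i].mean()   # background level
        if lta > 0 and sta / lta >= threshold:
            hits.append(i)
    return hits

# Quiet background of ~2 tweets/min, then a burst after a felt event.
series = [2, 3, 1, 2, 2, 3, 2, 1, 2, 2,
          3, 2, 2, 1, 2, 3, 2, 2, 1, 2,
          40, 80, 60]
hits = sta_lta_detections(series)
```

The detector fires once the short window is dominated by the burst while the long window still reflects the quiet background, which is why it reacts within a couple of samples of the onset.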
Extreme value statistics and thermodynamics of earthquakes. Large earthquakes
Energy Technology Data Exchange (ETDEWEB)
Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics
2000-06-01
A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into non-scaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
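As a generic illustration of the Pareto-tail fitting the abstract describes (not the authors' procedure), the Hill estimator recovers a heavy-tail index from the k largest order statistics of a sample:

```python
import numpy as np

def hill_tail_index(sample, k):
    """Hill estimator of the Pareto tail index alpha from the k largest
    order statistics.  A minimal sketch: real analyses also check the
    stability of the estimate as k varies."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]  # descending order
    logs = np.log(x[:k]) - np.log(x[k])
    return 1.0 / logs.mean()

rng = np.random.default_rng(0)
alpha_true = 1.5
# Pareto(alpha) sample via the inverse CDF: x = u**(-1/alpha), u ~ U(0,1)
draws = rng.uniform(size=20_000) ** (-1.0 / alpha_true)
alpha_hat = hill_tail_index(draws, k=1000)
```

On an exact Pareto sample the estimate clusters around the true index; for earthquake data the same fit applied to exceedances probes how far the self-similar power-law regime extends.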
To what extent stresses resulting from the earth's surface trigger earthquakes
Klose, C. D.
2009-12-01
The debate on static versus dynamic earthquake triggering mainly concentrates on endogenous crustal forces, including fault-fault interactions or seismic wave transients of remote earthquakes. Incomprehensibly, earthquake triggering due to surface processes still receives little scientific attention. This presentation continues a discussion of the hypothesis of how "tiny" stresses stemming from the earth's surface can trigger major earthquakes, such as, for example, China's M7.9 Wenchuan earthquake of May 2008. This seismic event is thought to be triggered by up to 1.1 billion metric tons of water (~130m) that accumulated in the Minjiang River Valley at the eastern margin of the Longmen Shan. Specifically, the water level rose by ~80m (static), with additional seasonal water level changes of ~50m (dynamic). Two and a half years prior to the mainshock, static and dynamic Coulomb failure stresses were induced on the nearby Beichuan thrust fault system at <17km depth. Triggering stresses were equivalent to levels of daily tides and perturbed a fault area measuring 416+/-96km^2. The mainshock ruptured after 2.5 years, when only the static stressing regime was predominant and the transient stressing (seasonal water level) was infinitesimally small. The short triggering delay of about 2 years suggests that the Beichuan fault might have been near the end of its seismic cycle, which may also confirm what previous geological findings have indicated. This presentation shows to what extent the static and 1-year periodic triggering stress perturbations a) accounted for equivalent tectonic loading, given a 4-10kyr earthquake cycle, and b) altered the background seismicity beneath the valley, i.e., the daily event rate and earthquake size distribution.
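The static triggering argument rests on the Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn. The snippet below is a minimal sketch with assumed inputs: the 0.4 effective friction and the kPa-level stress values are hypothetical, chosen only to match the "daily-tide level" scale quoted above, not values from the study:

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n  (unclamping counted positive).
    mu_eff is an assumed effective friction coefficient."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# A reservoir-scale perturbation of a few kPa (1 kPa = 1e-3 MPa),
# comparable in size to daily tidal stresses.
dcfs = coulomb_stress_change(d_shear_mpa=0.002, d_normal_mpa=0.003)
```

Positive ΔCFS moves the receiver fault toward failure, which is why even kPa-level perturbations matter for a fault late in its seismic cycle.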
Directory of Open Access Journals (Sweden)
Ching-Chou Fu
2017-01-01
Taiwan is tectonically situated in a terrain resulting from the oblique collision between the Philippine Sea plate and the continental margin of the Asiatic plate, where continuous stress accounts for the density of strong-to-moderate earthquakes and regional active faults. Continuous time series of soil radon have been recorded for earthquake studies, and some significant variations associated with strong earthquakes have been observed. Earthquake prediction is not yet operational, but these correlations should be added to the literature on seismo-geochemical transients associated with strong earthquakes. With rain- and pore-pressure-related variations ruled out, crustal weakness along the studied fault systems is consistent with the simultaneous radon anomalies observed. During the observations, a significant increase in soil radon concentrations was observed at the Chunglun-T1 (CL-T1), Hsinhua (HH), Pingtung (PT), and Chihshan (CS) stations approximately two weeks before the Meinong earthquake (ML = 6.6, 6 February 2016) in Southern Taiwan. The precursory changes in a multi-station array may reflect the preparation stage of a large earthquake. When precursory signals are observed simultaneously, certain algorithms can be applied to estimate the approximate location and magnitude of the impending earthquake.
Centrality in earthquake multiplex networks
Lotfi, Nastaran; Darooneh, Amir Hossein; Rodrigues, Francisco A.
2018-06-01
Seismic time series have been mapped as complex networks, where a geographical region is divided into square cells that represent the nodes and connections are defined according to the sequence of earthquakes. In this paper, we map a seismic time series to a temporal network, described by a multiplex network, and characterize the evolution of the network structure in terms of the eigenvector centrality measure. We generalize previous works that considered the single-layer representation of earthquake networks. Our results suggest that the multiplex representation captures earthquake activity better than methods based on single-layer networks. We also verify that the regions with the highest seismological activity in Iran and California can be identified from the network centrality analysis. The temporal modeling of seismic data provided here may open new possibilities for a better comprehension of the physics of earthquakes.
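On a single layer, eigenvector centrality can be computed by power iteration on the cell-adjacency matrix. The toy network below is a minimal sketch of that building block, not the paper's coupled multiplex formulation:

```python
import numpy as np

def eigenvector_centrality(adj, iters=200):
    """Eigenvector centrality by power iteration: repeatedly apply the
    adjacency matrix and renormalize, converging to the dominant
    (Perron) eigenvector for a connected non-negative matrix."""
    v = np.ones(adj.shape[0])
    for _ in range(iters):
        v = adj @ v
        v = v / np.linalg.norm(v)
    return v

# Toy network of 4 cells: node 0 is linked to every other node (a "hub").
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
c = eigenvector_centrality(A)
```

The hub cell receives the largest centrality score, which is the mechanism by which seismically active regions stand out in the network analysis.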
International Nuclear Information System (INIS)
Zirin, H.; Tanaka, K.
1981-01-01
We present data on magnetic transients (mgtr's) observed in flares on 1980 July 1 and 5 with the Big Bear videomagnetograph (VMG). The 1980 July 1 event was a white light flare in which a strong bipolar mgtr was observed, and a definite change in the sunspots occurred at the time of the flare. In the 1980 July 5 flare, a mgtr was observed in only one polarity, and, although no sunspot changes occurred simultaneously with the flare, major spot changes occurred over a period of hours.
Familial Transient Global Amnesia
Directory of Open Access Journals (Sweden)
R.Rhys Davies
2012-12-01
Following an episode of typical transient global amnesia (TGA), a female patient reported similar clinical attacks in two maternal aunts. Prior reports of familial TGA are few, and no previous account of affected relatives more distant than siblings or parents was discovered in a literature survey. The aetiology of familial TGA is unknown. A pathophysiological mechanism akin to that of migraine attacks, a comorbidity reported in a number of the examples of familial TGA, is one possibility. The study of familial TGA cases might facilitate the understanding of TGA aetiology.
Earthquake focal mechanism forecasting in Italy for PSHA purposes
Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola
2018-01-01
In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future earthquakes. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming to reduce the epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity rates, the forecast of the focal mechanism of the next large earthquakes as input data. The data set used for this purpose comprises focal mechanisms taken from the latest stress map release for Italy, containing 392 well-constrained solutions of events from 1908 to 2015 with Mw ≥ 4 and depths from 0 down to 40 km. The data set considers polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015 it takes into account only the Centroid Moment Tensor (CMT)-like earthquake focal solutions, for data homogeneity. The forecasting model is rooted in the Total Weighted Moment Tensor concept, which weighs information from past focal mechanisms evenly distributed in space according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the model estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large earthquakes, the expected moment tensor, and the related maximum horizontal stress orientation. These results will be available for the new PSHA model for Italy under development. Finally, to evaluate the reliability of the forecasts, we test them with an independent data set that consists of some of the strongest earthquakes with Mw ≥ 3.9 that occurred during 2016 in different Italian tectonic provinces.
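The distance-weighting idea behind the Total Weighted Moment Tensor can be sketched as a kernel-weighted average of past moment tensors per grid cell. This is a hypothetical illustration of the concept only: the exponential kernel and the decay length are assumptions, not the weighting actually used by the authors:

```python
import numpy as np

def weighted_mean_tensor(tensors, distances_km, decay_km=50.0):
    """Distance-weighted average of past 3x3 moment tensors for one grid
    cell, with an assumed exponential distance kernel (decay_km is an
    illustrative parameter, not from the study)."""
    w = np.exp(-np.asarray(distances_km) / decay_km)
    # Contract the weight vector against the stack of tensors (n,3,3).
    return np.tensordot(w, np.asarray(tensors), axes=1) / w.sum()

# Two toy deviatoric tensors with opposite sense of faulting: one from a
# nearby event (dominant weight) and one from a distant event.
t_near = np.diag([1.0, 0.0, -1.0])
t_far = np.diag([-1.0, 0.0, 1.0])
avg = weighted_mean_tensor([t_near, t_far], distances_km=[10.0, 200.0])
```

The averaged tensor is dominated by the nearby mechanism, which is the behavior needed for a cell-by-cell forecast: local faulting style outweighs distant, dissimilar events.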
Earthquake Triggering in the September 2017 Mexican Earthquake Sequence
Fielding, E. J.; Gombert, B.; Duputel, Z.; Huang, M. H.; Liang, C.; Bekaert, D. P.; Moore, A. W.; Liu, Z.; Ampuero, J. P.
2017-12-01
Southern Mexico was struck by four earthquakes with Mw > 6 and numerous smaller earthquakes in September 2017, starting with the 8 September Mw 8.2 Tehuantepec earthquake beneath the Gulf of Tehuantepec offshore Chiapas and Oaxaca. We study whether this M8.2 earthquake triggered the three subsequent large M>6 quakes in southern Mexico to improve understanding of earthquake interactions and time-dependent risk. All four large earthquakes were extensional despite the subduction of the Cocos plate. By the traditional definition of aftershocks, an event is likely an aftershock if it occurs within two rupture lengths of the main shock soon afterwards. Two Mw 6.1 earthquakes, one half an hour after the M8.2 beneath the Tehuantepec gulf and one on 23 September near Ixtepec in Oaxaca, both fit as traditional aftershocks, within 200 km of the main rupture. The 19 September Mw 7.1 Puebla earthquake was 600 km away from the M8.2 shock, outside the standard aftershock zone. Geodetic measurements from interferometric analysis of synthetic aperture radar (InSAR) and time-series analysis of GPS station data constrain finite fault total slip models for the M8.2, M7.1, and M6.1 Ixtepec earthquakes. The early M6.1 aftershock was too close in time and space to the M8.2 to measure with InSAR or GPS. We analyzed InSAR data from Copernicus Sentinel-1A and -1B satellites and JAXA ALOS-2 satellite. Our preliminary geodetic slip model for the M8.2 quake shows significant slip extending > 150 km NW from the hypocenter, longer than slip in the v1 finite-fault model (FFM) from teleseismic waveforms posted by G. Hayes at USGS NEIC. Our slip model for the M7.1 earthquake is similar to the v2 NEIC FFM. Interferograms for the M6.1 Ixtepec quake confirm the shallow depth in the upper-plate crust and show the centroid is about 30 km SW of the NEIC epicenter, a significant NEIC location bias, but consistent with cluster relocations (E. Bergman, pers. comm.) and with the Mexican SSN location. Coulomb static stress
Protracted fluvial recovery from medieval earthquakes, Pokhara, Nepal
Stolle, Amelie; Bernhardt, Anne; Schwanghart, Wolfgang; Andermann, Christoff; Schönfeldt, Elisabeth; Seidemann, Jan; Adhikari, Basanta R.; Merchel, Silke; Rugel, Georg; Fort, Monique; Korup, Oliver
2016-04-01
River response to strong earthquake shaking in mountainous terrain often entails the flushing of sediments delivered by widespread co-seismic landsliding. Detailed mass-balance studies following major earthquakes in China, Taiwan, and New Zealand suggest fluvial recovery times ranging from several years to decades. We report a detailed chronology of earthquake-induced valley fills in the Pokhara region of western-central Nepal, and demonstrate that rivers continue to adjust to several large medieval earthquakes to the present day, thus challenging the notion of transient fluvial response to seismic disturbance. The Pokhara valley features one of the largest and most extensively dated sedimentary records of earthquake-triggered sedimentation in the Himalayas, and independently augments paleo-seismological archives obtained mainly from fault trenches and historic documents. New radiocarbon dates from the catastrophically deposited Pokhara Formation document multiple phases of extremely high geomorphic activity between ˜700 and ˜1700 AD, preserved in thick sequences of alternating fluvial conglomerates, massive mud and silt beds, and cohesive debris-flow deposits. These dated fan-marginal slackwater sediments indicate pronounced sediment pulses in the wake of at least three large medieval earthquakes in ˜1100, 1255, and 1344 AD. We combine these dates with digital elevation models, geological maps, differential GPS data, and sediment logs to estimate the extent of these three pulses that are characterized by sedimentation rates of ˜200 mm yr-1 and peak rates as high as 1,000 mm yr-1. Some 5.5 to 9 km3 of material infilled the pre-existing topography, and is now prone to ongoing fluvial dissection along major canyons. Contemporary river incision into the Pokhara Formation is rapid (120-170 mm yr-1), triggering widespread bank erosion, channel changes, and very high sediment yields of the order of 103 to 105 t km-2 yr-1, that by far outweigh bedrock denudation rates
THE GREAT SOUTHERN CALIFORNIA SHAKEOUT: Earthquake Science for 22 Million People
Jones, L.; Cox, D.; Perry, S.; Hudnut, K.; Benthien, M.; Bwarie, J.; Vinci, M.; Buchanan, M.; Long, K.; Sinha, S.; Collins, L.
2008-12-01
Earthquake science is being communicated to and used by the 22 million residents of southern California to improve resiliency to future earthquakes through the Great Southern California ShakeOut. The ShakeOut began when the USGS partnered with the California Geological Survey, the Southern California Earthquake Center, and many other organizations to bring 300 scientists and engineers together to formulate a comprehensive description of a plausible major earthquake, released in May 2008 as the ShakeOut Scenario: a description of the impacts and consequences of a M7.8 earthquake on the southern San Andreas Fault (USGS OFR 2008-1150). The Great Southern California ShakeOut was a week of special events featuring the largest earthquake drill in United States history. The ShakeOut drill occurred in houses, businesses, and public spaces throughout southern California at 10 AM on November 13, 2008, when southern Californians were asked to pretend that the M7.8 scenario earthquake had occurred and to practice actions that could reduce its impact on their lives. Residents, organizations, schools, and businesses registered to participate in the drill through www.shakeout.org, where they could get accessible information about the scenario earthquake and share ideas for better preparation. As of September 8, 2008, over 2.7 million confirmed participants had been registered. The primary message of the ShakeOut is that what we do now, before a big earthquake, will determine what our lives will be like after. The goal of the ShakeOut has been to change the culture of earthquake preparedness in southern California, making earthquakes a regularly discussed reality. This implements the sociological finding that 'milling', discussing a problem with loved ones, is a prerequisite to taking action. ShakeOut milling is taking place at all levels, from individuals and families to corporations and governments. Actions taken as a result of the ShakeOut include the adoption of earthquake
The GIS and analysis of earthquake damage distribution of the 1303 Hongtong M=8 earthquake
Gao, Meng-Tan; Jin, Xue-Shen; An, Wei-Ping; Lü, Xiao-Jian
2004-07-01
A geographic information system for the 1303 Hongtong M=8 earthquake has been established. Using the spatial analysis functions of GIS, the spatial distribution characteristics of damage and the isoseismals of the earthquake are studied. By comparison with the standard earthquake-intensity attenuation relationship, an anomalous damage distribution for this earthquake is identified, and its relationship with tectonics, site conditions, and basins is analyzed. The influence on ground motion of the earthquake source and of the underground structures near the source is also studied. The implications of the anomalous damage distribution for seismic zonation, earthquake-resistant design, earthquake prediction, and earthquake emergency response are discussed.
Earthquake data base for Romania
International Nuclear Information System (INIS)
Rizescu, M.; Ghica, D.; Grecu, B.; Popa, M.; Borcia, I. S.
2002-01-01
A new earthquake database for Romania is being constructed, comprising complete, up-to-date earthquake information in a user-friendly and rapidly accessible form. One main component of the database consists of the catalog of earthquakes that have occurred in Romania from 984 to the present. The catalog contains information related to locations and other source parameters, when available, and links to waveforms of important earthquakes. The other very important component is the 'strong motion database', developed for strong intermediate-depth Vrancea earthquakes for which instrumental data were recorded. Different parameters characterizing strong motion properties, such as effective peak acceleration, effective peak velocity, corner periods Tc and Td, and global response-spectrum-based intensities, were computed and recorded in this database. Information on the recording seismic stations, such as maps giving their positions, photographs of the instruments, and site conditions ('free-field' or 'on buildings'), is also included. By the volume and quality of the gathered data, and by its user-friendly interface, the Romanian earthquake database provides a very useful tool for geosciences and civil engineering in their effort towards reducing seismic risk in Romania. (authors)
Mapping Tectonic Stress Using Earthquakes
International Nuclear Information System (INIS)
Arnold, Richard; Townend, John; Vignaux, Tony
2005-01-01
An earthquake occurs when the forces acting on a fault overcome its intrinsic strength and cause it to slip abruptly. Understanding more specifically why earthquakes occur at particular locations and times is complicated because in many cases we do not know what these forces actually are, or indeed what processes ultimately trigger slip. The goal of this study is to develop, test, and implement a Bayesian method of reliably determining tectonic stresses using the most abundant stress gauges available: earthquakes themselves. Existing algorithms produce reasonable estimates of the principal stress directions, but yield unreliable error bounds as a consequence of the generally weak constraint on stress imposed by any single earthquake, observational errors, and an unavoidable ambiguity between the fault normal and the slip vector. A statistical treatment of the problem can take into account observational errors, combine data from multiple earthquakes in a consistent manner, and provide realistic error bounds on the estimated principal stress directions. We have developed a realistic physical framework for modelling multiple earthquakes and show how the strong physical and geometrical constraints present in this problem allow inference to be made about the orientation of the principal axes of stress in the earth's crust.
Swedish earthquakes and acceleration probabilities
International Nuclear Information System (INIS)
Slunga, R.
1979-03-01
A method to assign probabilities to ground accelerations at Swedish sites is described. As hardly any near-field instrumental data are available, we are left with the problem of interpreting macroseismic data in terms of acceleration. By theoretical wave-propagation computations, the relation between the seismic strength of the earthquake, focal depth, distance, and ground acceleration is calculated. We found that most Swedish earthquakes are shallow; the largest earthquake of the area, the 1904 earthquake 100 km south of Oslo, is an exception and probably had a focal depth exceeding 25 km. For the nuclear power plant sites an annual probability of 10⁻⁵ has been proposed as of interest. This probability gives ground accelerations in the range 5-20 % g for the sites. This acceleration is for a free bedrock site; for consistency, all acceleration results in this study are given for bedrock sites. When applying our model to the 1904 earthquake and assuming the focal zone to be in the lower crust, we find the epicentral acceleration of this earthquake to be 5-15 % g. The results above are based on an analysis of macroseismic data, as relevant instrumental data are lacking. However, the macroseismic acceleration model deduced in this study gives epicentral ground accelerations of small Swedish earthquakes in agreement with existing distant instrumental data. (author)
Building with Earthquakes in Mind
Mangieri, Nicholas
2016-04-01
Earthquakes are some of the most elusive and destructive disasters humans interact with on this planet. Engineering structures to withstand earthquake shaking is critical to ensure minimal loss of life and property. However, the majority of buildings today in areas not traditionally considered earthquake-prone are not built to withstand this devastating force. Understanding basic earthquake engineering principles and the effect of limited resources helps students grasp the challenge that lies ahead. The solution can be found in retrofitting existing buildings with proper reinforcements and designs to deal with this deadly disaster. The students were challenged in this project to construct a basic structure, using limited resources, that could withstand a simulated tremor on an earthquake shake table. Groups of students had to work together to creatively manage their resources and ideas to design the most feasible and realistic type of building. This activity provided a wealth of opportunities for the students to learn more about a type of disaster they do not experience in this part of the country. Because most buildings in New York City were not designed to withstand earthquake shaking, the students were able to gain an appreciation for how difficult it would be to prepare every structure in the city for this type of event.
Large earthquakes and creeping faults
Harris, Ruth A.
2017-01-01
Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.
Earthquake damage to underground facilities
International Nuclear Information System (INIS)
Pratt, H.R.; Hustrulid, W.A.; Stephenson, D.E.
1978-11-01
The potential seismic risk for an underground nuclear waste repository will be one of the considerations in evaluating its ultimate location. However, the risk to subsurface facilities cannot be judged by applying intensity ratings derived from the surface effects of an earthquake. A literature review and analysis were performed to document the damage and non-damage due to earthquakes to underground facilities. Damage from earthquakes to tunnels, shafts, and wells, and damage (rock bursts) from mining operations, were investigated. Damage from documented nuclear events was also included in the study where applicable. There are very few data on damage in the subsurface due to earthquakes. This fact itself attests to the lessened effect of earthquakes in the subsurface, because mines exist in areas where strong earthquakes have done extensive surface damage. More damage is reported in shallow tunnels near the surface than in deep mines. In mines and tunnels, large displacements occur primarily along pre-existing faults and fractures or at the surface entrance to these facilities. Data indicate vertical structures such as wells and shafts are less susceptible to damage than surface facilities. More analysis is required before seismic criteria can be formulated for the siting of a nuclear waste repository.
Global earthquake fatalities and population
Holzer, Thomas L.; Savage, James C.
2013-01-01
Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
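The projection above can be approximated with a short calculation: for a nonstationary Poisson process whose rate is proportional to world population, the expected number of catastrophic events is the per-capita rate integrated over the population trajectory. The sketch below is not the authors' code; the linear population ramps are illustrative assumptions, calibrated on the four observed >100,000-fatality events of the 20th century.

```python
import numpy as np

def expected_events(rate_per_billion_year, population_by_year):
    """Expected event count of a Poisson process with rate proportional to population."""
    return rate_per_billion_year * float(np.sum(population_by_year))

# Hypothetical linear population ramps (billions); real curves differ.
pop_20th = np.linspace(1.65, 6.1, 100)   # ~1900-2000
pop_21st = np.linspace(6.1, 10.1, 100)   # ~2000-2100, reaching 10.1 in 2100

# Calibrate on the 4 observed >100,000-fatality events of the 20th century.
rate = 4.0 / pop_20th.sum()              # events per billion-person-year
n_21st = expected_events(rate, pop_21st)
print(round(n_21st, 1))  # 8.4: the same order as the paper's 8.7 +/- 3.3
```

Even this crude linear-ramp calibration lands close to the paper's central estimate, which illustrates how directly the prediction follows from rate-proportional-to-population scaling.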
Triggered creep as a possible mechanism for delayed dynamic triggering of tremor and earthquakes
Shelly, David R.; Peng, Zhigang; Hill, David P.; Aiken, Chastity
2011-01-01
The passage of radiating seismic waves generates transient stresses in the Earth's crust that can trigger slip on faults far away from the original earthquake source. The triggered fault slip is detectable in the form of earthquakes and seismic tremor. However, the significance of these triggered events remains controversial, in part because they often occur with some delay, long after the triggering stress has passed. Here we scrutinize the location and timing of tremor on the San Andreas fault between 2001 and 2010 in relation to distant earthquakes. We observe tremor on the San Andreas fault that is initiated by passing seismic waves, yet migrates along the fault at a much slower velocity than the radiating seismic waves. We suggest that the migrating tremor records triggered slow slip of the San Andreas fault as a propagating creep event. We find that the triggered tremor and fault creep can be initiated by distant earthquakes as small as magnitude 5.4 and can persist for several days after the seismic waves have passed. Our observations of prolonged tremor activity provide a clear example of the delayed dynamic triggering of seismic events. Fault creep has been shown to trigger earthquakes, and we therefore suggest that the dynamic triggering of prolonged fault creep could provide a mechanism for the delayed triggering of earthquakes. © 2011 Macmillan Publishers Limited. All rights reserved.
Adaptive regularization of noisy linear inverse problems
DEFF Research Database (Denmark)
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
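For a Gaussian prior and likelihood, the stated relation can be checked numerically: the evidence-maximizing prior precision alpha in Bayesian ridge regression satisfies E_posterior[||w||^2] = E_prior[||w||^2] = d/alpha. The sketch below uses synthetic data with a known noise variance; it is an illustration of the relation, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, sigma2 = 200, 5, 0.25
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + np.sqrt(sigma2) * rng.standard_normal(n)

def posterior_moments(alpha):
    """Posterior N(mu, S) of the weights under prior w ~ N(0, I/alpha)."""
    S = np.linalg.inv(alpha * np.eye(d) + X.T @ X / sigma2)
    mu = S @ X.T @ y / sigma2
    return mu, S

def log_evidence(alpha):
    """Log marginal likelihood of the Gaussian linear model."""
    mu, S = posterior_moments(alpha)
    resid = y - X @ mu
    return (0.5 * d * np.log(alpha) - 0.5 * n * np.log(2 * np.pi * sigma2)
            + 0.5 * np.linalg.slogdet(S)[1]
            - 0.5 * alpha * mu @ mu - 0.5 * resid @ resid / sigma2)

alphas = np.logspace(-3, 3, 4000)
best = alphas[np.argmax([log_evidence(a) for a in alphas])]
mu, S = posterior_moments(best)
post = mu @ mu + np.trace(S)   # E_posterior[||w||^2]
prior = d / best               # E_prior[||w||^2]
print(abs(post - prior) / prior < 0.05)  # the two expectations agree at the optimum
```

Here ||w||^2 plays the role of the regularization function; the agreement of its prior and posterior expectations at the evidence-optimal alpha is exactly the stationarity condition of the evidence with respect to the hyper-parameter.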
Higher derivative regularization and chiral anomaly
International Nuclear Information System (INIS)
Nagahama, Yoshinori.
1985-02-01
A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)
Regularity effect in prospective memory during aging
Directory of Open Access Journals (Sweden)
Geoffrey Blondelle
2016-10-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting), binding, short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Regularity effect in prospective memory during aging
Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique
2016-01-01
Background: Regularity effect can affect performance in prospective memory (PM), but little is known on the cognitive processes linked to this effect. Moreover, its impacts with regard to aging remain unknown. To our knowledge, this study is the first to examine regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults.Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...
Twitter earthquake detection: earthquake monitoring in a social world
Directory of Open Access Journals (Sweden)
Daniel C. Bowden
2011-06-01
The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word “earthquake” clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average (STA/LTA) algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provide very short first-impression narratives from people who experienced the shaking.
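The short-term-average/long-term-average detector described above is simple to sketch. The following minimal illustration applies it to a synthetic tweet-count series; the window lengths, trigger threshold, and background rate are assumptions for the example, not the USGS settings.

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Causal short-term / long-term moving-average ratio, aligned at the series end."""
    csum = np.cumsum(np.asarray(x, dtype=float))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    m = min(sta.size, lta.size)
    return sta[-m:] / np.maximum(lta[-m:], 1e-9)

rng = np.random.default_rng(0)
counts = rng.poisson(2, 120)   # background: ~2 'earthquake' tweets per minute
counts[60:63] += 40            # widely felt event: sharp spike in tweet frequency

ratio = sta_lta(counts, n_sta=2, n_lta=30)
triggered = bool(np.any(ratio > 5.0))   # illustrative trigger threshold
print(triggered)  # True
```

A declaration fires when the short window (seconds-to-minutes of recent counts) rises well above the long window (the background rate), which is why the detector responds within a couple of samples of the spike.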
Measurand transient signal suppressor
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and to produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
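The patented device is analog hardware (transducer, time-delay circuit, gate, relay), but its logic is the familiar hold-time debounce. A minimal software analogue, with hypothetical sampled values:

```python
def suppress_transients(samples, threshold, hold_samples, rising=True):
    """Emit True only once a threshold crossing persists for hold_samples in a row."""
    run, out = 0, []
    for s in samples:
        crossed = (s > threshold) if rising else (s < threshold)
        run = run + 1 if crossed else 0   # reset the hold timer on any dropout
        out.append(run >= hold_samples)
    return out

signal = [0, 0, 9, 0, 0, 9, 9, 9, 9, 0]  # a one-sample spike, then a sustained event
drive = suppress_transients(signal, threshold=5, hold_samples=3)
print(drive.index(True))  # 7: the spike at index 2 never produces a drive signal
```

As in the hardware, the spike is suppressed because it does not outlast the selected suppression time, while the sustained crossing energizes the "relay" once the hold interval elapses.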
Transient regional osteoporosis
Directory of Open Access Journals (Sweden)
F. Trotta
2011-09-01
Transient osteoporosis of the hip and regional migratory osteoporosis are uncommon and probably underdiagnosed bone diseases characterized by pain and functional limitation, mainly affecting weight-bearing joints of the lower limbs. These conditions are usually self-limiting, and symptoms tend to abate within a few months without sequelae. Routine laboratory investigations are unremarkable. Middle-aged men, and women during the last months of pregnancy or in the immediate post-partum period, are principally affected. Osteopenia with preservation of the articular space and transitory edema of the bone marrow, revealed by magnetic resonance imaging, are common to these two conditions, so they are also known by the term regional transitory osteoporosis. The appearance of bone marrow edema is not specific to regional transitory osteoporosis but can be observed in several diseases (trauma, reflex sympathetic dystrophy, avascular osteonecrosis, infections, tumors) from which it must be differentiated. The etiology of this condition is unknown. Pathogenesis is still debated, in particular the relationship with reflex sympathetic dystrophy, with which regional transitory osteoporosis is often identified. The purpose of the present review is to remark on the relationship between transient osteoporosis of the hip and regional migratory osteoporosis, with particular attention to the bone marrow edema pattern and the relative differential diagnosis.
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach of presenting unfolded data only in graphical form, with the diagonal error depending on the regularization strength, is unsatisfactory. It does not permit the adjustment of parameters of theories or the exclusion of theories that are admitted by the observed data, and it does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
Iterative Regularization with Minimum-Residual Methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
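The mechanism by which truncated iterations regularize, semiconvergence, can be illustrated with an even simpler stationary method. The sketch below uses Landweber iteration (not one of the Krylov methods studied in the paper) on a synthetic diagonal ill-posed problem: the error to the true solution first decreases and then grows as the iterate begins fitting the noise, so early stopping acts as regularization.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
lam = 1.0 / np.arange(1, n + 1)    # decaying spectrum: a discrete ill-posed problem
A = np.diag(lam)
x_true = lam ** 2                   # smooth exact solution (Picard condition holds)
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy right-hand side

x = np.zeros(n)
errors = []
for k in range(2000):               # Landweber: x += w * A^T (b - A x), with w = 1
    x = x + A.T @ (b - A @ x)
    errors.append(np.linalg.norm(x - x_true))

best = int(np.argmin(errors))       # semiconvergence: error dips, then rises again
print(best < len(errors) - 1, errors[-1] > min(errors))
```

The iteration number plays the role of the regularization parameter: early iterations resolve the well-determined (large-eigenvalue) components, while later ones progressively admit the noise-amplified small-eigenvalue components, which is the same trade-off the Krylov basis vectors mediate in MINRES and MR-II.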
Iterative regularization with minimum-residual methods
DEFF Research Database (Denmark)
Jensen, Toke Koldborg; Hansen, Per Christian
2006-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
Evidence for Ancient Mesoamerican Earthquakes
Kovach, R. L.; Garcia, B.
2001-12-01
Evidence for past earthquake damage at Mesoamerican ruins is often overlooked because of the invasive effects of tropical vegetation and is usually not considered as a causal factor when restoration and reconstruction of many archaeological sites are undertaken. Yet the proximity of many ruins to zones of seismic activity would argue otherwise. Clues as to the types of damage which should be sought were offered in September 1999 when the M = 7.5 Oaxaca earthquake struck the ruins of Monte Alban, Mexico, where archaeological renovations were underway. More than 20 structures were damaged, 5 of them seriously. Damage features noted were walls out of plumb; fractures in walls, floors, basal platforms, and tableros; toppling of columns; and deformation, settling, and tumbling of walls. A Modified Mercalli Intensity of VII (ground accelerations 18-34 % g) occurred at the site. Within the diffuse landward extension of the Caribbean plate boundary zone, M = 7+ earthquakes occur with repeat times of hundreds of years, arguing that many Maya sites were subjected to earthquakes. Damage to re-erected and reinforced stelae, walls, and buildings was witnessed at Quirigua, Guatemala, during an expedition underway when the 1976 M = 7.5 Guatemala earthquake on the Motagua fault struck. Excavations also revealed evidence (domestic pottery vessels and the skeleton of a child crushed under fallen walls) of an ancient earthquake occurring about the time of the demise and abandonment of Quirigua in the late 9th century. Striking evidence for sudden earthquake building collapse at the end of the Maya Classic Period, ~A.D. 889, was found at Benque Viejo (Xunantunich), Belize, located 210 km north of Quirigua. It is argued that a M = 7.5 to 7.9 earthquake at the end of the Maya Classic period, centered in the vicinity of the Chixoy-Polochic and Motagua fault zones, could have produced the contemporaneous earthquake damage at the above sites. As a consequence, this earthquake may have accelerated the
Comparison of two large earthquakes: the 2008 Sichuan Earthquake and the 2011 East Japan Earthquake.
Otani, Yuki; Ando, Takayuki; Atobe, Kaori; Haiden, Akina; Kao, Sheng-Yuan; Saito, Kohei; Shimanuki, Marie; Yoshimoto, Norifumi; Fukunaga, Koichi
2012-01-01
Between August 15th and 19th, 2011, eight 5th-year medical students from the Keio University School of Medicine had the opportunity to visit the Peking University School of Medicine and hold a discussion session titled "What is the most effective way to educate people for survival in an acute disaster situation (before the mental health care stage)?" During the session, we discussed the following six points: basic information regarding the Sichuan Earthquake and the East Japan Earthquake, differences in preparedness for earthquakes, government actions, acceptance of medical rescue teams, earthquake-induced secondary effects, and media restrictions. Although comparison of the two earthquakes was not simple, we concluded that three major points should be emphasized to facilitate the most effective course of disaster planning and action. First, all relevant agencies should formulate emergency plans and should supply information regarding emergencies to the general public and health professionals on a routine basis. Second, each citizen should be educated and trained in how to minimize the risks from earthquake-induced secondary effects. Finally, the central government should establish a single headquarters responsible for command, control, and coordination during a natural disaster emergency and should centralize all powers in this single authority. We hope this discussion may be of some use in future natural disasters in China, Japan, and worldwide.
VO2 OFF TRANSIENT KINETICS IN EXTREME INTENSITY SWIMMING
Directory of Open Access Journals (Sweden)
Ana Sousa
2011-09-01
Inconsistencies regarding the dynamic asymmetry between the on- and off-transient responses in oxygen uptake are found in the literature. Therefore, the purpose of this study was to characterize the oxygen uptake off-transient kinetics during a maximal 200-m front crawl effort, examining the degree to which the on/off regularity of the oxygen uptake kinetics response was preserved. Eight high-level male swimmers performed a 200-m front crawl at maximal speed during which oxygen uptake was directly measured through breath-by-breath oximetry (averaged every 5 s). This apparatus was connected to the swimmer by a low-hydrodynamic-resistance respiratory snorkel and valve system. Results: The on- and off-transient phases were symmetrical in shape (mirror image), since both were adequately fitted by single-exponential regression models, and no slow component of the oxygen uptake response developed. Mean (± SD) peak oxygen uptake was 69.0 (± 6.3) mL·kg⁻¹·min⁻¹, significantly correlated with the time constant of the off-transient period (r = 0.76, p < 0.05) but not with any of the other oxygen off-transient kinetic parameters studied. A direct relationship between the time constant of the off-transient period and the mean swimming speed of the 200-m (r = 0.77, p < 0.05), and with the amplitude of the fast component of the effort period (r = 0.72, p < 0.05), was observed. The mean amplitude and time constant of the off-transient period were significantly greater than the respective on-transient values. In conclusion, although an asymmetry between the on- and off-kinetic parameters was verified, both the 200-m effort and the respective recovery period were better characterized by a single-exponential regression model.
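The single-exponential model used for the on- and off-transients has the form VO2(t) = baseline + A * exp(-t/tau). As a sketch, with illustrative parameter values rather than the study's data, the time constant tau can be recovered from noiseless synthetic recovery data by a log-linear fit after subtracting the baseline:

```python
import numpy as np

# Noiseless synthetic off-transient, sampled every 5 s as in the study.
t = np.arange(0, 150, 5.0)
vo2 = 5.0 + 60.0 * np.exp(-t / 45.0)   # baseline 5, amplitude 60, tau 45 s (assumed)

# With the baseline subtracted, the decay is linear in log space:
# log(vo2 - baseline) = log(A) - t / tau.
slope, intercept = np.polyfit(t, np.log(vo2 - 5.0), 1)
tau = -1.0 / slope
print(round(tau, 1))  # 45.0
```

With real, noisy breath-by-breath data one would instead fit the three parameters (baseline, amplitude, tau) by nonlinear least squares, since the baseline is not known in advance and the log transform distorts the noise.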
Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)
Jordan, T. H.
2010-12-01
timely, and they need to convey the epistemic uncertainties in the operational forecasts. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of earthquake forecasts. (c) The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing in a CSEP-type environment against established long-term forecasts and a wide variety of alternative, time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in PSHA. (e) Alert procedures should be standardized to facilitate decisions at different levels of government and among the public, based in part on objective analysis of costs and benefits. (f) In establishing alert procedures, consideration should also be given to the less tangible aspects of the value of information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that can lead to informal predictions and misinformation.
Do earthquakes exhibit self-organized criticality?
International Nuclear Information System (INIS)
Yang Xiaosong; Ma Jin; Du Shuming
2004-01-01
If earthquakes are phenomena of self-organized criticality (SOC), the statistical characteristics of the earthquake time series should be invariant after the sequence of events in an earthquake catalog is randomly rearranged. In this Letter we argue that earthquakes are unlikely to be phenomena of SOC, because our analysis of the Southern California Earthquake Catalog shows that the first-return-time probability P_M(T) is appreciably changed after the time series is rearranged. This suggests that SOC theory should not be used to oppose efforts at earthquake prediction.
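The rearrangement test can be sketched as follows. This is a toy illustration with a synthetic aftershock-style catalog, not the Southern California data, and it uses the coefficient of variation of first-return times as a simple stand-in for the full P_M(T) comparison:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical synthetic catalog: each mainshock (magnitude >= 5) is
# followed by 20 aftershocks at short gaps, then a long quiescent gap.
times, mags = [], []
t = 0.0
for _ in range(300):
    t += rng.exponential(100.0)        # long gap before next mainshock
    times.append(t); mags.append(rng.uniform(5.0, 7.0))
    for _ in range(20):
        t += rng.exponential(0.5)      # short aftershock gaps
        times.append(t); mags.append(rng.uniform(2.0, 4.0))
times, mags = np.array(times), np.array(mags)

def return_time_cv(times, mags, m_min=5.0):
    """Coefficient of variation of first-return times of events >= m_min."""
    tau = np.diff(times[mags >= m_min])
    return tau.std() / tau.mean()

cv_original = return_time_cv(times, mags)
# Randomly rearrange which magnitude occurs at which time, destroying the
# temporal structure while keeping the same set of times and magnitudes.
cv_shuffled = return_time_cv(times, rng.permutation(mags))
# For a catalog with real temporal structure the two statistics differ;
# invariance under rearrangement is what the SOC hypothesis would predict.
```

Here `return_time_cv` is a hypothetical helper; the Letter's actual test compares the full first-return-time distributions.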
Circuit breaker operation and potential failure modes during an earthquake
International Nuclear Information System (INIS)
Lambert, H.E.; Budnitz, R.J.
1987-01-01
This study addresses the effect of a strong-motion earthquake on circuit breaker operation, focusing on the loss-of-offsite-power (LOSP) transient caused by a strong-motion earthquake at the Zion Nuclear Power Plant. Numerous circuit breakers important to plant safety, such as circuit breakers to diesel generators and engineered safety systems (ESS), must open and/or close during this transient while strong motion is occurring. Potential seismically induced circuit-breaker failure modes were uncovered while the study was conducted. These failure modes include: circuit breaker fails to close; circuit breaker trips inadvertently; circuit breaker fails to reclose after trip. Their causes include: relay chatter causes the circuit breaker to trip; relay chatter causes anti-pumping relays to seal in, which prevents automatic closure of circuit breakers; load sequencer failures. The paper also describes the operator action necessary to prevent core melt if these circuit breaker failure modes occur simultaneously on three 4.16 kV buses, and discusses the incorporation of these failure modes, as well as other instrumentation and control failures, into a limited-scope seismic probabilistic risk assessment.
Earthquake, GIS and multimedia. The 1883 Casamicciola earthquake
Directory of Open Access Journals (Sweden)
M. Rebuffat
1995-06-01
Full Text Available A series of multimedia monographs concerning the main seismic events that have affected the Italian territory are being produced for the Documental Integrated Multimedia Project (DIMP) started by the Italian National Seismic Survey (NSS). The purpose of the project is to reconstruct the historical record of earthquakes and promote public earthquake education. Producing the monographs, developed in ARC/INFO under UNIX, involved designing a special filing and management methodology to integrate heterogeneous information (images, papers, cartographies, etc.). This paper describes the possibilities of a GIS (Geographic Information System) for the filing and management of documental information. As an example we present the first monograph, on the 1883 Casamicciola earthquake on the island of Ischia (Campania, Italy). This earthquake is particularly interesting for the following reasons: (1) its historical-cultural context (the first destructive seismic event after the unification of Italy); (2) its features (a volcanic earthquake); (3) the socioeconomic consequences it caused at such an important seaside resort.
Extreme value statistics and thermodynamics of earthquakes: large earthquakes
Directory of Open Access Journals (Sweden)
B. H. Lavenda
2000-06-01
Full Text Available A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. A numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high-energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
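The energy-magnitude transformation from Fréchet to Gumbel can be sketched with synthetic data. This is an assumed toy setup, not the paper's analysis: energies with a Pareto tail give sample maxima that are approximately Fréchet, and the magnitude-like variable log10(E_max) is then approximately Gumbel, whose scale a moment estimate can recover:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Pareto-tailed energies (tail index alpha), in blocks.
alpha = 1.5
energies = rng.pareto(alpha, size=(4000, 500)) + 1.0
e_max = energies.max(axis=1)          # block maxima: approximately Frechet

m_max = np.log10(e_max)               # magnitude-like variable: Gumbel

# Gumbel moment check: std * sqrt(6)/pi estimates the Gumbel scale, which
# for this transformation should equal 1 / (alpha * ln 10).
scale_hat = m_max.std() * np.sqrt(6.0) / np.pi
scale_theory = 1.0 / (alpha * np.log(10.0))
```

The same log transform is why an unbounded Pareto energy tail is compatible with a Gumbel, rather than Fréchet, law for maximum magnitudes.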
Energy Technology Data Exchange (ETDEWEB)
Tél, Tamás [Institute for Theoretical Physics, Eötvös University, and MTA-ELTE Theoretical Physics Research Group, Pázmány P. s. 1/A, Budapest H-1117 (Hungary)
2015-09-15
We intend to show that transient chaos is a very appealing, but still not widely appreciated, subfield of nonlinear dynamics. Besides flashing its basic properties and giving a brief overview of the many applications, a few recent transient-chaos-related subjects are introduced in some detail. These include the dynamics of decision making, dispersion, and sedimentation of volcanic ash, doubly transient chaos of undriven autonomous mechanical systems, and a dynamical systems approach to energy absorption or explosion.
Transient osteoporosis of the hip
International Nuclear Information System (INIS)
McWalter, Patricia; Hassan Ahmed
2007-01-01
Transient osteoporosis of the hip is an uncommon cause of hip pain, mostly affecting healthy middle-aged men and also women in the third trimester of pregnancy. We present a case of transient osteoporosis of the hip in a 33-year-old non-pregnant female patient. This case highlights the importance of considering a diagnosis of transient osteoporosis of the hip in patients who present with hip pain. (author)
The ZTF Bright Transient Survey
Fremling, C.; Sharma, Y.; Kulkarni, S. R.; Miller, A. A.; Taggart, K.; Perley, D. A.; Gooba, A.
2018-06-01
As a supplement to the Zwicky Transient Facility (ZTF; ATel #11266) public alerts (ATel #11685) we plan to report (following ATel #11615) bright probable supernovae identified in the raw alert stream from the ZTF Northern Sky Survey ("Celestial Cinematography"; see Bellm & Kulkarni, 2017, Nature Astronomy 1, 71) to the Transient Name Server (https://wis-tns.weizmann.ac.il) on a daily basis; the ZTF Bright Transient Survey (BTS; see Kulkarni et al., 2018; arXiv:1710.04223).
Transient Infrared Emission Spectroscopy
Jones, Roger W.; McClelland, John F.
1989-12-01
Transient Infrared Emission Spectroscopy (TIRES) is a new technique that reduces the occurrence of self-absorption in optically thick solid samples so that analytically useful emission spectra may be observed. Conventional emission spectroscopy, in which the sample is held at an elevated, uniform temperature, is practical only for optically thin samples. In thick samples the emission from deep layers of the material is partially absorbed by overlying layers [1]. This self-absorption results in emission spectra from most optically thick samples that closely resemble black-body spectra: the characteristic discrete emission bands are severely truncated and altered in shape. TIRES bypasses this difficulty by using a laser to heat only an optically thin surface layer. The increased temperature of the layer is transient, since the layer rapidly cools and thickens by thermal diffusion; hence the emission collection must be correlated with the laser heating. TIRES may be done with both pulsed and cw lasers [2,3]. When a pulsed laser is used, the spectrometer sampling must be synchronized with the laser pulsing so that only emission during and immediately after each laser pulse is observed [3]. If a cw laser is used, the sample must move rapidly through the beam. The hot, transient layer is then in the beam track on the sample at and immediately behind the beam position, so the spectrometer field of view must be limited to this region near the beam position [2]. How much self-absorption the observed emission suffers depends on how thick the heated layer has grown by thermal diffusion when the spectrometer samples the emission. Use of a pulsed laser synchronized with the spectrometer sampling readily permits reduction of the time available for heat diffusion to about 100 μs [3]. When a cw laser is used, the heat-diffusion time is controlled by how small the spectrometer field of view is and by how rapidly the sample moves past within this field. Both a very small field of view and a
Laboratory generated M -6 earthquakes
McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.
2014-01-01
We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
Anomalous variation in GPS based TEC measurements prior to the 30 September 2009 Sumatra Earthquake
Karia, Sheetal; Pathak, Kamlesh
This paper investigates the features of pre-earthquake ionospheric anomalies in total electron content (TEC) data obtained from regular GPS observations with the GPS receiver at SVNIT Surat (21.16 N, 72.78 E Geog), located at the northern crest of the equatorial anomaly region. The data were analysed for 5 different earthquakes that occurred during 2009 in India and its neighbouring regions. Our observations show that for earthquakes whose preparation area lies between the crests of the equatorial anomaly, close to the geomagnetic equator, an enhancement in TEC was followed by a depletion in TEC on the day of the earthquake, which may be connected with distortions of the equatorial anomaly shape. For the analysis of the ionospheric effects of one such case, the 30 September 2009 Sumatra earthquake, Global Ionospheric Maps of TEC were used. The possible influence of earthquake preparation processes on the main low-latitude ionospheric peculiarity, the equatorial anomaly, is discussed.
Anticipated transients without scram
International Nuclear Information System (INIS)
Lellouche, G.S.
1980-01-01
This article discusses, in varying degrees of depth, the publications WASH-1270, WASH-1400, and NUREG-0460; its purpose is to describe the technical work done by Electric Power Research Institute (EPRI) personnel and its contractors on the subject of anticipated transients without scram (ATWS). It demonstrates the close relation between the probability of scram failure derived from historical scram data and that derived from the use of component data in a model of a system (the so-called synthesis method), as was done in WASH-1400. The inherent conservatism of these models is demonstrated by showing that they predict significantly more events than have in fact occurred, and that such models still predict scram failure probabilities low enough to make ATWS an insignificant contributor to accident risk.
International Nuclear Information System (INIS)
Roche, L.; Schmitz, F.
1982-10-01
The observation of micrographic documents of fuel after a CABRI test leads us to postulate a specific mode of transient fuel melting during a rapid nuclear power excursion. When the melt threshold is reached, the bands characteristic of the solid state are broken statistically over a macroscopic region. The time the fuel is maintained at the critical enthalpy level between solid and liquid is too short to lead to a phase separation. A significant lifetime (approximately 1 second) of this intermediate 'unsolid' state would have consequences for the variation of physical properties linked to the solid/liquid phase transition: viscosity, specific volume and (for irradiated fuel) fission gas release.
Transient osteoporosis of pregnancy.
Maliha, George; Morgan, Jordan; Vrahas, Mark
2012-08-01
Transient osteoporosis of pregnancy (TOP) is a rare yet perhaps under-reported condition that has affected otherwise healthy pregnancies throughout the world. The condition presents suddenly in the third trimester of a usually uneventful pregnancy and progressively immobilizes the mother. Radiographic studies detect drastic loss of bone mass, elevated rates of turnover in the bone, and oedema in the affected portion. Weakness of the bone can lead to fractures during delivery and other complications for the mother. Then, within weeks of labour, symptoms and radiological findings resolve. Aetiology is currently unknown, although neural, vascular, haematological, endocrine, nutrient-deficiency, and other etiologies have been proposed. Several treatments have also been explored, including simple bed rest, steroids, bisphosphonates, calcitonin, induced termination of pregnancy, and surgical intervention. The orthopedist plays an essential role in monitoring the condition (and potential complications) as well as ensuring satisfactory outcomes for both the mother and newborn. Copyright © 2012 Elsevier Ltd. All rights reserved.
A regularized stationary mean-field game
Yang, Xianjin
2016-01-01
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
On infinite regular and chiral maps
Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán
2015-01-01
We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.
From recreational to regular drug use
DEFF Research Database (Denmark)
Järvinen, Margaretha; Ravn, Signe
2011-01-01
This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...
Automating InDesign with Regular Expressions
Kahrel, Peter
2006-01-01
If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
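The kind of find-and-change that regular expressions enable can be illustrated outside InDesign as well. A minimal Python sketch (the book itself works with InDesign's GREP dialect and scripting; the text and pattern here are invented examples):

```python
import re

# Wrap every point-size value ("11pt", "8.5pt") in markers, capturing the
# number so it can be reused in the replacement -- something a literal
# search-and-replace cannot do.
text = "Set the body at 11pt and captions at 8.5pt."
result = re.sub(r"(\d+(?:\.\d+)?)pt", r"[\1 pt]", text)
print(result)  # → Set the body at [11 pt] and captions at [8.5 pt].
```

The capture group `(\d+(?:\.\d+)?)` matches an integer or decimal number, and `\1` reinserts it in the replacement.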
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
2010-07-01
... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...
The music of earthquakes and Earthquake Quartet #1
Michael, Andrew J.
2013-01-01
Earthquake Quartet #1, my composition for voice, trombone, cello, and seismograms, is the intersection of listening to earthquakes as a seismologist and performing music as a trombonist. Along the way, I realized there is a close relationship between what I do as a scientist and what I do as a musician. A musician controls the source of the sound and the path it travels through their instrument in order to make sound waves that we hear as music. An earthquake is the source of waves that travel along a path through the earth until reaching us as shaking. It is almost as if the earth is a musician and people, including seismologists, are metaphorically listening and trying to understand what the music means.
Safety And Transient Analyses For Full Core Conversion Of The Dalat Nuclear Research Reactor
International Nuclear Information System (INIS)
Luong Ba Vien; Le Vinh Vinh; Huynh Ton Nghiem; Nguyen Kien Cuong
2011-01-01
In preparation for full core conversion of the Dalat Nuclear Research Reactor (DNRR), safety and transient analyses were carried out to confirm that the proposed Low Enriched Uranium (LEU) working core can operate safely. The initial LEU core, consisting of 92 LEU fuel assemblies and 12 beryllium rods, was analyzed under the initiating events of uncontrolled withdrawal of a control rod, cooling pump failure, earthquake, and fuel cladding failure. The working LEU core response to these initiating events was evaluated using the RELAP5/Mod3.2 computer code and other supporting codes such as ORIGEN, MCNP and MACCS2. The results obtained showed that the safety of the reactor is maintained for all transients/accidents analyzed. (author)
Toward real-time regional earthquake simulation of Taiwan earthquakes
Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.
2013-12-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Book review: Earthquakes and water
Bekins, Barbara A.
2012-01-01
It is really nice to see assembled in one place a discussion of the documented and hypothesized hydrologic effects of earthquakes. The book is divided into chapters focusing on particular hydrologic phenomena including liquefaction, mud volcanism, stream discharge increases, groundwater level, temperature and chemical changes, and geyser period changes. These hydrologic effects are inherently fascinating, and the large number of relevant publications in the past decade makes this summary a useful milepost. The book also covers hydrologic precursors and earthquake triggering by pore pressure. A natural need to limit the topics covered resulted in the omission of tsunamis and the vast literature on the role of fluids and pore pressure in frictional strength of faults. Regardless of whether research on earthquake-triggered hydrologic effects ultimately provides insight into the physics of earthquakes, the text provides welcome common ground for interdisciplinary collaborations between hydrologists and seismologists. Such collaborations continue to be crucial for investigating hypotheses about the role of fluids in earthquakes and slow slip.
Transient Go: A Mobile App for Transient Astronomy Outreach
Crichton, D.; Mahabal, A.; Djorgovski, S. G.; Drake, A.; Early, J.; Ivezic, Z.; Jacoby, S.; Kanbur, S.
2016-12-01
Augmented Reality (AR) is set to revolutionize human interaction with the real world as demonstrated by the phenomenal success of `Pokemon Go'. That very technology can be used to rekindle the interest in science at the school level. We are in the process of developing a prototype app based on sky maps that will use AR to introduce different classes of astronomical transients to students as they are discovered i.e. in real-time. This will involve transient streams from surveys such as the Catalina Real-time Transient Survey (CRTS) today and the Large Synoptic Survey Telescope (LSST) in the near future. The transient streams will be combined with archival and latest image cut-outs and other auxiliary data as well as historical and statistical perspectives on each of the transient types being served. Such an app could easily be adapted to work with various NASA missions and NSF projects to enrich the student experience.
Institute of Scientific and Technical Information of China (English)
Qin Chengzhi; Zhou Chenghu; Pei Tao; Li Quanlin
2004-01-01
The migration of strong earthquakes is an important research topic because migration phenomena partly reflect the seismic mechanism and bear on the prediction of trends in seismic activity. Research on the migration of strong earthquakes has mostly focused on finding the phenomena, and some attempts at extracting regularity have been comparatively subjective. This paper suggests that if there is regularity in the migration of strong earthquakes, there should be indices of migration in the earthquake dataset, and those indices should have statistical meaning. In this study, three derivative attributes of migration, i.e., migration orientation, migration distance and migration time interval, were statistically analyzed. Results for the North China region show that the migration of strong earthquakes has statistical meaning. There is a dominant migration orientation (W by S to E by N), a dominant distance (≤ 100 km, and in the 300-700 km range), and a dominant time interval (≤ 1 a, and in the 3-4 a range). The results also show that the migration differs slightly with different magnitude ranges or earthquake activity phases.
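The three derivative attributes can be computed from consecutive event pairs. A minimal sketch with an invented five-event catalog (the coordinates and the flat-earth distance approximation are assumptions, not the paper's data or method):

```python
import numpy as np

# Hypothetical strong-earthquake catalog for a region:
# columns = (decimal year, longitude, latitude), ordered in time.
cat = np.array([
    [1966.2, 115.1, 37.5],
    [1967.3, 116.5, 37.9],
    [1969.6, 119.4, 38.9],
    [1975.1, 122.8, 39.7],
    [1976.6, 118.2, 39.6],
])

dt   = np.diff(cat[:, 0])                        # migration time interval (a)
dlon = np.diff(cat[:, 1]) * np.cos(np.radians(cat[:-1, 2]))
dlat = np.diff(cat[:, 2])
dist = 111.2 * np.hypot(dlon, dlat)              # flat-earth distance (km)
azim = np.degrees(np.arctan2(dlon, dlat)) % 360  # migration orientation (deg)
```

Statistics over many such pairs are what reveal the dominant orientation, distance, and time-interval bands the paper reports for North China.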
Global Earthquake Hazard Frequency and Distribution
National Aeronautics and Space Administration — Global Earthquake Hazard Frequency and Distribution is a 2.5 minute grid utilizing Advanced National Seismic System (ANSS) Earthquake Catalog data of actual...
Unbonded Prestressed Columns for Earthquake Resistance
2012-05-01
Modern structures are able to survive significant shaking caused by earthquakes. By implementing unbonded post-tensioned tendons in bridge columns, the damage caused by an earthquake can be significantly lower than that of a standard reinforced concr...
Extreme value distribution of earthquake magnitude
Zi, Jun Gan; Tung, C. C.
1983-07-01
Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for the energy release of large earthquakes, similar to that of Lomnitz-Adler and Lomnitz, is introduced, from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of the probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
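Least-squares estimation of a maximum-magnitude distribution can be sketched on synthetic data. This is an assumed toy version, not the paper's model or iterative scheme: Gutenberg-Richter-like magnitudes are grouped into "years", and the Gumbel form of the annual maximum is fitted by linearized least squares:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in for a 77-year shallow-earthquake dataset:
# exponential (Gutenberg-Richter-like) magnitudes with b-value 1.
beta = np.log(10.0)
mags = rng.exponential(1.0 / beta, size=(77, 800)) + 5.0
m_max = np.sort(mags.max(axis=1))       # annual maximum magnitudes

# Least-squares fit of the Gumbel form F(m) = exp(-exp(-(m - u)/s)),
# linearized via plotting positions z = -ln(-ln F).
F = (np.arange(1, 78) - 0.5) / 77.0
z = -np.log(-np.log(F))
slope, intercept = np.polyfit(m_max, z, 1)
s_hat, u_hat = 1.0 / slope, -intercept / slope   # scale and location
```

For exponential magnitudes the annual maximum is asymptotically Gumbel, so the fitted scale should be close to 1/beta (about 0.43 magnitude units here).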
Directory of Open Access Journals (Sweden)
Renata Mota Mamede de Carvallo
2008-09-01
Full Text Available Objective: The aim of the present investigation was to assess Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response tests applied together in regular nurseries and Newborn Intensive Care Units (NICU), and to describe and compare the results obtained in both groups. Methods: We tested 150 newborns from regular nurseries and 70 from the NICU. Results: Newborn hearing screening using Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response tests could be applied to all babies. The “pass” result for the nursery group was 94.7% using Transient Evoked Otoacoustic Emissions and 96% using Automatic Auditory Brainstem Response. The newborn intensive care unit group obtained 87.1% on Transient Evoked Otoacoustic Emissions and 80% on Automatic Auditory Brainstem Response, and there was no statistical difference between the procedures when the groups were evaluated individually. However, comparing the groups, Transient Evoked Otoacoustic Emissions were present in 94.7% of the nursery babies and in 87.1% of the newborn intensive care unit group; for the Automatic Auditory Brainstem Response, we found 96% and 87%, respectively. Conclusions: Transient Evoked Otoacoustic Emissions and Automatic Auditory Brainstem Response had similar “pass” and “fail” results when the procedures were applied to neonates from the regular nursery, and the combined tests were more precise in detecting hearing impairment in the newborn intensive care unit babies.
An iterative method for Tikhonov regularization with a general linear regularization operator
Hochstenbach, M.E.; Reichel, L.
2010-01-01
Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
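The role of a general regularization operator can be sketched on a small synthetic problem. This is a minimal illustration, not the paper's Golub-Kahan-based method; the blurring kernel, noise level, and regularization parameter are all assumed, and the operator L is a first-difference (roughness) penalty rather than the identity of standard-form Tikhonov:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical discrete ill-posed problem: Gaussian blurring operator A,
# smooth true solution, slightly noisy data b.
n = 64
x_grid = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (x_grid[:, None] - x_grid[None, :]) ** 2) / n
x_true = np.sin(2.0 * np.pi * x_grid)
b = A @ x_true + 1e-5 * rng.standard_normal(n)

# General linear regularization operator L: first-difference matrix.
L = np.diff(np.eye(n), axis=0)

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ b)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # noise-dominated
x_reg = tikhonov(A, b, L, lam=1e-4)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The unregularized solve amplifies the noise through the tiny singular values of A, while the roughness penalty suppresses exactly those components; choosing `lam` well is the parameter-selection problem the abstract refers to.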
PRECURSORS OF EARTHQUAKES: VLF SIGNALS-IONOSPHERE RELATION
Directory of Open Access Journals (Sweden)
Mustafa ULAS
2013-01-01
Full Text Available A lot of people die because of earthquakes every year. It is therefore crucial to predict the time of an earthquake a reasonable time before it happens. This paper presents recent information published in the literature about precursors of earthquakes. The relationships between earthquakes and the ionosphere are examined in order to guide new research toward novel prediction methods.
EARTHQUAKE RESEARCH PROBLEMS OF NUCLEAR POWER GENERATORS
Energy Technology Data Exchange (ETDEWEB)
Housner, G. W.; Hudson, D. E.
1963-10-15
Earthquake problems associated with the construction of nuclear power generators require a more extensive and a more precise knowledge of earthquake characteristics and the dynamic behavior of structures than was considered necessary for ordinary buildings. Economic considerations indicate the desirability of additional research on the problems of earthquakes and nuclear reactors. The nature of these earthquake-resistant design problems is discussed and programs of research are recommended. (auth)
Fault geometry and earthquake mechanics
Directory of Open Access Journals (Sweden)
D. J. Andrews
1994-06-01
Full Text Available Earthquake mechanics may be determined by the geometry of a fault system. Slip on a fractal branching fault surface can explain: (1) regeneration of stress irregularities in an earthquake; (2) the concentration of stress drop in an earthquake into asperities; (3) starting and stopping of earthquake slip at fault junctions; and (4) self-similar scaling of earthquakes. Slip at fault junctions provides a natural realization of barrier and asperity models without appealing to variations of fault strength. Fault systems are observed to have a branching fractal structure, and slip may occur at many fault junctions in an earthquake. Consider the mechanics of slip at one fault junction. In order to avoid a stress singularity of order 1/r, an intersection of faults must be a triple junction, and the Burgers vectors on the three fault segments at the junction must sum to zero. In other words, to lowest order the deformation consists of rigid block displacement, which ensures that the local stress due to the dislocations is zero. The elastic dislocation solution, however, ignores the fact that the configuration of the blocks changes at the scale of the displacement. A volume change occurs at the junction; either a void opens or intense local deformation is required to avoid material overlap. The volume change is proportional to the product of the slip increment and the total slip since the formation of the junction. Energy absorbed at the junction, equal to the confining pressure times the volume change, is not large enough to prevent slip at a new junction. The ratio of energy absorbed at a new junction to elastic energy released in an earthquake is no larger than P/µ, where P is the confining pressure and µ is the shear modulus. At a depth of 10 km this dimensionless ratio has the value P/µ = 0.01. As slip accumulates at a fault junction in a number of earthquakes, the fault segments are displaced such that they no longer meet at a single point. For this reason the
Historical earthquake investigations in Greece
Directory of Open Access Journals (Sweden)
K. Makropoulos
2004-06-01
Full Text Available The active tectonics of the area of Greece and its seismic activity have always been present in the country's history. Many researchers, tempted to work on Greek historical earthquakes, have realized that this is a task not easily fulfilled. The existing catalogues of strong historical earthquakes are useful tools to perform general seismic hazard assessment (SHA) studies. However, a variety of supporting datasets, non-uniformly distributed in space and time, need to be further investigated. In the present paper, a review of historical earthquake studies in Greece is attempted. The seismic history of the country is divided into four main periods. In each one of them, characteristic examples, studies and approaches are presented.
Pressure transients in pipeline systems
DEFF Research Database (Denmark)
Voigt, Kristian
1998-01-01
This text is to give an overview of the necessary background to do investigation of pressure transients via simulations. It will describe briefly the Method of Characteristics which is the defacto standard for simulating pressure transients. Much of the text has been adopted from the book Pressur...
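The Method of Characteristics mentioned here can be sketched in a few lines. The following is a minimal, frictionless water-hammer example (upstream reservoir, instantly closing downstream valve); every pipe parameter is an illustrative assumption, and the peak head at the valve should match the classical Joukowsky rise a·V0/g:

```python
# Minimal Method of Characteristics (MOC) sketch for a water-hammer
# transient: frictionless pipe, upstream reservoir, instant valve closure
# downstream. All numbers are illustrative assumptions.
g = 9.81          # m/s^2
a = 1000.0        # pressure wave speed, m/s
A = 0.01          # pipe cross-section, m^2
L = 1000.0        # pipe length, m
N = 20            # number of reaches
dx = L / N
dt = dx / a       # MOC ties dx and dt through the wave speed
B = a / (g * A)   # characteristic impedance

H_res, Q0 = 50.0, 0.01           # reservoir head (m), initial flow (m^3/s)
H = [H_res] * (N + 1)            # steady initial state
Q = [Q0] * (N + 1)

max_head = H[-1]
for _ in range(4 * N):           # simulate two wave round trips
    Hn, Qn = H[:], Q[:]
    for i in range(1, N):        # interior nodes: C+ and C- intersect
        Cp = H[i - 1] + B * Q[i - 1]
        Cm = H[i + 1] - B * Q[i + 1]
        Hn[i] = 0.5 * (Cp + Cm)
        Qn[i] = (Cp - Cm) / (2 * B)
    Hn[0] = H_res                # upstream reservoir: fixed head
    Qn[0] = (H_res - (H[1] - B * Q[1])) / B
    Qn[N] = 0.0                  # downstream valve: closed (Q = 0)
    Hn[N] = H[N - 1] + B * Q[N - 1]
    H, Q = Hn, Qn
    max_head = max(max_head, H[N])

print(f"peak head at valve: {max_head:.1f} m "
      f"(Joukowsky estimate: {H_res + a * (Q0 / A) / g:.1f} m)")
```

In the frictionless case the simulated peak equals the Joukowsky value exactly; adding the friction term to the characteristic equations is the usual next step.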
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al.; licensee BioMed Central Ltd.
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-01-01
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al.; licensee BioMed Central Ltd.
Hierarchical regular small-world networks
International Nuclear Information System (INIS)
Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan
2008-01-01
Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. This suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
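The idea behind MultiG-Rank can be sketched compactly: ranking scores are regularized by a convex combination of several candidate graph Laplacians, alternating between a closed-form solve for the scores and a re-weighting of the graphs. This is a rough illustration, not the paper's exact objective; the softmin weight update and all toy data below are assumptions:

```python
# Sketch (not the paper's exact algorithm) of multiple-graph regularized
# ranking: scores f minimize ||f - y||^2 + lam * sum_m w_m * f' L_m f,
# alternating between solving for f and re-weighting the graphs.
import numpy as np

def knn_laplacian(X, k):
    """Unnormalized graph Laplacian of a symmetrized k-NN graph."""
    n = X.shape[0]
    D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D2[i])[1:k + 1]:   # skip self (distance 0)
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, 5))                  # toy stand-in for domain features
laplacians = [knn_laplacian(X, k) for k in (3, 5, 10)]   # candidate graphs
y = np.zeros(n); y[0] = 1.0                  # query relevance vector

lam, gamma = 1.0, 1.0
w = np.full(len(laplacians), 1.0 / len(laplacians))
for _ in range(20):                          # alternating minimization
    L = sum(wi * Li for wi, Li in zip(w, laplacians))
    f = np.linalg.solve(np.eye(n) + lam * L, y)   # ranking scores, closed form
    s = np.array([f @ Li @ f for Li in laplacians])
    w = np.exp(-s / gamma); w /= w.sum()     # softmin graph re-weighting (assumed)

ranking = np.argsort(-f)
print("top of ranking:", ranking[:5])
```

The query item itself lands at the top of the ranking, and graphs whose manifold the scores vary smoothly over receive larger weights.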
Coupling regularizes individual units in noisy populations
International Nuclear Information System (INIS)
Ly Cheng; Ermentrout, G. Bard
2010-01-01
The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
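The baseline population effect stated at the start of the abstract (diffusive coupling lowers each unit's variability) can be reproduced for two identical O-U units with a short Euler–Maruyama simulation; the parameters are illustrative assumptions, not the authors' values:

```python
# Euler-Maruyama sketch: diffusive coupling of two identical noisy
# Ornstein-Uhlenbeck units lowers each unit's stationary variance,
# compared with an uncoupled reference unit driven by the same kind of noise.
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.01, 100_000
theta, sigma, c = 1.0, 1.0, 2.0   # O-U rate, noise strength, coupling (assumed)

x = np.zeros(2)          # two diffusively coupled units
u = 0.0                  # uncoupled reference unit
xs, us = [], []
for _ in range(steps):
    noise = rng.normal(size=3) * np.sqrt(dt) * sigma   # independent noises
    drift = -theta * x + c * (x[::-1] - x)             # O-U pull + coupling
    x = x + drift * dt + noise[:2]
    u = u - theta * u * dt + noise[2]
    xs.append(x[0]); us.append(u)

var_coupled, var_uncoupled = np.var(xs), np.var(us)
# Stationary theory: uncoupled sigma^2/(2*theta) = 0.5,
# coupled (sigma^2/4)*(1/theta + 1/(theta + 2c)) = 0.3
print(f"uncoupled var ~ {var_uncoupled:.3f}, coupled var ~ {var_coupled:.3f}")
```

The simulated variances sit near the stationary theory values, with the coupled unit clearly more regular; the abstract's stronger claim (regularization even by a noisier partner, measured on spike-time variance) requires the asymptotics derived in the paper.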
Multiple graph regularized protein domain ranking
Directory of Open Access Journals (Sweden)
Wang Jim
2012-11-01
Full Text Available Abstract Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Fault failure with moderate earthquakes
Johnston, M. J. S.; Linde, A. T.; Gladwin, M. T.; Borcherdt, R. D.
1987-12-01
High resolution strain and tilt recordings were made in the near-field of, and prior to, the May 1983 Coalinga earthquake (ML = 6.7, Δ = 51 km), the August 4, 1985, Kettleman Hills earthquake (ML = 5.5, Δ = 34 km), the April 1984 Morgan Hill earthquake (ML = 6.1, Δ = 55 km), the November 1984 Round Valley earthquake (ML = 5.8, Δ = 54 km), the January 14, 1978, Izu, Japan earthquake (ML = 7.0, Δ = 28 km), and several other smaller magnitude earthquakes. These recordings were made with near-surface instruments (resolution 10⁻⁸), with borehole dilatometers (resolution 10⁻¹⁰) and a 3-component borehole strainmeter (resolution 10⁻⁹). While observed coseismic offsets are generally in good agreement with expectations from elastic dislocation theory, and while post-seismic deformation continued, in some cases, with a moment comparable to that of the main shock, preseismic strain or tilt perturbations from hours to seconds (or less) before the main shock are not apparent above the present resolution. Precursory slip for these events, if any occurred, must have had a moment less than a few percent of that of the main event. To the extent that these records reflect general fault behavior, the strong constraint on the size and amount of slip triggering major rupture makes prediction of the onset times and final magnitudes of the rupture zones a difficult task unless the instruments are fortuitously installed near the rupture initiation point. These data are best explained by an inhomogeneous failure model for which various areas of the fault plane have either different stress-slip constitutive laws or spatially varying constitutive parameters. Other work on seismic waveform analysis and synthetic waveforms indicates that the rupturing process is inhomogeneous and controlled by points of higher strength. These models indicate that rupture initiation occurs at smaller regions of higher strength which, when broken, allow runaway catastrophic failure.
Modeling, Forecasting and Mitigating Extreme Earthquakes
Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.
2012-12-01
Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
13 CFR 120.174 - Earthquake hazards.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Earthquake hazards. 120.174... Applying to All Business Loans Requirements Imposed Under Other Laws and Orders § 120.174 Earthquake..., the construction must conform with the “National Earthquake Hazards Reduction Program (“NEHRP...
Computational methods in earthquake engineering
Plevris, Vagelis; Lagaros, Nikos
2017-01-01
This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.
Earthquake Education in Prime Time
de Groot, R.; Abbott, P.; Benthien, M.
2004-12-01
Since 2001, the Southern California Earthquake Center (SCEC) has collaborated on several video production projects that feature important topics related to earthquake science, engineering, and preparedness. These projects have also fostered many fruitful and sustained partnerships with a variety of organizations that have a stake in hazard education and preparedness. The Seismic Sleuths educational video first appeared in the spring season 2001 on Discovery Channel's Assignment Discovery. Seismic Sleuths is based on a highly successful curriculum package developed jointly by the American Geophysical Union and The Department of Homeland Security Federal Emergency Management Agency. The California Earthquake Authority (CEA) and the Institute for Business and Home Safety supported the video project. Summer Productions, a company with a reputation for quality science programming, produced the Seismic Sleuths program in close partnership with scientists, engineers, and preparedness experts. The program has aired on the National Geographic Channel as recently as Fall 2004. Currently, SCEC is collaborating with Pat Abbott, a geology professor at San Diego State University (SDSU) on the video project Written In Stone: Earthquake Country - Los Angeles. Partners on this project include the California Seismic Safety Commission, SDSU, SCEC, CEA, and the Insurance Information Network of California. This video incorporates live-action demonstrations, vivid animations, and a compelling host (Abbott) to tell the story about earthquakes in the Los Angeles region. The Written in Stone team has also developed a comprehensive educator package that includes the video, maps, lesson plans, and other supporting materials. We will present the process that facilitates the creation of visually effective, factually accurate, and entertaining video programs. We acknowledge the need to have a broad understanding of the literature related to communication, media studies, science education, and
Radon as an earthquake precursor
International Nuclear Information System (INIS)
Planinic, J.; Radolic, V.; Vukovic, B.
2004-01-01
Radon concentrations in soil gas were continuously measured by the LR-115 nuclear track detectors during a four-year period. Seismic activities, as well as barometric pressure, rainfall and air temperature were also observed. The influence of meteorological parameters on temporal radon variations was investigated, and a respective equation of the multiple regression was derived. The earthquakes with magnitude ≥3 at epicentral distances ≤200 km were recognized by means of radon anomaly. Empirical equations between earthquake magnitude, epicentral distance and precursor time were examined, and respective constants were determined
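The regression-based approach described here can be sketched on synthetic data: meteorological influence is removed by multiple regression, and residuals beyond a threshold are flagged as candidate radon anomalies. The coefficients, data, and 2-sigma threshold below are all illustrative assumptions, not the authors' values:

```python
# Sketch of residual-based radon anomaly detection: fit radon against
# meteorological covariates by multiple regression, then flag days whose
# residual exceeds an (assumed) 2-sigma threshold. All data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
days = 365
pressure = 1013 + rng.normal(0, 5, days)                  # hPa
rain = rng.exponential(2.0, days)                         # mm
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365)  # deg C

# Synthetic radon: linear meteorological response + noise + one injected spike
radon = 40 - 0.8 * (pressure - 1013) + 1.5 * rain + 0.5 * temp \
        + rng.normal(0, 3, days)
radon[200] += 30                    # "precursory" anomaly on day 200

X = np.column_stack([np.ones(days), pressure, rain, temp])
coef, *_ = np.linalg.lstsq(X, radon, rcond=None)          # multiple regression
residual = radon - X @ coef
anomalies = np.flatnonzero(np.abs(residual) > 2 * residual.std())
print("anomalous days:", anomalies)
```

The injected spike on day 200 is recovered in the residuals; in practice the fitted coefficients play the role of the abstract's regression equation for meteorological influence.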
Radon as an earthquake precursor
Energy Technology Data Exchange (ETDEWEB)
Planinic, J. E-mail: planinic@pedos.hr; Radolic, V.; Vukovic, B
2004-09-11
Radon concentrations in soil gas were continuously measured by the LR-115 nuclear track detectors during a four-year period. Seismic activities, as well as barometric pressure, rainfall and air temperature were also observed. The influence of meteorological parameters on temporal radon variations was investigated, and a respective equation of the multiple regression was derived. The earthquakes with magnitude ≥3 at epicentral distances ≤200 km were recognized by means of radon anomaly. Empirical equations between earthquake magnitude, epicentral distance and precursor time were examined, and respective constants were determined.
Earthquake location in island arcs
Engdahl, E.R.; Dewey, J.W.; Fujita, K.
1982-01-01
A comprehensive data set of selected teleseismic P-wave arrivals and local-network P- and S-wave arrivals from large earthquakes occurring at all depths within a small section of the central Aleutians is used to examine the general problem of earthquake location in island arcs. Reference hypocenters for this special data set are determined for shallow earthquakes from local-network data and for deep earthquakes from combined local and teleseismic data by joint inversion for structure and location. The high-velocity lithospheric slab beneath the central Aleutians may displace hypocenters that are located using spherically symmetric Earth models; the amount of displacement depends on the position of the earthquakes with respect to the slab and on whether local or teleseismic data are used to locate the earthquakes. Hypocenters for trench and intermediate-depth events appear to be minimally biased by the effects of slab structure on rays to teleseismic stations. However, locations of intermediate-depth events based on only local data are systematically displaced southwards, the magnitude of the displacement being proportional to depth. Shallow-focus events along the main thrust zone, although well located using only local-network data, are severely shifted northwards and deeper, with displacements as large as 50 km, by slab effects on teleseismic travel times. Hypocenters determined by a method that utilizes seismic ray tracing through a three-dimensional velocity model of the subduction zone, derived by thermal modeling, are compared to results obtained by the method of joint hypocenter determination (JHD) that formally assumes a laterally homogeneous velocity model over the source region and treats all raypath anomalies as constant station corrections to the travel-time curve. The ray-tracing method has the theoretical advantage that it accounts for variations in travel-time anomalies within a group of events distributed over a sizable region of a dipping, high
Diagrammatic methods in phase-space regularization
International Nuclear Information System (INIS)
Bern, Z.; Halpern, M.B.; California Univ., Berkeley
1987-11-01
Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring R is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an n-generated small right ideal of R to R_R can be extended to one from R_R to R_R; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
Dancing Earthquake Science Assists Recovery from the Christchurch Earthquakes
Egan, Candice J.; Quigley, Mark C.
2015-01-01
The 2010-2012 Christchurch (Canterbury) earthquakes in New Zealand caused loss of life and psychological distress in residents throughout the region. In 2011, student dancers of the Hagley Dance Company and dance professionals choreographed the performance "Move: A Seismic Journey" for the Christchurch Body Festival that explored…
Transient regional osteoporosis.
Cano-Marquina, Antonio; Tarín, Juan J; García-Pérez, Miguel-Ángel; Cano, Antonio
2014-04-01
Transient regional osteoporosis (TRO) is a disease that predisposes to fragility fracture in weight bearing joints of mid-life women and men. Pregnant women may also suffer the process, usually at the hip. The prevalence of TRO is lower than the systemic form, associated with postmenopause and advanced age, but may be falsely diminished by under-diagnosis. The disease may be uni- or bilateral, and may migrate to distinct joints. One main feature of TRO is spontaneous recovery. Pain and progressive limitation in the functionality of the affected joint(s) are key symptoms. In the case of the form associated with pregnancy, difficulties in diagnosis derive from the relatively young age at presentation and from the clinical overlapping with the frequent aches during gestation. Densitometric osteoporosis in the affected region is not always present, but bone marrow edema, with or without joint effusion, is detected by magnetic resonance. There are no treatment guidelines, but adding antiresorptives to symptomatic treatment seems to be beneficial. Surgery or other orthopedic interventions can be required for specific indications, like hip fracture, intra-medullary decompression, or others. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Chaudhry, M Hanif
2014-01-01
This book covers hydraulic transients in a comprehensive and systematic manner from introduction to advanced level and presents various methods of analysis for computer solution. The field of application of the book is very broad and diverse and covers areas such as hydroelectric projects, pumped storage schemes, water-supply systems, cooling-water systems, oil pipelines and industrial piping systems. Strong emphasis is given to practical applications, including several case studies, problems of applied nature, and design criteria. This will help design engineers and introduce students to real-life projects. This book also: · Presents modern methods of analysis suitable for computer analysis, such as the method of characteristics, explicit and implicit finite-difference methods and matrix methods · Includes case studies of actual projects · Provides extensive and complete treatment of governed hydraulic turbines · Presents design charts, desi...
International Nuclear Information System (INIS)
Men, Ke-Pei; Zhao, Kai
2014-01-01
M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang of China and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Under the guidance of the information forecasting theory of Wen-Bo Weng, based on previous research results and combining ordered network structure analysis with complex network technology, we focus on the prediction summary of M ≥ 7 earthquakes by using the ordered network structure, and add new information to further optimize the network, hence constructing the 2D- and 3D-ordered network structures of M ≥ 7 earthquakes. In this paper, the network structure fully reveals the regularity of seismic activity of M ≥ 7 earthquakes in the study region during the past 210 years. Based on this, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction opinion is presented that the future two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid-and-long term prediction of M ≥ 7 earthquakes.
Directory of Open Access Journals (Sweden)
Huan-Feng Duan
2017-10-01
Full Text Available This paper investigates the impacts of non-uniformities of pipe diameter (i.e., an inhomogeneous cross-sectional area along pipelines) on transient wave behavior and propagation in water supply pipelines. The multi-scale wave perturbation method is first used to derive analytical solutions for the amplitude evolution of transient pressure wave propagation in pipelines, considering regular and random variations of cross-sectional area, respectively. The analytical analysis is based on the one-dimensional (1D) transient wave equation for pipe flow. Both derived results show that transient waves can be attenuated and scattered significantly along the longitudinal direction of the pipeline due to the regular and random non-uniformities of pipe diameter. The obtained analytical results are then validated by extensive 1D numerical simulations under different incident wave and non-uniform pipe conditions. The comparative results indicate that the derived analytical solutions are applicable and useful to describe the wave scattering effect in complex pipeline systems. Finally, the practical implications and influence of wave scattering effects on transient flow analysis and transient-based leak detection in urban water supply systems are discussed in the paper.
Earthquake Warning Performance in Vallejo for the South Napa Earthquake
Wurman, G.; Price, M.
2014-12-01
In 2002 and 2003, Seismic Warning Systems, Inc. installed first-generation QuakeGuard™ earthquake warning devices at all eight fire stations in Vallejo, CA. These devices are designed to detect the P-wave of an earthquake and initiate predetermined protective actions if the impending shaking is estimated at approximately Modified Mercalli Intensity V or greater. At the Vallejo fire stations the devices were set up to sound an audio alert over the public address system and to command the equipment bay doors to open. In August 2014, after more than 11 years of operating in the fire stations with no false alarms, the five units that were still in use triggered correctly on the MW 6.0 South Napa earthquake, less than 16 km away. The audio alert sounded in all five stations, providing fire fighters with 1.5 to 2.5 seconds of warning before the arrival of the S-wave, and the equipment bay doors opened in three of the stations. In one station the doors were disconnected from the QuakeGuard device, and another station lost power before the doors opened completely. These problems highlight just a small portion of the complexity associated with realizing actionable earthquake warnings. The issues experienced in this earthquake have already been addressed in subsequent QuakeGuard product generations, with downstream connection monitoring and backup power for critical systems. The fact that the fire fighters in Vallejo were afforded even two seconds of warning at these epicentral distances results from the design of the QuakeGuard devices, which focuses on rapid false positive rejection and ground motion estimates. We discuss the performance of the ground motion estimation algorithms, with an emphasis on the accuracy and timeliness of the estimates at close epicentral distances.
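The quoted 1.5 to 2.5 seconds of warning at under 16 km is consistent with a simple S-minus-P travel-time estimate that ignores detection and processing latency. The depth and wave speeds below are assumed typical crustal values, not specifications of the deployed system:

```python
# Back-of-envelope S-minus-P warning-time estimate for a P-wave-triggered
# alert. Depth and velocities are assumed typical values, not system specs.
import math

def warning_time(epi_km, depth_km=10.0, vp=6.0, vs=3.5):
    """Approximate S-P time (s) for a straight-ray hypocentral path."""
    r = math.hypot(epi_km, depth_km)   # hypocentral distance, km
    return r / vs - r / vp

t = warning_time(16.0)
print(f"~{t:.1f} s of warning before S-wave arrival")
```

With these assumptions the estimate lands near 2.2 s, inside the 1.5-2.5 s range reported for the Vallejo stations; real warning time is shorter by the trigger and actuation delays.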
Generalized regular genus for manifolds with boundary
Directory of Open Access Journals (Sweden)
Paola Cristofori
2003-05-01
Full Text Available We introduce a generalization of the regular genus, a combinatorial invariant of PL manifolds ([10]), which is proved to be strictly related, in dimension three, to generalized Heegaard splittings defined in [12].
Geometric regularizations and dual conifold transitions
International Nuclear Information System (INIS)
Landsteiner, Karl; Lazaroiu, Calin I.
2003-01-01
We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)
Fast and compact regular expression matching
DEFF Research Database (Denmark)
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem, using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
Regular-fat dairy and human health
DEFF Research Database (Denmark)
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. Regular-fat dairy products, including milk, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted.
Deterministic automata for extended regular expressions
Directory of Open Access Journals (Sweden)
Syzdykov Mirzakhmet
2017-12-01
Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of "overriding" the source NFA (an NFA not defined by subset-construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and the complement) is shown.
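The abstract's "overriding" method is its own construction; the textbook route to the AND-operator is the product construction, which a minimal sketch (in Python, with assumed toy DFAs) can illustrate:

```python
# Product construction for the intersection of two DFAs. This is the
# standard textbook construction, shown only to illustrate what the
# extended AND-operator computes; it is not the paper's method.
from itertools import product

def intersect_dfa(d1, d2):
    """Each DFA is (states, alphabet, delta, start, accepting),
    with delta a total dict mapping (state, symbol) -> state."""
    states1, sigma, delta1, s1, f1 = d1
    states2, _, delta2, s2, f2 = d2
    states = set(product(states1, states2))
    delta = {((p, q), a): (delta1[(p, a)], delta2[(q, a)])
             for (p, q) in states for a in sigma}
    accepting = {(p, q) for (p, q) in states if p in f1 and q in f2}
    return states, sigma, delta, (s1, s2), accepting

def accepts(dfa, word):
    _, _, delta, state, accepting = dfa
    for a in word:
        state = delta[(state, a)]
    return state in accepting

# Toy languages: L1 = even number of 'a's; L2 = strings ending in 'b'
even_a = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, 0, {0})
ends_b = ({0, 1}, {'a', 'b'},
          {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}, 0, {1})
both = intersect_dfa(even_a, ends_b)
```

The product automaton accepts exactly the words in both languages, e.g. "aab" (two 'a's, ends in 'b') but not "ab".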
Regularities of intermediate adsorption complex relaxation
International Nuclear Information System (INIS)
Manukova, L.A.
1982-01-01
The experimental data characterizing the regularities of intermediate adsorption complex relaxation in the polycrystalline Mo-N2 system at 77 K are given. The molecular beam method has been used in the investigation. Analytical expressions are obtained for the change, during relaxation, of the full and specific rates of transition from the intermediate state into the "non-reversible" state, of desorption into the gas phase, and of accumulation of particles in the intermediate state.
Online Manifold Regularization by Dual Ascending Procedure
Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui
2013-01-01
We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is the key to transferring manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches.
Earthquake predictions using seismic velocity ratios
Sherburne, R. W.
1979-01-01
Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions to the reduction of human and economic losses and the value of long-range earthquake prediction to planning is obvious. Not as clear are the long-range economic and social impacts of earthquake prediction to a specific area. The general consensus among scientists and government officials, however, is that the quest of earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.
Measuring the size of an earthquake
Spence, W.; Sipkin, S.A.; Choy, G.L.
1989-01-01
Earthquakes range broadly in size. A rock-burst in an Idaho silver mine may involve the fracture of 1 meter of rock; the 1965 Rat Island earthquake in the Aleutian arc involved a 650-kilometer length of the Earth's crust. Earthquakes can be even smaller and even larger. If an earthquake is felt or causes perceptible surface damage, then its intensity of shaking can be subjectively estimated. But many large earthquakes occur in oceanic areas or at great focal depths and are either simply not felt or their felt pattern does not really indicate their true size.
Earthquakes-Rattling the Earth's Plumbing System
Sneed, Michelle; Galloway, Devin L.; Cunningham, William L.
2003-01-01
Hydrogeologic responses to earthquakes have been known for decades, and have occurred both close to, and thousands of miles from earthquake epicenters. Water wells have become turbid, dry or begun flowing, discharge of springs and ground water to streams has increased and new springs have formed, and well and surface-water quality have become degraded as a result of earthquakes. Earthquakes affect our Earth’s intricate plumbing system—whether you live near the notoriously active San Andreas Fault in California, or far from active faults in Florida, an earthquake near or far can affect you and the water resources you depend on.
Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years
Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.
2010-01-01
We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support for the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
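The headline statistics can be checked from the quoted mean and standard deviation. A small sketch (the Poisson baseline below is our addition for contrast; the paper's 29% comes from its time-dependent renewal model, which is not reproduced here):

```python
import math

mean_ri, sigma = 161.0, 65.0   # yr: mean recurrence interval and 1-sigma scatter

# Coefficient of variation, as quoted in the abstract (~0.40)
cv = sigma / mean_ri

# Time-independent (Poisson) 30-yr probability, shown only as a baseline;
# conditioning on the time elapsed since 1868 under a renewal model
# raises this toward the paper's 29%.
p30_poisson = 1.0 - math.exp(-30.0 / mean_ri)
```

With these numbers cv ≈ 0.40, matching the abstract, while the unconditional Poisson 30-yr probability is only about 17%, illustrating why the choice of renewal versus random model matters.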
Aspect of the 2011 off the Pacific coast Tohoku Earthquake, Japan
International Nuclear Information System (INIS)
Kato, Aitaro
2012-01-01
The 2011 off the Pacific coast of Tohoku Earthquake (Tohoku-Oki), Japan, was the first magnitude (M) 9 subduction megathrust event to be recorded by a dense network of seismic, geodetic, and tsunami observations. I here review the Tohoku-Oki earthquake in terms of 1) the asperity model, 2) earthquake source observations, 3) precedent processes, and 4) postseismic slip (afterslip). Based on finite source models of the Tohoku-Oki mainshock, the coseismic fault slip exceeded 30 m at the shallow part of the subduction zone off-shore of Miyagi. The rupture reached the trench axis, producing a large uplift therein, which was likely an important factor generating the devastating tsunami waves. The mainshock was preceded by slow-slip transients propagating toward the initial rupture point, which may have caused substantial stress loading, prompting the unstable dynamic rupture of the mainshock. Furthermore, a sequence of M 7-class interplate earthquakes and subsequent large afterslip events, which occurred before the mainshock rupture, might be interpreted as a preparation stage of earthquake generation. Most of the slip released by the postseismic deformation following the Tohoku-Oki mainshock is located in the region peripheral to the large coseismic slip area. (author)
Progress in Understanding the Pre-Earthquake Associated Events by Analyzing IR Satellite Data
Ouzounov, Dimitar; Taylor, Patrick; Bryant, Nevin
2004-01-01
We present the latest results in understanding the potential relationship between tectonic stress, electro-chemical and thermodynamic processes in the Earth's crust and atmosphere, with an increase in IR flux as a potential signature of electromagnetic (EM) phenomena that are related to earthquake activity, either pre-, co- or post-seismic. Thermal infra-red (TIR) surveys performed by the polar-orbiting (NOAA/AVHRR, MODIS) and geosynchronous weather satellites (GOES, METEOSAT) gave an indication of the appearance (from days to weeks before the event) of "anomalous" space-time TIR transients that are associated with the location (epicenter and local tectonic structures) and time of a number of major earthquakes with M>5 and focal depths less than 50 km. We analyzed a broad category of associated pre-earthquake events, which provided evidence for changes in surface temperature, surface latent heat flux, chlorophyll concentrations, soil moisture, brightness temperature, surface emissivity, and atmospheric water vapour prior to earthquakes that occurred in Algeria, India, Iran, Italy, Mexico and Japan. The cause of such anomalies has mainly been related to the change of near-surface thermal properties due to complex lithosphere-hydrosphere-atmosphere interactions. As final results we present examples from the most recent (2000-2004) worldwide strong earthquakes, the techniques used to capture the tracks of EM-emission mid-IR anomalies, and a methodology for practical future use of such phenomena in early warning systems.
Effects of acoustic waves on stick-slip in granular media and implications for earthquakes
Johnson, P.A.; Savage, H.; Knuth, M.; Gomberg, J.; Marone, Chris
2008-01-01
It remains unknown how the small strains induced by seismic waves can trigger earthquakes at large distances, in some cases thousands of kilometres from the triggering earthquake, with failure often occurring long after the waves have passed. Earthquake nucleation is usually observed to take place at depths of 10-20 km, and so static overburden should be large enough to inhibit triggering by seismic-wave stress perturbations. To understand the physics of dynamic triggering better, as well as the influence of dynamic stressing on earthquake recurrence, we have conducted laboratory studies of stick-slip in granular media with and without applied acoustic vibration. Glass beads were used to simulate granular fault zone material, sheared under constant normal stress, and subject to transient or continuous perturbation by acoustic waves. Here we show that small-magnitude failure events, corresponding to triggered aftershocks, occur when applied sound-wave amplitudes exceed several microstrain. These events are frequently delayed or occur as part of a cascade of small events. Vibrations also cause large slip events to be disrupted in time relative to those without wave perturbation. The effects are observed for many large-event cycles after vibrations cease, indicating a strain memory in the granular material. Dynamic stressing of tectonic faults may play a similar role in determining the complexity of earthquake recurrence. © 2007 Nature Publishing Group.
Explosive and radio-selected Transients: Transient Astronomy with ...
Indian Academy of Sciences (India)
Sensitive measurements will lead to very accurate mass-loss estimation in these supernovae. ... Such transients are powerful probes of intervening media owing to dispersion ...
Summary of earthquake experience database
International Nuclear Information System (INIS)
1999-01-01
Strong-motion earthquakes frequently occur throughout the Pacific Basin, where power plants or industrial facilities are included in the affected areas. By studying the performance of these earthquake-affected (or database) facilities, a large inventory of various types of equipment installations can be compiled that have experienced substantial seismic motion. The primary purposes of the seismic experience database are summarized as follows: to determine the most common sources of seismic damage, or adverse effects, on equipment installations typical of industrial facilities; to determine the thresholds of seismic motion corresponding to various types of seismic damage; to determine the general performance of equipment during earthquakes, regardless of the levels of seismic motion; to determine minimum standards in equipment construction and installation, based on past experience, to assure the ability to withstand anticipated seismic loads. To summarize, the primary assumption in compiling an experience database is that the actual seismic hazard to industrial installations is best demonstrated by the performance of similar installations in past earthquakes
Earthquake design for controlled structures
Directory of Open Access Journals (Sweden)
Nikos G. Pnevmatikos
2017-04-01
Full Text Available An alternative design philosophy for structures equipped with control devices, capable of resisting an expected earthquake while remaining in the elastic range, is described. The idea is that a portion of the earthquake loading is undertaken by the control system and the remainder by the structure, which is designed to resist elastically. The earthquake forces assuming elastic behavior (elastic forces) and elastoplastic behavior (design forces) are first calculated according to the codes. The required control forces are calculated as the difference between the elastic and design forces. The maximum capacity of the control devices is then compared to the required control force. If the capacity of the control devices is larger than the required control force, then the control devices are accepted and installed in the structure, and the structure is designed according to the design forces. If the capacity is smaller than the required control force, then a scale factor, α, reducing the elastic forces to new design forces is calculated. The structure is redesigned and the devices are installed. The proposed procedure ensures that the structure behaves elastically (without damage) for the expected earthquake at no additional cost, excluding that of buying and installing the control devices.
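The screening step described above, with the required control force taken as the gap between elastic and design forces, can be sketched in a few lines (Python; the handling of the insufficient-capacity branch is one plausible reading of the scale factor α, not a reproduction of the paper's procedure):

```python
def control_design(f_elastic, f_design, capacity):
    """Sketch of the screening step: forces in arbitrary consistent units.
    Returns (accepted, design_force_used, alpha)."""
    f_required = f_elastic - f_design      # control force the devices must supply
    if capacity >= f_required:
        # Devices can carry the gap: structure designed for code design forces.
        return True, f_design, f_design / f_elastic
    # Capacity insufficient: the structure must carry the shortfall, so the
    # design force is raised and alpha scales the elastic forces down to it
    # (assumed interpretation of the abstract's alpha).
    f_new = f_elastic - capacity
    return False, f_new, f_new / f_elastic
```

For example, with elastic force 1000, design force 400 and device capacity 700, the devices cover the required 600 and the structure keeps its design force; with capacity 500 the design force rises to 500 and α = 0.5.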
Using Smartphones to Detect Earthquakes
Kong, Q.; Allen, R. M.
2012-12-01
We are using the accelerometers in smartphones to record earthquakes. In the future, these smartphones may serve as a supplementary network to the current traditional networks for scientific research and real-time applications. Given the potential number of smartphones and the small separation of sensors, this new type of seismic dataset has significant potential, provided that the signal can be separated from the noise. We developed an application for Android phones to record the acceleration in real time. These records can be saved on the local phone or transmitted back to a server in real time. The accelerometers in the phones were evaluated by comparing their performance with a high-quality accelerometer while located on controlled shake tables for a variety of tests. The results show that the accelerometer in the smartphone can reproduce the characteristics of the shaking very well, even when the phone is left freely on the shake table. The nature of these datasets is also quite different from traditional networks due to the fact that smartphones move around with their owners. Therefore, we must distinguish earthquake signals from other daily activities. In addition to the shake table tests that accumulated earthquake records, we also recorded different human activities such as running, walking, driving, etc. An artificial neural network based approach was developed to distinguish these different records. It shows a 99.7% success rate in distinguishing earthquakes from the other typical human activities in our database. We are now ready to develop the basic infrastructure for a smartphone seismic network.
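The classification idea, separating earthquake-like shaking from everyday motion, can be illustrated with a far simpler stand-in for the authors' neural network: two hand-crafted features and a nearest-centroid rule on synthetic accelerometer traces (everything below, including the 50 Hz sampling rate, is assumed for illustration):

```python
# Illustration only: synthetic traces and a nearest-centroid rule,
# NOT the authors' neural-network classifier.
import math, random

random.seed(0)
FS = 50  # Hz, an assumed phone accelerometer sampling rate

def walking(n=500):
    # Strong ~2 Hz periodicity, as in a step cycle, plus small noise.
    return [math.sin(2 * math.pi * 2.0 * t / FS) + 0.1 * random.gauss(0, 1)
            for t in range(n)]

def quake(n=500):
    # Broadband, noise-like shaking.
    return [0.8 * random.gauss(0, 1) for _ in range(n)]

def features(x):
    # Root-mean-square amplitude and zero-crossing rate.
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    zcr = sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / len(x)
    return rms, zcr

def nearest_centroid(sample, centroids):
    f = features(sample)
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])))

centroids = {"walking": features(walking()), "quake": features(quake())}
```

Periodic walking yields a low zero-crossing rate (~0.08 at these settings) while broadband shaking sits near 0.5, so even this crude rule separates the two classes cleanly.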
Explanation of earthquake response spectra
Douglas, John
2017-01-01
This is a set of five slides explaining how earthquake response spectra are derived from strong-motion records and simple models of structures and their purpose within seismic design and assessment. It dates from about 2002 and I have used it in various introductory lectures on engineering seismology.
Transient-Switch-Signal Suppressor
Bozeman, Richard J., Jr.
1995-01-01
Circuit delays transmission of switch-opening or switch-closing signal until after preset suppression time. Used to prevent transmission of undesired momentary switch signal. Basic mode of operation simple. Beginning of switch signal initiates timing sequence. If switch signal persists after preset suppression time, circuit transmits switch signal to external circuitry. If switch signal no longer present after suppression time, switch signal deemed transient, and circuit does not pass signal on to external circuitry, as though no transient switch signal had occurred. Suppression time preset at value large enough to allow for damping of underlying pressure wave or other mechanical transient.
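The suppression logic reads like a classic debounce: an edge is reported only if it persists for the full suppression time. A software analogue (hypothetical timings; the actual device is a hardware circuit):

```python
# Software sketch of the suppressor's logic, not the hardware design.
class TransientSuppressor:
    def __init__(self, suppression_time):
        self.t_sup = suppression_time   # seconds the new level must persist
        self.stable = None              # level currently passed downstream
        self.candidate = None           # tentative new level being timed
        self.t_edge = None              # time the candidate edge began

    def sample(self, t, level):
        """Feed (time in seconds, switch level 0/1); returns debounced level."""
        if self.stable is None:          # first sample initializes the state
            self.stable = level
        if level != self.stable:
            if self.candidate != level:
                self.candidate, self.t_edge = level, t   # start timing sequence
            elif t - self.t_edge >= self.t_sup:
                self.stable = level                      # edge persisted: pass it on
        else:
            self.candidate = None        # signal reverted: deemed transient
        return self.stable
```

With a 50 ms suppression time, a 10 ms blip is swallowed, while a sustained closure is passed through once it has lasted 50 ms.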
Electromagnetic transients in power cables
da Silva, Filipe Faria
2013-01-01
From the more basic concepts to the most advanced ones where long and laborious simulation models are required, Electromagnetic Transients in Power Cables provides a thorough insight into the study of electromagnetic transients and underground power cables. Explanations and demonstrations of different electromagnetic transient phenomena are provided, from simple lumped-parameter circuits to complex cable-based high voltage networks, as well as instructions on how to model the cables. Supported throughout by illustrations, circuit diagrams and simulation results, each chapter contains exercises.
Solar eruptions - soil radon - earthquakes
International Nuclear Information System (INIS)
Saghatelyan, E.; Petrosyan, L.; Aghbalyan, Yu.; Baburyan, M.; Araratyan, L.
2004-01-01
For the first time a new natural phenomenon was established: a contrasting increase in the soil radon level under the influence of solar flares. Such an increase is one of the geochemical indicators of earthquakes. Most researchers consider this a phenomenon of exclusively terrestrial processes. Investigations of the link between earthquakes and solar activity carried out during the last decade in different countries are based on the analysis of statistical data ΣΕ(t) and W(t). As established, the overall seismicity of the Earth and its separate regions depends on an 11-year cycle of solar activity. The data provided in the paper, based on experimental studies, serve as a first step toward revealing cause-and-effect solar-terrestrial bonds in the series 'solar eruption - lithosphere radon - earthquakes'. They require further collection of experimental data. For the first time, through the radon constituent of terrestrial radiation, an objectification has been made of the elementary lattice of the Hartmann network contoured by the biolocation method. As found, radon concentration variations in Hartmann network nodes determine the dynamics of solar-terrestrial relationships. Of the three types of rapidly running processes conditioned by solar-terrestrial bonds, earthquakes are attributed to rapidly running destructive processes that occur most intensely at the junctures of tectonic massifs, along transform and deep faults. The basic factors provoking the earthquakes are both magnetic-structural effects and a long-term (over 5 months) bombardment of the lithosphere surface by highly energetic particles of corpuscular solar flows, as confirmed by photometry. As a result of the solar flares that occurred from 29 October to 4 November 2003, a sharply contrasting increase in soil radon was established, an earthquake indicator on the territory of Yerevan City. A month and a half later, earthquakes occurred in San Francisco, Iran, and Turkey
Liu, L.; He, K.; Mehl, R.; Wang, W.; Chen, Q.
2008-12-01
High-resolution near-surface geologic information is essential for earthquake ground motion prediction. The near-surface geology forms the critical constituent influencing seismic wave propagation, which is known as the local site effect. We have collected microtremor data at over 1000 sites in the Beijing area to extract much-needed earthquake engineering parameters (primarily sediment thickness, with shear wave velocity profiling at a few important control points) in this heavily populated urban area. Advanced data processing algorithms are employed at various stages in assessing the local site effect on earthquake ground motion. First, we used the empirical mode decomposition (EMD), also known as the Hilbert-Huang transform (HHT), to enhance the microtremor data analysis by excluding the local transients and continuous monochromatic industrial noises. With this enhancement we have significantly increased the number of data points useful for delineating sediment thickness in this area. Second, we used the cross-correlation of microtremor data acquired at pairs of adjacent sites to generate a 'pseudo-reflection' record, which can be treated as the Green function of the 1D layered earth model at the site. The sediment thickness information obtained this way is also consistent with the results obtained by the horizontal-to-vertical spectral ratio (HVSR) method. For most sites in this area, we can achieve 'self-consistent' results among the different processing schemes regarding the sediment thickness - the fundamental information to be used in assessing the local site effect. Finally, the pseudo-spectral time domain method was used to simulate the seismic wave propagation caused by a scenario earthquake in this area - the 1679 M8 Sanhe-Pinggu earthquake. The characteristics of the simulated earthquake ground motion show a general correlation with the thickness of the sediments in this area. And more importantly, it is also in agreement
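One standard way an HVSR peak is turned into sediment thickness is the quarter-wavelength relation h ≈ Vs/(4·f0). A sketch with a naive DFT on synthetic signals (all values illustrative and assumed, not the Beijing dataset):

```python
# Illustrative HVSR peak-picking on synthetic traces; a naive O(n^2) DFT
# is used for clarity. Not the authors' processing chain.
import cmath, math

def amp_spectrum(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def hvsr_peak(h, v, fs):
    """Frequency (Hz) of the maximum horizontal/vertical spectral ratio."""
    ah, av = amp_spectrum(h), amp_spectrum(v)
    floor = 1e-6 * max(av)               # ignore numerically empty bins
    best = max((ah[k] / av[k], k) for k in range(1, len(ah)) if av[k] > floor)
    return best[1] * fs / len(h)

def thickness(f0_hz, vs_m_per_s):
    # Quarter-wavelength relation: resonance of a soft layer over bedrock.
    return vs_m_per_s / (4.0 * f0_hz)
```

A 2 Hz resonance with an assumed shear velocity of 400 m/s maps to roughly 50 m of sediment, the kind of first-order estimate the survey above seeks at each site.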
Doser, D.I.; Olsen, K.B.; Pollitz, F.F.; Stein, R.S.; Toda, S.
2009-01-01
The occurrence of a right-lateral strike-slip earthquake in 1911 is inconsistent with the calculated 0.2-2.5 bar static stress decrease imparted by the 1906 rupture at that location on the Calaveras fault, and 5 yr of calculated post-1906 viscoelastic rebound does little to reload the fault. We have used all available first-motion, body-wave, and surface-wave data to explore possible focal mechanisms for the 1911 earthquake. We find that the event was most likely a right-lateral strike-slip event on the Calaveras fault, larger than, but otherwise resembling, the 1984 Mw 6.1 Morgan Hill earthquake in roughly the same location. Unfortunately, we could recover no unambiguous surface fault offset or geodetic strain data to corroborate the seismic analysis despite an exhaustive archival search. We calculated the static and dynamic Coulomb stress changes for three 1906 source models to understand stress transfer to the 1911 site. In contrast to the static stress shadow, the peak dynamic Coulomb stress imparted by the 1906 rupture promoted failure at the site of the 1911 earthquake by 1.4-5.8 bar. Perhaps because the sample is small and the aftershocks are poorly located, we find no correlation of 1906 aftershock frequency or magnitude with the peak dynamic stress, although all aftershocks sustained a calculated dynamic stress of ≥3 bar. Just 20 km to the south of the 1911 epicenter, we find that surface creep of the Calaveras fault at Hollister paused for ~17 yr after 1906, about the expected delay for the calculated static stress drop imparted by the 1906 earthquake when San Andreas fault postseismic creep and viscoelastic relaxation are included. Thus, the 1911 earthquake may have been promoted by the transient dynamic stresses, while Calaveras fault creep 20 km to the south appears to have been inhibited by the static stress changes.
Lessons of L'Aquila for Operational Earthquake Forecasting
Jordan, T. H.
2012-12-01
and failures-to-predict. The best way to achieve this separation is to use probabilistic rather than deterministic statements in characterizing short-term changes in seismic hazards. The ICEF recommended establishing OEF systems that can provide the public with open, authoritative, and timely information about the short-term probabilities of future earthquakes. Because the public needs to be educated into the scientific conversation through repeated communication of probabilistic forecasts, this information should be made available at regular intervals, during periods of normal seismicity as well as during seismic crises. In an age of nearly instant information and high-bandwidth communication, public expectations regarding the availability of authoritative short-term forecasts are rapidly evolving, and there is a greater danger that information vacuums will spawn informal predictions and misinformation. L'Aquila demonstrates why the development of OEF capabilities is a requirement, not an option.
Transient or permanent fisheye views
DEFF Research Database (Denmark)
Jakobsen, Mikkel Rønne; Hornbæk, Kasper
2012-01-01
Transient use of information visualization may support specific tasks without permanently changing the user interface. Transient visualizations provide immediate and transient use of information visualization close to and in the context of the user's focus of attention. Little is known, however, about the benefits and limitations of transient visualizations. We describe an experiment that compares the usability of a fisheye view that participants could call up temporarily, a permanent fisheye view, and a linear view: all interfaces gave access to source code in the editor of a widespread programming environment. Fourteen participants performed varied tasks involving navigation and understanding of source code. Participants used the three interfaces for between four and six hours in all. Time and accuracy measures were inconclusive, but subjective data showed a preference for the permanent fisheye view.
Transient thyrotoxicosis during nivolumab treatment
van Kooten, M. J.; van den Berg, G.; Glaudemans, A. W. J. M.; Hiltermann, T. J. N.; Groen, H. J. M.; Rutgers, A.; Links, T. P.
Two patients presented with transient thyrotoxicosis within 2-4 weeks after starting treatment with nivolumab. This thyrotoxicosis turned into hypothyroidism within 6-8 weeks. Temporary treatment with a beta blocker may be sufficient.
Couston, L.; Mei, C.; Alam, M.
2013-12-01
A large number of lakes are surrounded by steep and unstable mountains with slopes prone to failure. As a result, landslides are likely to occur and impact water sitting in closed reservoirs. These rare geological phenomena pose serious threats to dam reservoirs and nearshore facilities because they can generate unexpectedly large tsunami waves. In fact, the tallest wave experienced by contemporary humans occurred because of a landslide in the narrow bay of Lituya in 1958, and five years later, a deadly landslide tsunami overtopped Lake Vajont's dam, flooding and damaging villages along the lakefront and in the Piave valley. If unstable slopes and potential slides are detected ahead of time, inundation maps can be drawn to help people know the risks, and mitigate the destructive power of the ensuing waves. These maps give the maximum wave runup height along the lake's vertical and sloping boundaries, and can be obtained by numerical simulations. Keeping track of the moving shorelines along beaches is challenging in classical Eulerian formulations because the horizontal extent of the fluid domain can change over time. As a result, assuming a solid slide and nonbreaking waves, here we develop a nonlinear shallow-water model equation in the Lagrangian framework to address the problem of transient landslide-tsunamis. In this manner, the shorelines' three-dimensional motion is part of the solution. The model equation is hyperbolic and can be solved numerically by finite differences. Here, a 4th order Runge-Kutta method and a compact finite-difference scheme are implemented to integrate in time and spatially discretize the forced shallow-water equation in Lagrangian coordinates. The formulation is applied to different lake and slide geometries to better understand the effects of the lake's finite lengths and slide's forcing mechanism on the generated wavefield. Specifically, for a slide moving down a plane beach, we show that edge-waves trapped by the shoreline and free
International Nuclear Information System (INIS)
Hsu, Y.Y.
1974-01-01
The following papers related to two-phase flow are summarized: current assumptions made in two-phase flow modeling; two-phase unsteady blowdown from pipes, flow pattern in Laval nozzle and two-phase flow dynamics; dependence of radial heat and momentum diffusion; transient behavior of the liquid film around the expanding gas slug in a vertical tube; flooding phenomena in BWR fuel bundles; and transient effects in bubble two-phase flow. (U.S.)
Applications of the gambling score in evaluating earthquake predictions and forecasts
Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe
2010-05-01
This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
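The fair payoff rule for the discrete case follows directly from the description: betting r points against a reference probability p0 pays r(1 - p0)/p0 on success and costs r on failure, so the expected gain under the reference model is zero. A sketch (function name and signature are ours):

```python
# Sketch of the gambling score's fair payoff rule for a single discrete
# prediction; the reference model plays the house.
def gamble(reputation, bet, p_ref, occurred):
    """Forecaster bets `bet` points that the event occurs; the house pays
    at odds that are fair under the reference probability p_ref."""
    if occurred:
        return reputation + bet * (1.0 - p_ref) / p_ref
    return reputation - bet
```

Predicting a rare event (p_ref = 0.1) pays 9:1 when it happens, so a bold forecaster is rewarded exactly in proportion to the risk taken, and a forecaster no better than the reference model breaks even on average.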
Sagala, Ricardo Alfencius; Harjadi, P. J. Prih; Heryandoko, Nova; Sianipar, Dimas
2017-07-01
Sumatra is one of the highest-seismicity regions in Indonesia. The subduction of the Indo-Australian plate beneath the Eurasian plate in western Sumatra accounts for many of the significant earthquakes that occur in this area. These earthquake events can be used to analyze the seismotectonics of the Sumatra subduction zone and its system. In this study we use the teleseismic double-difference method to obtain a higher-precision earthquake distribution in the Sumatra subduction zone. We use a 3D nested regional-global velocity model and a combination of data from both the ISC (International Seismological Centre) and BMKG (Agency for Meteorology, Climatology and Geophysics, Indonesia). We successfully relocated about 6886 earthquakes that occurred in the period 1981-2015. We consider these new locations more precise than the regular bulletin, as the relocation results show greatly reduced RMS residuals of travel time. Using these data, we construct a new seismotectonic map of Sumatra. A well-defined geometry of the subducting slab, faults and volcanic arc can be obtained from the new bulletin. At depths of 140-170 km, many events occur as moderate-to-deep earthquakes, and we consider the relation of these slab events to the volcanic arc and the inland fault system. A reliable slab model is also built by regression using the newly relocated data. We also analyze the spatial-temporal seismotectonics using b-value mapping, inspected in detail in horizontal and vertical cross-sections.
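b-value mapping of the kind mentioned above typically rests on the Aki (1965) maximum-likelihood estimator, b = log10(e)/(mean(M) - Mc). A minimal sketch (omitting the magnitude-binning correction to Mc sometimes applied):

```python
# Aki (1965) maximum-likelihood b-value; a standard formula, sketched
# here without the half-bin correction to the completeness magnitude.
import math

def b_value(mags, m_c):
    """b-value from magnitudes at or above completeness magnitude m_c."""
    m = [x for x in mags if x >= m_c]
    mean_excess = sum(m) / len(m) - m_c   # mean magnitude above completeness
    return math.log10(math.e) / mean_excess
```

In a b-value map, this estimator is applied to the events inside each spatial cell (or each depth slice of a cross-section), so that lateral or down-dip changes in b can be inspected.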
Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources
Jia, Z.; Zhan, Z.
2017-12-01
Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.
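The Bayesian uncertainty analysis referred to above is typically driven by a Markov chain Monte Carlo sampler over the sub-event parameters. A generic random-walk Metropolis sketch, with all names illustrative (the authors' sampler and parameterization are not specified here):

```python
import math, random

def metropolis(log_post, theta0, step, n_samples, seed=0):
    """Minimal random-walk Metropolis sampler.

    log_post  -- log posterior density of a parameter vector
    theta0    -- starting parameter vector (e.g. sub-event locations/timings)
    step      -- standard deviation of the Gaussian proposal
    n_samples -- number of samples to draw
    """
    rng = random.Random(seed)
    theta = list(theta0)
    lp = log_post(theta)
    samples = []
    for _ in range(n_samples):
        # propose a Gaussian perturbation of every parameter
        prop = [t + rng.gauss(0.0, step) for t in theta]
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(list(theta))
    return samples
```

The spread of the retained samples then quantifies the uncertainty of each sub-event parameter directly, which is the advantage a low-dimensional parameterization buys.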
Regular Expression Matching and Operational Semantics
Directory of Open Access Journals (Sweden)
Asiri Rathnayake
2011-08-01
Full Text Available Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
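The lockstep idea the paper formalizes can be illustrated compactly: compile the regular expression to a Thompson NFA and advance a *set* of states across the input, which avoids backtracking entirely. A minimal sketch for a toy grammar with literals, '|', '*' and parentheses (assumes well-formed patterns; the paper's derived machines are considerably more refined):

```python
def parse(pattern):
    """Recursive-descent parser for a toy grammar:
    alt := cat ('|' cat)* ; cat := rep* ; rep := atom '*'? ;
    atom := literal | '(' alt ')'. Assumes a well-formed pattern."""
    pos = 0

    def alt():
        nonlocal pos
        node = cat()
        while pos < len(pattern) and pattern[pos] == '|':
            pos += 1
            node = ('alt', node, cat())
        return node

    def cat():
        nonlocal pos
        node = ('eps',)
        while pos < len(pattern) and pattern[pos] not in '|)':
            part = rep()
            node = part if node == ('eps',) else ('cat', node, part)
        return node

    def rep():
        nonlocal pos
        if pattern[pos] == '(':
            pos += 1
            node = alt()
            pos += 1          # consume ')'
        else:
            node = ('lit', pattern[pos])
            pos += 1
        if pos < len(pattern) and pattern[pos] == '*':
            pos += 1
            node = ('star', node)
        return node

    return alt()

def build(node, trans):
    """Thompson construction: add states/edges to trans, return (start, accept).
    trans[s] is a list of (symbol_or_None, target) edges; None is epsilon."""
    def new():
        trans.append([])
        return len(trans) - 1

    s, a = new(), new()
    kind = node[0]
    if kind == 'eps':
        trans[s].append((None, a))
    elif kind == 'lit':
        trans[s].append((node[1], a))
    elif kind == 'cat':
        s1, a1 = build(node[1], trans)
        s2, a2 = build(node[2], trans)
        trans[s].append((None, s1))
        trans[a1].append((None, s2))
        trans[a2].append((None, a))
    elif kind == 'alt':
        for child in node[1:]:
            cs, ca = build(child, trans)
            trans[s].append((None, cs))
            trans[ca].append((None, a))
    elif kind == 'star':
        cs, ca = build(node[1], trans)
        trans[s].append((None, cs))
        trans[s].append((None, a))     # skip the loop entirely
        trans[ca].append((None, cs))   # go around again
        trans[ca].append((None, a))
    return s, a

def eclose(states, trans):
    """Epsilon-closure of a set of states."""
    stack, seen = list(states), set(states)
    while stack:
        for sym, t in trans[stack.pop()]:
            if sym is None and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def match(pattern, text):
    """Lockstep simulation: advance the whole state set per input symbol."""
    trans = []
    start, accept = build(parse(pattern), trans)
    current = eclose({start}, trans)
    for ch in text:
        moved = {t for s in current for sym, t in trans[s] if sym == ch}
        current = eclose(moved, trans)
    return accept in current
```

Because the state set is bounded by the number of NFA states, matching runs in time proportional to the product of pattern and input lengths, the worst-case guarantee Thompson's construction provides.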
Regularities, Natural Patterns and Laws of Nature
Directory of Open Access Journals (Sweden)
Stathis Psillos
2014-02-01
Full Text Available The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.
Napa earthquake: An earthquake in a highly connected world
Bossu, R.; Steed, R.; Mazet-Roux, G.; Roussel, F.
2014-12-01
The Napa earthquake recently occurred close to Silicon Valley. This makes it a good candidate to study what social networks, wearable objects and website traffic analysis (flashsourcing) can tell us about the way eyewitnesses react to ground shaking. In the first part, we compare the ratio of people publishing tweets and the ratio of people visiting the EMSC (European-Mediterranean Seismological Centre) real-time information website in the first minutes following the earthquake with the results published by Jawbone, which show that the proportion of people waking up depends (naturally) on the epicentral distance. The key question is whether the proportions of inhabitants tweeting or visiting the EMSC website are similar to the proportion of people waking up as shown by the Jawbone data. If so, this supports the premise that all of these methods provide a reliable image of the relative ratio of people waking up. The second part of the study focuses on the reaction time for both Twitter and EMSC website access. We show, similarly to what was demonstrated for the Mineral, Virginia, earthquake (Bossu et al., 2014), that hit times on the EMSC website follow the propagation of the P waves and that 2 minutes of website traffic is sufficient to determine the epicentral location of an earthquake on the other side of the Atlantic. We also compare these with the publication times of messages on Twitter. Finally, we check whether the numbers of tweets and of visitors relative to the number of inhabitants are correlated with the local level of shaking. Together these results will tell us whether the reaction of eyewitnesses to ground shaking as observed through Twitter and EMSC website analysis is tool-specific or reflects people's actual reactions.
The Pan-STARRS Survey for Transients (PSST)
Huber, Mark; Carter Chambers, Kenneth; Flewelling, Heather; Smartt, Stephen J.; Smith, Ken; Wright, Darryl
2015-08-01
The Pan-STARRS1 (PS1) Science Consortium finished the 3Pi survey of the whole sky north of -30 degrees between 2010 and 2014 in grizy (PS1-specific filters), and the PS1 telescope has since been running a wide-field survey for near-Earth objects, funded by NASA through the NEO Observation Program. This survey takes data in a w-band (a wide-band filter spanning g, r, i) in dark time, and in combinations of r, i, z and y during bright time. We are now processing these data through the Pan-STARRS IPP difference imaging pipeline and recovering stationary transients. Effectively, the 3Pi survey for transients that started during the PS1 Science Consortium is being continued under the new NEO-optimized operations mode. The observing procedure is to take a quad of exposures, typically 30-45 seconds each and separated by 10-20 minutes, typically revealing high-confidence transients (greater than 5-sigma) to depths of i ~ 20.7, y ~ 18.3 (AB mags); this cadence may be repeated on subsequent nights in a return pointing. Continuing the public release of the first 880 transients from the PS1 3Pi survey during the search period September 2013 - January 2014, transient events found in data from the Pan-STARRS NEO Science Consortium are, beginning February 2015, regularly added. These are mostly supernova candidates, but the list also contains some variable stars, AGN, and nuclear transients. The light curves are too sparsely sampled to be of standalone use, but they may be useful to the community in combination with existing data (e.g. Fraser et al. 2013, ApJ, 779, L8) and for constraining explosion and rise times (e.g. Nicholl et al. 2013, Nature, 502, 346), as well as many being new discoveries. For additional details visit http://star.pst.qub.ac.uk/ps1threepi/
Countermeasures to earthquakes in nuclear plants
International Nuclear Information System (INIS)
Sato, Kazuhide
1979-01-01
The contribution of atomic energy to mankind is immeasurable, but the danger of radioactivity demands special care. Therefore, safety has been regarded as paramount in the design of nuclear power plants, and in Japan, where earthquakes occur frequently, countermeasures to earthquakes are naturally incorporated in the safety examination. The radioactive substances handled in nuclear power stations and spent-fuel reprocessing plants are briefly explained. The occurrence of earthquakes cannot be predicted effectively, and the disaster due to earthquakes can be remarkably large. In nuclear plants, prevention of damage to the facilities and maintenance of their functions are required at the time of earthquakes. Regarding the siting of nuclear plants, the history of earthquakes, the possible magnitude of earthquakes, the properties of the ground and the position of the plants should be examined. After the place of installation has been decided, the earthquake used for design is selected by evaluating active faults and determining the standard earthquakes. As the fundamentals of aseismic design, the classification according to importance, the design earthquakes corresponding to the classes of importance, the combination of loads, and allowable stress are explained. (Kako, I.)
Update earthquake risk assessment in Cairo, Egypt
Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan
2017-07-01
The Cairo earthquake (12 October 1992; mb = 5.8) is still, 25 years later, one of the most painful events etched into Egyptians' memory. This is not due to the strength of the earthquake but to the accompanying losses and damage (561 dead, 10,000 injured, and 3000 families left homeless). Nowadays, the most important question that should arise is "what if this earthquake were repeated today?" In this study, we simulate the ground-motion shaking of an earthquake of the same size as that of 12 October 1992 and the consequent socio-economic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Overall, the risk assessment clearly indicates that the losses and damage could double or triple in Cairo compared to the 1992 earthquake. The risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) lie at high seismic risk, while three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at a low seismic risk level. Moreover, the building damage estimates show that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk: deteriorating buildings and infrastructure make the city particularly vulnerable to earthquakes. For instance, more than 90% of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb), and about 75% of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management. Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety
Fractional Regularization Term for Variational Image Registration
Directory of Open Access Journals (Sweden)
Rafael Verdú-Monedero
2009-01-01
Full Text Available Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a genuinely gradual transition from a diffusion registration to a curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with actual 3D images show the validity of this approach.
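The frequency-domain formulation can be illustrated with a toy 1-D energy: a fractional derivative of order alpha becomes multiplication of the spectrum by |omega|^alpha, so varying alpha slides the penalty continuously between diffusion-like (alpha = 1) and curvature-like (alpha = 2) behavior. A sketch using a Riesz-type definition and a naive DFT; the paper's operator and discretization may differ in detail:

```python
import cmath, math

def dft(u):
    """Naive DFT, adequate for small illustrative signals."""
    n = len(u)
    return [sum(u[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fractional_regularizer(u, alpha):
    """Energy ||D^alpha u||^2 of a periodic 1-D field, evaluated in the
    frequency domain: each spectral coefficient is weighted by
    |omega_k|^(2*alpha), the squared gain of the fractional derivative."""
    n = len(u)
    total = 0.0
    for k, Uk in enumerate(dft(u)):
        f = k if k <= n // 2 else k - n        # signed frequency index
        omega = 2.0 * math.pi * f / n
        total += abs(omega) ** (2.0 * alpha) * abs(Uk) ** 2
    return total / n
```

A constant field costs nothing for any alpha > 0, while a rapidly oscillating field is penalized more heavily as alpha grows, which is the diffusion-to-curvature transition in miniature.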
International Nuclear Information System (INIS)
Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.
2004-01-01
We construct a family of time- and angular-dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived, including a charged S-brane and an additional dilatonic field. (author)
Online Manifold Regularization by Dual Ascending Procedure
Directory of Open Access Journals (Sweden)
Boliang Sun
2013-01-01
Full Text Available We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of the hinge function is the key to transferring manifold regularization from the offline to the online setting in this paper. Our algorithms are derived by gradient ascent on the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves the way for the design and analysis of online manifold regularization algorithms.
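To make the manifold term concrete, here is a primal stochastic-gradient analogue for a linear hypothesis, not the paper's dual-ascent algorithm: each update combines a hinge-loss subgradient on a labeled example with a term that pulls predictions on neighboring (possibly unlabeled) points together. All names and constants are illustrative:

```python
def online_manifold_step(w, x, y, neighbors, eta=0.1, lam=0.01, gamma=0.01):
    """One online update for a linear classifier f(x) = <w, x>.

    w         -- current weight vector
    x, y      -- labeled example with y in {-1, +1}
    neighbors -- pairs (xa, xb) of nearby points whose predictions
                 the manifold term (f(xa) - f(xb))^2 pulls together
    eta, lam, gamma -- step size, L2 weight, manifold weight (illustrative)
    """
    d = len(w)
    grad = [lam * wi for wi in w]                 # L2 regularization
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    if margin < 1.0:                              # hinge subgradient
        for i in range(d):
            grad[i] -= y * x[i]
    for xa, xb in neighbors:                      # manifold smoothness term
        diff = sum(wi * (a - b) for wi, a, b in zip(w, xa, xb))
        for i in range(d):
            grad[i] += 2.0 * gamma * diff * (xa[i] - xb[i])
    return [wi - eta * gi for wi, gi in zip(w, grad)]
```

Because the update is purely incremental, the hypothesis can track a drifting target, the setting the paper highlights; the buffering strategies bound how many neighbor pairs are retained per step.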
International Nuclear Information System (INIS)
Dan, Kazuo
2006-01-01
The Regulatory Guide for Aseismic Design of Nuclear Reactor Facilities was revised on 19 September 2006. Six factors for the evaluation of earthquake vibration are considered on the basis of recent earthquakes. They are 1) evaluation of earthquake vibration by a method using a fault model, 2) investigation and approval of active faults, 3) direct-hit earthquakes, 4) assumption of a short active fault as the hypocentral fault, 5) locality of the earthquake and the earthquake vibration, and 6) remaining risk. The guiding principle of the revision required a new evaluation method for earthquake vibration using fault models, and evaluation of the probability of earthquake vibration. The remaining risk means that facilities and people are endangered when an earthquake stronger than the design basis occurs; accordingly, scatter has to be considered in the evaluation of earthquake vibration. The earthquake belt of the 1995 Hyogo-Nanbu earthquake and its strong vibration pulse, the relation between the length of the surface earthquake fault and the hypocentral fault, and the distribution of seismic intensity off Kushiro in 1993 are shown. (S.Y.)
Directory of Open Access Journals (Sweden)
Sally Johnston
2012-06-01
Full Text Available Problem: Two earthquakes recently struck the Christchurch region. The 2010 earthquake in Canterbury was strong yet caused less damage than the 2011 earthquake in Christchurch, which, although not as strong, was more damaging and resulted in 185 deaths. Both required activation of a food safety response. Context: The food safety response for both earthquakes focused on reducing the risk of gastroenteritis by limiting the use of contaminated water and food, both in households and in food businesses. Additional food safety risks were identified in the 2011 Christchurch earthquake due to the use of large-scale catering for rescue workers, volunteers and residents unable to return home. Action: Using a risk assessment framework, the food safety response involved providing water and food safety advice, issuing a boil-water notice for the region and initiating water testing on reticulation systems. Food businesses were contacted to ensure the necessary measures were being taken. Additional action during the 2011 Christchurch earthquake response included making contact with food businesses using checklists and principles developed in the first response and having regular contact with those providing catering for large numbers. Outcome: In the 2010 earthquake in Canterbury, several cases of gastroenteritis were reported, although most resulted from person-to-person contact rather than contamination of food. There was a small increase in gastroenteritis cases following the 2011 Christchurch earthquake. Discussion: The food safety response for both earthquakes was successful in meeting the goal of ensuring that foodborne illness did not put additional pressure on hospitals or affect search and rescue efforts.
Evidence of a tectonic transient within the Idrija fault system in Western Slovenia
Vičič, Blaž; Costa, Giovanni; Aoudia, Abdelkrim
2017-04-01
within the period 2006-2016 is investigated. As a result, a high temporal correlation of different bursts of seismicity all along the Idrija fault system is observed in the years 2009 and 2010. These bursts of seismicity, located at seismogenic depths, also correlate well with clear changes in the pattern of surface deformation recorded continuously by the TM-71 extensometer in Postojna Cave. Four small clusters of seismicity start in late 2009 in the north-western part of the Idrija fault system and migrate along the neighbouring faults in the region through 2010, together forming a swarm-like cluster of seismicity. In the same period a seismic swarm took place along the Predjama fault, which is monitored by the Postojna extensometer, and lasted more than one year. Finally, in September 2010, the elevated seismicity of the Idrija fault system ended with two Mw > 3.5 earthquakes in its south-eastern part. In this study we report a clear time-dependent tectonic transient that took place along the Idrija fault system between 2009 and 2010 and discuss the physics of earthquake swarms vs. the mechanics of active faults and the related seismogenesis.
A critical review of Electric Earthquake Precursors
Directory of Open Access Journals (Sweden)
F. Vallianatos
2001-06-01
Full Text Available The generation of transient electric potential prior to rupture has been demonstrated in a number of laboratory experiments involving both dry and wet rock specimens. Several different electrification effects are responsible for these observations, but how these may scale up co-operatively in large heterogeneous rock volumes, to produce observable macroscopic signals, is still incompletely understood. Accordingly, the nature and properties of possible Electric Earthquake Precursors (EEP) are still inadequately understood. For a long time observations have been fragmentary, narrow band and oligo-parametric (for instance, the magnetic field was not routinely measured). In general, the discrimination of purported EEP signals relied on "experience" and ad hoc empirical rules that could be shown unable to guarantee the validity of the data. In consequence, experimental studies have produced a prolific variety of signal shape, complexity and duration but no explanation for the apparently indefinite diversity. A set of inconsistent or conflicting ideas attempted to explain such observations, including different concepts about the EEP source region (near the observer or at the earthquake focus) and propagation (frequently assumed to be guided by peculiar geoelectric structure). Statistics was also applied to establish the "beyond chance" association between presumed EEP signals and earthquakes. In the absence of well constrained data, this approach ended up with intense debate and controversy but no useful results. The response of the geophysical community was scepticism and by the mid-90's, the very existence of EEP was debated. At that time, a major re-thinking of EEP research began to take place, with reformulation of its queries and objectives and refocusing on the exploration of fundamental concepts, less on field experiments. The first encouraging results began to appear in the last two years of the 20th century. Observation technologies are mature
A critical review of electric earthquake precursors
Energy Technology Data Exchange (ETDEWEB)
Tzanis, A. [Athens Univ., Athens (Greece). Dept. of Geophysics and Geothermy; Vallianatos, F. [Technological Educational Institute of Crete, Chania (Greece)
2001-04-01
The generation of transient electric potential prior to rupture has been demonstrated in a number of laboratory experiments involving both dry and wet rock specimens. Several different electrification effects are responsible for these observations, but how these may scale up co-operatively in large heterogeneous rock volumes, to produce observable macroscopic signals, is still incompletely understood. Accordingly, the nature and properties of possible Electric Earthquake Precursors (EEP) are still inadequately understood. For a long time observations have been fragmentary, narrow band and oligo-parametric (for instance, the magnetic field was not routinely measured). In general, the discrimination of purported EEP signals relied on experience and ad hoc empirical rules that could be shown unable to guarantee the validity of the data. In consequence, experimental studies have produced a prolific variety of signal shape, complexity and duration but no explanation for the apparently indefinite diversity. A set of inconsistent or conflicting ideas attempted to explain such observations, including different concepts about the EEP source region (near the observer or at the earthquake focus) and propagation (frequently assumed to be guided by peculiar geoelectric structure). Statistics was also applied to establish the beyond chance association between presumed EEP signals and earthquakes. In the absence of well constrained data, this approach ended up with intense debate and controversy but no useful results. The response of the geophysical community was scepticism and by the mid-90's, the very existence of EEP was debated. At that time, a major re-thinking of EEP research began to take place, with reformulation of its queries and objectives and refocusing on the exploration of fundamental concepts, less on field experiments. The first encouraging results began to appear in the last two years of the 20th century. Observation technologies are mature and can guarantee
A smartphone application for earthquakes that matter!
Bossu, Rémy; Etivant, Caroline; Roussel, Fréderic; Mazet-Roux, Gilles; Steed, Robert
2014-05-01
Smartphone applications have swiftly become one of the most popular tools for rapid reception of earthquake information by the public, some of them having been downloaded more than 1 million times! The advantages are obvious: wherever they are, users can be automatically informed when an earthquake has struck. Just by setting a magnitude threshold and an area of interest, there is no longer any need to browse the internet, as the information reaches you automatically and instantaneously! One question remains: are the provided earthquake notifications always relevant for the public? What are the earthquakes that really matter to laypeople? One clue may be derived from newspaper reports showing that, a while after damaging earthquakes, many eyewitnesses scrap the application they installed just after the mainshock. Why? Because either the magnitude threshold is set too high and many felt earthquakes are missed, or it is set too low and the majority of notifications relate to unfelt earthquakes, only increasing anxiety among the population with each new update. Felt and damaging earthquakes are the ones that matter most to the public (and authorities). They are the ones of societal importance, even when of small magnitude. A smartphone application developed by EMSC (Euro-Med Seismological Centre) with the financial support of the Fondation MAIF aims at providing suitable notifications for earthquakes by collating different information threads covering tsunamigenic, potentially damaging and felt earthquakes. Tsunamigenic earthquakes are considered here to be those that are the subject of alert or information messages from the PTWC (Pacific Tsunami Warning Centre), while potentially damaging earthquakes are identified through an automated system called EQIA (Earthquake Qualitative Impact Assessment) developed and operated at EMSC. This rapidly assesses earthquake impact by comparing the population exposed to each expected
Regular transport dynamics produce chaotic travel times.
Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro
2014-06-01
In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.
Regularity of difference equations on Banach spaces
Agarwal, Ravi P; Lizama, Carlos
2014-01-01
This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advance knowledge of and interest in functional analysis.
PET regularization by envelope guided conjugate gradients
International Nuclear Information System (INIS)
Kaufman, L.; Neumaier, A.
1996-01-01
The authors propose a new way to iteratively solve large-scale ill-posed problems, and in particular the image reconstruction problem in positron emission tomography, by exploiting the relation between Tikhonov regularization and multiobjective optimization to obtain iterative approximations to the Tikhonov L-curve and its corner. Monitoring the change of the approximate L-curves allows the regularization parameter to be adjusted adaptively during a preconditioned conjugate gradient iteration, so that the desired solution can be reconstructed with a small number of iterations
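The L-curve the authors monitor plots residual norm against solution norm on log scales as the regularization parameter varies; its corner marks the parameter the method tracks adaptively. A toy sketch for a diagonal system (a stand-in for the SVD of the PET operator; not the authors' code):

```python
import math

def l_curve_points(s, b, lambdas):
    """Points (log ||Ax - b||, log ||x||) of the Tikhonov L-curve for a
    diagonal system A = diag(s): the regularized solution applies the
    standard filter factors s_i / (s_i^2 + lambda^2) to the data b."""
    pts = []
    for lam in lambdas:
        res2 = sol2 = 0.0
        for si, bi in zip(s, b):
            xi = si * bi / (si * si + lam * lam)   # filtered solution
            res2 += (si * xi - bi) ** 2
            sol2 += xi * xi
        pts.append((0.5 * math.log(res2 + 1e-300),
                    0.5 * math.log(sol2 + 1e-300)))
    return pts
```

Small lambda gives a small residual but a large (noise-amplified) solution norm, large lambda the reverse; the trade-off traces the characteristic L shape whose corner balances the two objectives.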
Matrix regularization of embedded 4-manifolds
International Nuclear Information System (INIS)
Trzetrzelewski, Maciej
2012-01-01
We consider products of two 2-manifolds such as S^2 × S^2, embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N) ⊗ SU(N), i.e. functions on the manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 × N^2 matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S^3 possible as well).
Transient magnetoviscosity of dilute ferrofluids
International Nuclear Information System (INIS)
Soto-Aquino, Denisse; Rinaldi, Carlos
2011-01-01
The magnetic field induced change in the viscosity of a ferrofluid, commonly known as the magnetoviscous effect and parameterized through the magnetoviscosity, is one of the most interesting and practically relevant aspects of ferrofluid phenomena. Although the steady-state behavior of ferrofluids under constant applied magnetic fields has received considerable attention, comparatively little attention has been given to the transient response of the magnetoviscosity to changes in the applied magnetic field or rate of shear deformation. Such transient response can provide further insight into the dynamics of ferrofluids and find practical application in the design of devices that take advantage of the magnetoviscous effect and inevitably must deal with changes in the applied magnetic field and deformation. In this contribution Brownian dynamics simulations and a simple model based on the ferrohydrodynamics equations are applied to explore the transient magnetoviscosity in two cases: (I) a ferrofluid in a constant shear flow in which the magnetic field is suddenly turned on, and (II) a ferrofluid in a constant magnetic field in which the shear flow is suddenly started. Both simulations and analysis show that the transient approach to a steady-state magnetoviscosity can be either monotonic or oscillatory, depending on the relative magnitudes of the applied magnetic field and shear rate. Research Highlights: Rotational Brownian dynamics simulations were used to study the transient behavior of the magnetoviscosity of ferrofluids. Damped and oscillatory approaches to the steady-state magnetoviscosity were observed for step changes in shear rate and magnetic field. A model based on the ferrohydrodynamics equations qualitatively captured the damped and oscillatory features of the transient response. The transient behavior is due to the interplay of hydrodynamic, magnetic, and Brownian torques on the suspended particles.
Renormalization group theory of earthquakes
Directory of Open Access Journals (Sweden)
H. Saleur
1996-01-01
Full Text Available We study theoretically the physical origin of the proposed discrete scale invariance of earthquake processes, at the origin of the universal log-periodic corrections to scaling recently discovered in regional seismic activity (Sornette and Sammis, 1995). The discrete scaling symmetries which may be present at smaller scales are shown to be robust on a global scale with respect to disorder. Furthermore, a single complex exponent is sufficient in practice to capture the essential properties of the leading correction to scaling, whose real part may be renormalized by disorder and thus be specific to the system. We then propose a new mechanism for discrete scale invariance, based on the interplay between dynamics and disorder. The existence of non-linear corrections to the renormalization group flow implies that an earthquake is not an isolated 'critical point', but is accompanied by an embedded set of 'critical points': its foreshocks and any subsequent shocks for which it may be a foreshock.
The 2016 Kumamoto earthquake sequence.
Kato, Aitaro; Nakamura, Kouji; Hiyama, Yohei
2016-01-01
Beginning in April 2016, a series of shallow, moderate to large earthquakes with associated strong aftershocks struck the Kumamoto area of Kyushu, SW Japan. An Mj 7.3 mainshock occurred on 16 April 2016, close to the epicenter of an Mj 6.5 foreshock that occurred about 28 hours earlier. The intense seismicity released the accumulated elastic energy by right-lateral strike slip, mainly along two known, active faults. The mainshock rupture propagated along multiple fault segments with different geometries. The faulting style is reasonably consistent with regional deformation observed on geologic timescales and with the stress field estimated from seismic observations. One striking feature of this sequence is intense seismic activity, including a dynamically triggered earthquake in the Oita region. Following the mainshock rupture, postseismic deformation has been observed, as well as expansion of the seismicity front toward the southwest and northwest.
Earthquake lights and rupture processes
Directory of Open Access Journals (Sweden)
T. V. Losseva
2005-01-01
Full Text Available A physical model of earthquake lights is proposed. It is suggested that magnetic diffusion from the source region of the electric and magnetic fields is the dominant process, explaining the rather high localization of the light flashes. A 3D numerical code has been developed that takes into account an arbitrary distribution of currents caused by ground motion, the conductivity in the ground and at its surface, and the presence of sea water above the epicenter or near the ruptured segments of the fault. Simulations of the 1995 Kobe earthquake were conducted taking into account the sea water with a realistic geometry of the shores. The results do not contradict eyewitness reports or the scarce measurements of the electric and magnetic fields at large distances from the epicenter.
Dim prospects for earthquake prediction
Geller, Robert J.
I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: "I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a 'proven nonscience' [Geller, 1997a] is a paradigm for others to copy." Readers are invited to verify for themselves that neither "proven nonscience" nor any similar phrase was used by Geller [1997a].
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years, impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors have aided these developments. The open data base of earthquake observations has expanded vastly now that the two powerful Sentinel-1 SAR sensors are in orbit. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inference are becoming more advanced. By now, data error propagation is widely implemented, and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. InSAR-derived surface displacements and seismological waveforms are also combined more regularly, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words, the disciplinary differences between geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join the community efforts with the particular goal of improving crustal earthquake source inferences in generally poorly instrumented areas, where often only the global backbone observations of earthquakes are available, provided by seismological broadband sensor networks and, recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimation as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite
On the plant operators performance during earthquake
International Nuclear Information System (INIS)
Kitada, Y.; Yoshimura, S.; Abe, M.; Niwa, H.; Yoneda, T.; Matsunaga, M.; Suzuki, T.
1994-01-01
There is little data on which to judge the performance of plant operators during and after strong earthquakes. In order to obtain such data and enhance the reliability of plant operation, a Japanese utility and a power plant manufacturer carried out a vibration test using a shaking table. The purpose of the test was to investigate operator performance, i.e., the quickness and correctness of switch handling and panel meter read-out. The movement of chairs during an earthquake was also of interest, because if the chairs moved significantly or turned over during a strong earthquake, some arresting mechanism would be required for the chair. Although there were differences between the simulated earthquake motions used and actual earthquakes, mainly due to the specifications of the shaking table, the earthquake motions had almost no influence on the operators' capability (performance) in operating the simulated console and the personal computers
Earthquake evaluation of a substation network
International Nuclear Information System (INIS)
Matsuda, E.N.; Savage, W.U.; Williams, K.K.; Laguens, G.C.
1991-01-01
The impact of the occurrence of a large, damaging earthquake on a regional electric power system is a function of the geographical distribution of strong shaking, the vulnerability of various types of electric equipment located within the affected region, and operational resources available to maintain or restore electric system functionality. Experience from numerous worldwide earthquake occurrences has shown that seismic damage to high-voltage substation equipment is typically the reason for post-earthquake loss of electric service. In this paper, the authors develop and apply a methodology to analyze earthquake impacts on Pacific Gas and Electric Company's (PG and E's) high-voltage electric substation network in central and northern California. The authors' objectives are to identify and prioritize ways to reduce the potential impact of future earthquakes on our electric system, refine PG and E's earthquake preparedness and response plans to be more realistic, and optimize seismic criteria for future equipment purchases for the electric system
Earthquake forewarning in the Cascadia region
Gomberg, Joan S.; Atwater, Brian F.; Beeler, Nicholas M.; Bodin, Paul; Davis, Earl; Frankel, Arthur; Hayes, Gavin P.; McConnell, Laura; Melbourne, Tim; Oppenheimer, David H.; Parrish, John G.; Roeloffs, Evelyn A.; Rogers, Gary D.; Sherrod, Brian; Vidale, John; Walsh, Timothy J.; Weaver, Craig S.; Whitmore, Paul M.
2015-08-10
This report, prepared for the National Earthquake Prediction Evaluation Council (NEPEC), is intended as a step toward improving communications about earthquake hazards between information providers and users who coordinate emergency-response activities in the Cascadia region of the Pacific Northwest. NEPEC charged a subcommittee of scientists with writing this report about forewarnings of increased probabilities of a damaging earthquake. We begin by clarifying some terminology; a “prediction” refers to a deterministic statement that a particular future earthquake will or will not occur. In contrast to the 0- or 100-percent likelihood of a deterministic prediction, a “forecast” describes the probability of an earthquake occurring, which may range from >0 to <100 percent, based on processes or conditions, which may include increased rates of M>4 earthquakes on the plate interface north of the Mendocino region
Data base pertinent to earthquake design basis
International Nuclear Information System (INIS)
Sharma, R.D.
1988-01-01
Mitigation of earthquake risk from impending strong earthquakes is possible provided the hazard can be assessed, and translated into appropriate design inputs. This requires defining the seismic risk problem, isolating the risk factors and quantifying risk in terms of physical parameters, which are suitable for application in design. Like all other geological phenomena, past earthquakes hold the key to the understanding of future ones. Quantification of seismic risk at a site calls for investigating the earthquake aspects of the site region and building a data base. The scope of such investigations is illustrated in Figures 1 and 2. A more detailed definition of the earthquake problem in engineering design is given elsewhere (Sharma, 1987). The present document discusses the earthquake data base, which is required to support a seismic risk evaluation programme in the context of the existing state of the art. (author). 8 tables, 10 figs., 54 refs
On a correspondence between regular and non-regular operator monotone functions
DEFF Research Database (Denmark)
Gibilisco, P.; Hansen, Frank; Isola, T.
2009-01-01
We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....
Pressure transients across HEPA filters
International Nuclear Information System (INIS)
Gregory, W.; Reynolds, G.; Ricketts, C.; Smith, P.R.
1977-01-01
Nuclear fuel cycle facilities require ventilation for health and safety reasons. High efficiency particulate air (HEPA) filters are located within ventilation systems to trap radioactive dust released in reprocessing and fabrication operations. Pressure transients within the air cleaning systems may be such that the effectiveness of the filtration system is questioned under certain accident conditions. These pressure transients can result from both natural and man-made phenomena: atmospheric depressurization caused by a tornado, or explosions and nuclear excursions, can initiate pressure pulses that create undesirable conditions across HEPA filters. Tornado depressurization is a relatively slow transient compared to the pressure pulses that result from combustible hydrogen-air mixtures. Experimental investigation of these pressure transients across air cleaning equipment has been undertaken by Los Alamos Scientific Laboratory and New Mexico State University. An experimental apparatus has been constructed to impose pressure pulses across HEPA filters. The experimental equipment is described, as well as preliminary results using variable pressurization rates. Two modes of filtration of an aerosol injected upstream of the filter are examined. Laser instrumentation for measuring the aerosol release during the transient is described
Recent development of transient electronics
Directory of Open Access Journals (Sweden)
Huanyu Cheng
2016-01-01
Full Text Available Transient electronics are an emerging class of electronics with the unique characteristic to completely dissolve within a programmed period of time. Since no harmful byproducts are released, these electronics can be used in the human body as a diagnostic tool, for instance, or they can be used as environmentally friendly alternatives to existing electronics which disintegrate when exposed to water. Thus, the most crucial aspect of transient electronics is their ability to disintegrate in a practical manner and a review of the literature on this topic is essential for understanding the current capabilities of transient electronics and areas of future research. In the past, only partial dissolution of transient electronics was possible, however, total dissolution has been achieved with a recent discovery that silicon nanomembrane undergoes hydrolysis. The use of single- and multi-layered structures has also been explored as a way to extend the lifetime of the electronics. Analytical models have been developed to study the dissolution of various functional materials as well as the devices constructed from this set of functional materials and these models prove to be useful in the design of the transient electronics.
Wide Field Radio Transient Surveys
Bower, Geoffrey
2011-04-01
The time domain of the radio wavelength sky has been only sparsely explored. Nevertheless, serendipitous discovery and results from limited surveys indicate that there is much to be found on timescales from nanoseconds to years and at wavelengths from meters to millimeters. These observations have revealed unexpected phenomena such as rotating radio transients and coherent pulses from brown dwarfs. Additionally, archival studies have revealed an unknown class of radio transients without radio, optical, or high-energy hosts. The new generation of centimeter-wave radio telescopes such as the Allen Telescope Array (ATA) will exploit wide fields of view and flexible digital signal processing to systematically explore radio transient parameter space, as well as lay the scientific and technical foundation for the Square Kilometer Array. Known unknowns that will be the target of future transient surveys include orphan gamma-ray burst afterglows, radio supernovae, tidally-disrupted stars, flare stars, and magnetars. While probing the variable sky, these surveys will also provide unprecedented information on the static radio sky. I will present results from three large ATA surveys (the Fly's Eye survey, the ATA Twenty CM Survey (ATATS), and the Pi GHz Survey (PiGSS)) and several small ATA transient searches. Finally, I will discuss the landscape and opportunities for future instruments at centimeter wavelengths.
Chernobyl reactor transient simulation study
International Nuclear Information System (INIS)
Gaber, F.A.; El Messiry, A.M.
1988-01-01
This paper deals with a transient simulation study of the Chernobyl nuclear power station. The Chernobyl (RBMK) reactor is a graphite-moderated pressure tube type reactor. It is cooled by circulating light water that boils in the upper parts of vertical pressure tubes to produce steam. At equilibrium fuel irradiation, the RBMK reactor has a positive void reactivity coefficient. However, the fuel temperature coefficient is negative, and the net effect of a power change depends upon the power level. Under normal operating conditions the net effect (power coefficient) is negative at full power and becomes positive under certain transient conditions. A series of dynamic performance transient analyses for the RBMK reactor, a pressurized water reactor (PWR) and a fast breeder reactor (FBR) has been performed using digital simulator codes. The purpose of this transient study is to show that an accident of Chernobyl's severity would not occur in PWR or FBR nuclear power reactors. This follows from the study of the inherent stability of the RBMK, PWR and FBR under certain transient conditions, which is related to the effect of feedback reactivity. The power distribution stability of the graphite RBMK reactor is difficult to maintain throughout its entire life, so the reactor has an inherent instability. The PWR has a larger negative temperature coefficient of reactivity; therefore the PWR by itself has a large amount of natural stability and is inherently safe. The FBR has a positive sodium expansion coefficient and therefore insufficient stability. It has been concluded that the PWR offers safer operation than the FBR and RBMK reactors
Regularity and irreversibility of weekly travel behavior
Kitamura, R.; van der Hoorn, A.I.J.M.
1987-01-01
Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.
Regular and context-free nominal traces
DEFF Research Database (Denmark)
Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca
2017-01-01
Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...
Faster 2-regular information-set decoding
Bernstein, D.J.; Lange, T.; Peters, C.P.; Schwabe, P.; Chee, Y.M.
2011-01-01
Fix positive integers B and w. Let C be a linear code over F_2 of length Bw. The 2-regular-decoding problem is to find a nonzero codeword consisting of w length-B blocks, each of which has Hamming weight 0 or 2. This problem appears in attacks on the FSB (fast syndrome-based) hash function and
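The 2-regular-decoding problem stated above can be illustrated with a brute-force search over a toy code (a minimal sketch; the generator matrix and block sizes below are invented for illustration and are far smaller than anything cryptographically relevant):

```python
import itertools

B, W = 3, 2                     # block length B, W blocks -> code length 6 (toy sizes)
G = [                           # invented generator matrix over F_2, rows = basis codewords
    [1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 0, 1, 1, 1],
]

def encode(msg):
    """Codeword for message msg: GF(2) linear combination of the rows of G."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def is_2_regular(word):
    """Check that every length-B block has Hamming weight 0 or 2."""
    return all(sum(word[i * B:(i + 1) * B]) in (0, 2) for i in range(W))

def find_2_regular():
    """Exhaustive search over all 2^k messages for a nonzero 2-regular codeword."""
    for msg in itertools.product([0, 1], repeat=len(G)):
        cw = encode(msg)
        if any(cw) and is_2_regular(cw):
            return cw
    return None
```

The exhaustive search is exponential in the code dimension; information-set decoding, the subject of the paper, is a way of doing substantially better than this for real parameter sizes.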
Complexity in union-free regular languages
Czech Academy of Sciences Publication Activity Database
Jirásková, G.; Masopust, Tomáš
2011-01-01
Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords : Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html
Regular Gleason Measures and Generalized Effect Algebras
Dvurečenskij, Anatolij; Janda, Jiří
2015-12-01
We study measures, finitely additive measures, regular measures, and σ-additive measures that can attain even infinite values on the quantum logic of a Hilbert space. We show when particular classes of non-negative measures can be studied in the frame of generalized effect algebras.
Regularization of finite temperature string theories
International Nuclear Information System (INIS)
Leblanc, Y.; Knecht, M.; Wallet, J.C.
1990-01-01
The tachyonic divergences occurring in the free energy of various string theories at finite temperature are eliminated through the use of regularization schemes and analytic continuations. For closed strings, we obtain finite expressions which, however, develop an imaginary part above the Hagedorn temperature, whereas open string theories are still plagued with dilatonic divergences. (orig.)
A Sim(2) invariant dimensional regularization
Directory of Open Access Journals (Sweden)
J. Alfaro
2017-09-01
Full Text Available We introduce a Sim(2) invariant dimensional regularization of loop integrals. Then we can compute the one-loop quantum corrections to the photon self-energy, electron self-energy and vertex in the Electrodynamics sector of the Very Special Relativity Standard Model (VSRSM).
Continuum regularized Yang-Mills theory
International Nuclear Information System (INIS)
Sadun, L.A.
1987-01-01
Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.
Gravitational lensing by a regular black hole
International Nuclear Information System (INIS)
Eiroa, Ernesto F; Sendra, Carlos M
2011-01-01
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Analytic stochastic regularization and gauge invariance
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1986-05-01
A proof that analytic stochastic regularization breaks gauge invariance is presented. This is done by an explicit one-loop calculation of the vacuum polarization tensor in scalar electrodynamics, which turns out not to be transverse. The counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization are also analysed. (Author) [pt
Annotation of regular polysemy and underspecification
DEFF Research Database (Denmark)
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Stabilization, pole placement, and regular implementability
Belur, MN; Trentelman, HL
In this paper, we study control by interconnection of linear differential systems. We give necessary and sufficient conditions for regular implementability of a-given linear, differential system. We formulate the problems of stabilization and pole placement as problems of finding a suitable,
12 CFR 725.3 - Regular membership.
2010-01-01
... UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY § 725.3 Regular membership. (a) A natural person credit....5(b) of this part, and forwarding with its completed application funds equal to one-half of this... 1, 1979, is not required to forward these funds to the Facility until October 1, 1979. (3...
Supervised scale-regularized linear convolutionary filters
DEFF Research Database (Denmark)
Loog, Marco; Lauze, Francois Bernard
2017-01-01
also be solved relatively efficiently. All in all, the idea is to properly control the scale of a trained filter, which we solve by introducing a specific regularization term into the overall objective function. We demonstrate, on an artificial filter learning problem, the capabilities of our basic...
On regular riesz operators | Raubenheimer | Quaestiones ...
African Journals Online (AJOL)
The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...
Regularized Discriminant Analysis: A Large Dimensional Study
Yang, Xiaoke
2018-04-28
In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis is assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that only depends on the data statistical parameters and dimensions. This result not only establishes mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the underlying statistics are unknown. We benchmark the performance of our proposed consistent estimator against the classical estimator on synthetic data. The observations demonstrate that the general estimator outperforms others in terms of mean squared error (MSE).
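The RDA family described in this abstract can be sketched in a few lines (a minimal illustration, not the thesis's estimator; the two-parameter shrinkage below follows the common Friedman-style convention, where `lam` blends each class covariance with the pooled one and `gamma` shrinks the result toward a scaled identity):

```python
import numpy as np

def rda_fit(X, y, lam=0.5, gamma=0.1):
    """Fit a regularized discriminant analysis model.
    lam=1 gives a pooled-covariance (RLDA-like) model, lam=0 a
    per-class-covariance (RQDA-like) one; gamma adds ridge-type shrinkage."""
    classes = np.unique(y)
    n, p = X.shape
    means, covs, priors = {}, {}, {}
    pooled = np.zeros((p, p))
    for k in classes:
        Xk = X[y == k]
        means[k] = Xk.mean(axis=0)
        covs[k] = np.cov(Xk, rowvar=False)
        pooled += (len(Xk) - 1) * covs[k]
        priors[k] = len(Xk) / n
    pooled /= n - len(classes)
    for k in classes:
        S = (1 - lam) * covs[k] + lam * pooled              # class/pooled blend
        covs[k] = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
    return classes, means, covs, priors

def rda_predict(model, X):
    """Assign each row of X to the class with the largest Gaussian discriminant."""
    classes, means, covs, priors = model
    scores = []
    for k in classes:
        d = X - means[k]
        _, logdet = np.linalg.slogdet(covs[k])
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(covs[k]), d)
        scores.append(-0.5 * logdet - 0.5 * maha + np.log(priors[k]))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]
```

The thesis's contribution concerns the asymptotic behavior of exactly this kind of classifier when both `n` and `p` grow proportionally; the sketch only shows the finite-sample decision rule.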
Bit-coded regular expression parsing
DEFF Research Database (Denmark)
Nielsen, Lasse; Henglein, Fritz
2011-01-01
the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
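The idea of bit-coding a parse can be illustrated with a tiny greedy matcher (a sketch under one common convention, not the paper's algorithm: an alternation emits 0 for the left branch and 1 for the right, and a Kleene star emits 0 before each iteration and a final 1 to stop):

```python
def parses(e, s, i):
    """Yield (bits, j) for each way AST e matches s[i:j], greedy branch first."""
    op = e[0]
    if op == 'chr':
        if s[i:i + 1] == e[1]:
            yield [], i + 1
    elif op == 'seq':
        for b1, j in parses(e[1], s, i):
            for b2, k in parses(e[2], s, j):
                yield b1 + b2, k
    elif op == 'alt':                      # 0 = left alternative, 1 = right
        for b, j in parses(e[1], s, i):
            yield [0] + b, j
        for b, j in parses(e[2], s, i):
            yield [1] + b, j
    elif op == 'star':                     # 0 = one more iteration, 1 = stop
        for b, j in parses(e[1], s, i):
            if j > i:                      # forbid empty iterations
                for b2, k in parses(e, s, j):
                    yield [0] + b + b2, k
        yield [1], i

def bitcode(e, s):
    """Bit coding of the greedy parse of the whole string s, or None."""
    for b, j in parses(e, s, 0):
        if j == len(s):
            return b
    return None

# (a|b)* as an AST: two iterations (left branch, then right branch), then stop.
ast = ('star', ('alt', ('chr', 'a'), ('chr', 'b')))
```

Here `bitcode(ast, "ab")` returns `[0, 0, 0, 1, 1]`; together with the regular expression, those five bits suffice to reconstruct the full parse tree, which is the compression the paper exploits.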
Understanding Great Earthquakes in Japan's Kanto Region
Kobayashi, Reiji; Curewitz, Daniel
2008-10-01
Third International Workshop on the Kanto Asperity Project; Chiba, Japan, 16-19 February 2008; The 1703 (Genroku) and 1923 (Taisho) earthquakes in Japan's Kanto region (M 8.2 and M 7.9, respectively) caused severe damage in the Tokyo metropolitan area. These great earthquakes occurred along the Sagami Trough, where the Philippine Sea slab is subducting beneath Japan. Historical records, paleoseismological research, and geophysical/geodetic monitoring in the region indicate that such great earthquakes will repeat in the future.
Earthquake-triggered landslides in southwest China
X. L. Chen; Q. Zhou; H. Ran; R. Dong
2012-01-01
Southwest China is located on the southeastern margin of the Tibetan Plateau and is a region of high seismic activity. Historically, strong earthquakes that occurred here usually generated many landslides and caused destructive damage. This paper introduces several earthquake-triggered landslide events in this region and describes their characteristics. Also, the historical data of earthquakes with a magnitude of 7.0 or greater, having occurred in this region, is col...
Tetravalent one-regular graphs of order 4p^2
DEFF Research Database (Denmark)
Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan
2014-01-01
A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p^2, where p is a prime, are classified.
Retrospective analysis of the Spitak earthquake
Directory of Open Access Journals (Sweden)
A. K. Tovmassian
1995-06-01
Full Text Available Based on the retrospective analysis of numerous data and studies of the Spitak earthquake, the present work attempts to shed light on different aspects of that catastrophic seismic event which occurred in Northern Armenia on December 7, 1988. The authors follow a chronological order of presentation, namely: changes in the geosphere, atmosphere and biosphere during the preparation of the Spitak earthquake; foreshocks; the main shock; aftershocks; focal mechanisms; historical seismicity; the seismotectonic position of the source; strong motion records; site effects; the macroseismic effect; collapse of buildings and structures; rescue activities; earthquake consequences; and the lessons of the Spitak earthquake.
Smoking prevalence increases following Canterbury earthquakes.
Erskine, Nick; Daley, Vivien; Stevenson, Sue; Rhodes, Bronwen; Beckert, Lutz
2013-01-01
A magnitude 7.1 earthquake hit Canterbury in September 2010. This earthquake and associated aftershocks took the lives of 185 people and drastically changed residents' living, working, and social conditions. To explore the impact of the earthquakes on smoking status and levels of tobacco consumption in the residents of Christchurch, semistructured interviews were carried out in two city malls and the central bus exchange 15 months after the first earthquake. A total of 1001 people were interviewed. In August 2010, prior to any earthquake, 409 (41%) participants had never smoked, 273 (27%) were currently smoking, and 316 (32%) were ex-smokers. Since the September 2010 earthquake, 76 (24%) of the 316 ex-smokers had smoked at least one cigarette and 29 (38.2%) had smoked more than 100 cigarettes. Of the 273 participants who were current smokers in August 2010, 93 (34.1%) had increased consumption following the earthquake, 94 (34.4%) had not changed, and 86 (31.5%) had decreased their consumption. 53 (57%) of the 93 people whose consumption increased cited the earthquake and subsequent lifestyle changes as a reason for smoking more. Overall, 24% of ex-smokers resumed smoking following the earthquake, resulting in increased smoking prevalence, and tobacco consumption levels increased in around one-third of current smokers.
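The counts and percentages reported in this abstract are mutually consistent to within rounding, which can be checked directly (the labels are ours, taken from the text above):

```python
# (label, numerator, denominator, reported %) triples from the abstract
reported = [
    ("never smoked",              409, 1001, 41),
    ("current smokers",           273, 1001, 27),
    ("ex-smokers",                316, 1001, 32),
    ("ex-smokers who relapsed",    76,  316, 24),
    ("relapsed, >100 cigarettes",  29,   76, 38.2),
    ("smokers who increased",      93,  273, 34.1),
    ("smokers unchanged",          94,  273, 34.4),
    ("smokers who decreased",      86,  273, 31.5),
    ("increase blamed on quake",   53,   93, 57),
]
for label, num, den, pct in reported:
    assert abs(100 * num / den - pct) < 1.0, label  # each within rounding error
```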
Thermal infrared anomalies of several strong earthquakes.
Wei, Congxin; Zhang, Yuansheng; Guo, Xiao; Hui, Shaoxing; Qin, Manzhong; Zhang, Ying
2013-01-01
In the history of earthquake thermal infrared research, it is undeniable that before and after strong earthquakes there are significant thermal infrared anomalies, which have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we studied the characteristics of thermal radiation observed before and after 8 great earthquakes with magnitudes of Ms 7.0 or greater, using satellite infrared remote sensing information. We used new types of data and methods to extract the useful anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after earthquakes for all cases. The overall behavior of the anomalies includes two main stages: expanding first and narrowing later. We easily extracted and identified such seismic anomalies by the method of "time-frequency relative power spectrum." (2) There exist evident and distinct characteristic periods and magnitudes of abnormal thermal radiation for each case. (3) Thermal radiation anomalies are closely related to the geological structure. (4) Thermal radiation has obvious characteristics in abnormal duration, range, and morphology. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting.
Real Time Earthquake Information System in Japan
Doi, K.; Kato, T.
2003-12-01
An early earthquake notification system in Japan was developed by the Japan Meteorological Agency (JMA), the governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public, by locating an earthquake and estimating its magnitude as quickly as possible. Later, a system for prompt provision of seismic intensity information, as an index of the degree of disaster caused by strong ground motion, was also developed, so that the governmental organizations concerned can decide whether to launch an emergency response. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) A prompt report of the occurrence of a large earthquake and the major seismic intensities it caused, about two minutes after the earthquake. 2) A tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on the hypocenter and magnitude of the earthquake, the seismic intensity at each observation station, and the times of high tides in addition to the expected tsunami arrival times, in 5-7 minutes. To issue the information above, JMA has established:
- an advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation, including about 2,800 seismic intensity stations maintained by local governments;
- data telemetry networks via landlines and partly via a satellite communication link;
- real-time data processing techniques, for example the automatic calculation of earthquake location and magnitude and the database-driven method for quantitative tsunami estimation; and
- dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally
Impact- and earthquake- proof roof structure
International Nuclear Information System (INIS)
Shohara, Ryoichi.
1990-01-01
Building roofs are constructed from roof slabs, an earthquake-proof layer on their upper surface, and an impact-proof layer of iron-reinforced concrete disposed above that. Since the roofs form an earthquake-proof structure, with building dampers loaded on the upper surface of the slabs by the concrete layer, the seismic input of earthquakes to the building is moderated, and the impact-proof layer ensures safety against external events such as earthquakes or airplane crashes in important facilities such as reactor buildings. (T.M.)
A minimalist model of characteristic earthquakes
DEFF Research Database (Denmark)
Vázquez-Prada, M.; González, Á.; Gómez, J.B.
2002-01-01
In a spirit akin to the sandpile model of self-organized criticality, we present a simple statistical model of the cellular-automaton type which simulates the role of an asperity in the dynamics of a one-dimensional fault. This model produces an earthquake spectrum similar to the characteristic-earthquake behaviour of some seismic faults. This model, which has no parameter, is amenable to an algebraic description as a Markov chain. This possibility illuminates some important results, obtained by Monte Carlo simulations, such as the earthquake size-frequency relation and the recurrence time of the characteristic earthquake.
Global Significant Earthquake Database, 2150 BC to present
National Oceanic and Atmospheric Administration, Department of Commerce — The Significant Earthquake Database is a global listing of over 5,700 earthquakes from 2150 BC to the present. A significant earthquake is classified as one that...
HEDL experimental transient overpower program
International Nuclear Information System (INIS)
Hikido, T.; Culley, G.E.
1976-01-01
HEDL is conducting a series of experiments to evaluate the performance of Fast Flux Test Facility (FFTF) prototypic fuel pins up to the point of cladding breach. A primary objective of the program is to demonstrate the adequacy of fuel pin and Plant Protective System (PPS) designs for terminated transients. Transient tests of prototypic FFTF fuel pins previously irradiated in the Experimental Breeder Reactor-II (EBR-II) have demonstrated the adequacy of the PPS and fuel pin designs, and indicate that a very substantial margin exists between PPS-terminated transients and the conditions required to produce fuel pin cladding failure. Additional experiments are planned to extend the database to high-burnup, high-fluence fuel pin specimens.
Transient voltage oscillations in coils
International Nuclear Information System (INIS)
Chowdhuri, P.
1985-01-01
Magnet coils may be excited into internal voltage oscillations by transient voltages. Such oscillations may electrically stress the magnet's dielectric components to many times their normal stress. This may precipitate a dielectric failure, with attendant prolonged loss of service and costly repair work. It is therefore important to know the natural frequencies of oscillation of a magnet during the design stage, and to determine whether the expected switching transient voltages can excite the magnet into high-voltage internal oscillations. The series capacitance of a winding significantly affects its natural frequencies. However, the series capacitance is difficult to calculate, because it may comprise a complex capacitance network consisting of the intra- and inter-coil turn-to-turn capacitances of the coil sections. A method of calculating the series capacitance of a winding is proposed; it is rigorous but simple to execute. The time-varying transient voltages along the winding are also calculated.
Transient analysis of DTT rakes
International Nuclear Information System (INIS)
Kamath, P.S.; Lahey, R.T. Jr.
1981-01-01
This paper presents an analytical model for the determination of the cross-sectionally averaged transient mass flux of a two-phase fluid flowing in a conduit instrumented by a Drag-Disk Turbine Transducer (DTT) Rake and a multibeam gamma densitometer. Parametric studies indicate that for a typical blowdown transient, dynamic effects such as rotor inertia can be important for the turbine-meter. In contrast, for the drag-disk, a frequency response analysis showed that the quasisteady solution is valid below a forcing frequency of about 10 Hz, which is faster than the time scale normally encountered during blowdowns. The model showed reasonably good agreement with full scale transient rake data, where the flow regimes were mostly homogeneous or stratified, thus indicating that the model is suitable for the analysis of a DTT rake. (orig.)
Can static regular black holes form from gravitational collapse?
International Nuclear Information System (INIS)
Zhang, Yiyang; Zhu, Yiwei; Modesto, Leonardo; Bambi, Cosimo
2015-01-01
Starting from the Oppenheimer-Snyder model, we know how in classical general relativity the gravitational collapse of matter forms a black hole with a central spacetime singularity. It is widely believed that the singularity must be removed by quantum-gravity effects. Some static quantum-inspired singularity-free black hole solutions have been proposed in the literature, but when one considers simple examples of gravitational collapse the classical singularity is replaced by a bounce, after which the collapsing matter expands forever. There are three possible explanations: (i) the static regular black hole solutions are not physical, in the sense that they cannot be realized in Nature; (ii) the final product of the collapse is not unique, but depends on the initial conditions; or (iii) boundary effects play an important role and our simple models miss important physics. In the latter case, after proper adjustment, the bouncing solution would approach the static one. We argue that the ''correct answer'' may be related to the appearance of a ghost state in de Sitter spacetimes with super-Planckian mass. Our black holes indeed have a de Sitter core, and the ghost would make these configurations unstable. We therefore believe that these static black hole solutions represent a transient phase of gravitational collapse but never survive as asymptotic states. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Lu, S; Streit, R D; Chou, C K
1980-01-01
This report summarizes work performed for the U.S. Nuclear Regulatory Commission (NRC) by the Load Combination Program at the Lawrence Livermore National Laboratory to establish a technical basis for the NRC to use in reassessing its requirement that earthquake and large loss-of-coolant accident (LOCA) loads be combined in the design of nuclear power plants. A systematic probabilistic approach is used to treat the random nature of earthquake and transient loading to estimate the probability of large LOCAs that are directly and indirectly induced by earthquakes. A large LOCA is defined in this report as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). Unit 1 of the Zion Nuclear Power Plant, a four-loop PWR-1, is used for this study. To estimate the probability of a large LOCA directly induced by earthquakes, only fatigue crack growth resulting from the combined effects of thermal, pressure, seismic, and other cyclic loads is considered. Fatigue crack growth is simulated with a deterministic fracture mechanics model that incorporates stochastic inputs of initial crack size distribution, material properties, stress histories, and leak detection probability. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete pipe rupture. A limited investigation involving engineering judgment of a double-ended guillotine break indirectly induced by an earthquake is also reported. (author)
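The simulation approach described, deterministic crack growth driven by a stochastic initial crack size, can be sketched as follows. The Paris-law constants, stress-intensity model, and lognormal distribution below are invented placeholders for illustration, not values from the report:

```python
import math
import random

def grows_to_rupture(a0, cycles, C=1e-11, m=3.0, stress=100.0, a_crit=0.02):
    """Deterministic Paris-law growth da/dN = C * (dK)**m for a crack of
    initial depth a0 (m), with a toy stress-intensity range
    dK = stress * sqrt(pi * a). Returns True if the crack reaches the
    critical depth a_crit (rupture) within the given number of cycles."""
    a = a0
    for _ in range(cycles):
        dK = stress * math.sqrt(math.pi * a)
        a += C * dK ** m
        if a >= a_crit:
            return True
    return False

def rupture_probability(n_trials=1000, cycles=10000, seed=0):
    """Monte Carlo over a lognormal initial-crack-size distribution
    (median 1 mm): the fraction of sampled cracks that rupture."""
    rng = random.Random(seed)
    fails = sum(
        grows_to_rupture(rng.lognormvariate(math.log(1e-3), 0.5), cycles)
        for _ in range(n_trials)
    )
    return fails / n_trials
```

With these placeholder constants, small sampled cracks essentially never reach the critical size, mirroring the report's qualitative conclusion that complete rupture is far less likely than a detectable leak.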
Earthquake Risk Management of Underground Lifelines in the Urban Area of Catania
International Nuclear Information System (INIS)
Grasso, S.; Maugeri, M.
2008-01-01
Lifelines typically include the following five utility networks: potable water, sewage, natural gas, electric power, telecommunications, and transportation systems. The response of lifeline systems, like gas and water networks, during a strong earthquake can be conveniently evaluated with the estimated average number of ruptures per km of pipe. These ruptures may be caused either by fault rupture crossings, or by permanent deformations of the soil mass (landslides, liquefaction), or by transient soil deformations caused by seismic wave propagation. The possible consequences of damaging earthquakes on transportation systems may be the reduction or interruption of traffic flow, as well as the impact on the emergency response and on recovery assistance. A critical element in emergency management is the closure of roads due to fallen obstacles and debris of collapsed buildings. The earthquake-induced damage to buried pipes is expressed in terms of repair rate (RR), defined as the number of repairs divided by the pipe length (km) exposed to a particular level of seismic demand; this number is a function of the pipe material (and joint type), of the pipe diameter, and of the ground shaking level, measured in terms of peak horizontal ground velocity (PGV) or permanent ground displacement (PGD). The development of damage algorithms for buried pipelines is primarily based on empirical evidence, tempered with engineering judgment and sometimes by analytical formulations. For the city of Catania, in the present work use has been made of the correlation between RR and peak horizontal ground velocity by the American Lifelines Alliance (ALA, 2001), for the verification of main buried pipelines. The performance of the main buried distribution networks has been evaluated for the Level I earthquake scenario (January 11, 1693 event, I = XI, M 7.3) and for the Level II earthquake scenario (February 20, 1818 event, I = IX, M 6.2). Seismic damage scenario of main gas pipelines and
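The PGV-based fragility relation can be sketched with the commonly quoted ALA (2001) backbone curve; the default K1 and the unit conversions below are illustrative assumptions and should be checked against the original ALA tables before any real use:

```python
def repair_rate(pgv_in_s, k1=1.0):
    """Expected repairs per 1000 ft of pipe as a function of peak ground
    velocity (in/s), after the ALA (2001) backbone relation
    RR = K1 * 0.00187 * PGV. K1 adjusts the rate for pipe material,
    joint type, and diameter (illustrative default K1 = 1)."""
    return k1 * 0.00187 * pgv_in_s

def expected_repairs(length_km, pgv_cm_s, k1=1.0):
    """Expected repair count on a pipeline segment: convert PGV from
    cm/s to in/s and length from km to thousands of feet."""
    return repair_rate(pgv_cm_s / 2.54, k1) * length_km * 3.28084
```

The expected count scales linearly in both exposed length and shaking level, which is why the average number of ruptures per km is a convenient performance measure for a whole network.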
Transient analysis of multicavity klystrons
International Nuclear Information System (INIS)
Lavine, T.L.; Miller, R.H.; Morton, P.L.; Ruth, R.D.
1988-09-01
We describe an analytic model of transients in multicavity klystron output power and phase. Cavities are modeled as resonant circuits, while bunching of the beam is modeled using linear space-charge wave theory. Our analysis has been implemented in a computer program which we use in designing multicavity klystrons with stable output power and phase. As examples, we present transient analyses of a relativistic klystron using a magnetic pulse-compression modulator, and of a conventional klystron designed to use phase-shifting techniques for RF pulse compression. 4 refs., 4 figs
Transient formation of forbidden lines
International Nuclear Information System (INIS)
Rosmej, F.B.; Rosmej, O.N.
1996-01-01
An explanation of anomalously long time scales in the transient formation of forbidden lines is proposed. The concept is based on a collisionally induced density dependence of the relaxation times of metastable level populations in transient plasma. Generalization leads to an incorporation of diffusion phenomena. We demonstrate this new concept for the simplest atomic system: the He-like isoelectronic sequence. A new interpretation of the observed long duration and anomalously high intensity of spin-forbidden emission in hot plasmas is given. (author)
A Catalog of Coronal "EIT Wave" Transients
Thompson, B. J.; Myers, D. C.
2009-01-01
Solar and Heliospheric Observatory (SOHO) Extreme ultraviolet Imaging Telescope (EIT) data have been visually searched for coronal "EIT wave" transients over the period beginning from 1997 March 24 and extending through 1998 June 24. The dates covered start at the beginning of regular high-cadence (more than one image every 20 minutes) observations, ending at the four-month interruption of SOHO observations in mid-1998. One hundred and seventy-six events are included in this catalog. The observations range from "candidate" events, which were either weak or had insufficient data coverage, to events which were well defined and were clearly distinguishable in the data. Included in the catalog are times of the EIT images in which the events are observed, diagrams indicating the observed locations of the wave fronts and associated active regions, and the speeds of the wave fronts. The measured speeds of the wave fronts varied from less than 50 to over 700 km s^-1, with "typical" speeds of 200-400 km s^-1.
Seismicity map tools for earthquake studies
Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos
2014-05-01
We report on the development of a new online set of tools, built on Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, JavaScript, and MySQL) and its new tools on a database of earthquake data. The platform allows us to carry out statistical and deterministic analyses of earthquake data on Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw on the map line segments, multiple straight lines horizontally and vertically, and multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and of earthquake cluster shift between segments in space. The platform offers many filters, for example for plotting selected magnitude ranges or time periods. The plotting facility supports statistical plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, calculation of the 'b' value, etc. What is novel in the platform is the additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools, we have studied the spatial distribution trends of many earthquakes, and we show here for the first time a link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.
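The 'b'-value computation such a platform offers can be sketched with the standard Aki maximum-likelihood estimator; the catalog, completeness magnitude, and bin width below are illustrative, not the platform's actual implementation:

```python
import math

def b_value(magnitudes, mag_complete):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965):
    b = log10(e) / (mean(M) - Mc), using only events with M >= Mc."""
    above = [m for m in magnitudes if m >= mag_complete]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - mag_complete)

def magnitude_histogram(magnitudes, bin_width=0.5):
    """Event counts per magnitude bin, as in the platform's histogram plots."""
    hist = {}
    for m in magnitudes:
        lo = round(m // bin_width * bin_width, 1)  # floor into a bin
        hist[lo] = hist.get(lo, 0) + 1
    return dict(sorted(hist.items()))
```

For a catalog obeying the Gutenberg-Richter law with b = 1, magnitudes above Mc average Mc + log10(e), which the estimator inverts.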
Spatial Evaluation and Verification of Earthquake Simulators
Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.
2017-06-01
In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of the fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast-model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
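The power-law smoothing idea can be sketched as follows: each simulated event contributes a rate to every grid cell that decays with epicentral distance. The kernel exponent and smoothing distance are hypothetical placeholders, not the paper's calibrated values:

```python
def power_law_rate_map(events, grid, q=1.5, d=1.0):
    """Smooth simulated seismicity onto a grid: each event at (ex, ey)
    contributes a rate ~ (r^2 + d^2)^(-q) to a cell at distance r,
    an ETAS-like spatial kernel. Coordinates are in km; d is a
    hypothetical smoothing scale. Returns normalized cell rates."""
    rates = []
    for gx, gy in grid:
        total = 0.0
        for ex, ey in events:
            r2 = (gx - ex) ** 2 + (gy - ey) ** 2
            total += (r2 + d ** 2) ** (-q)
        rates.append(total)
    norm = sum(rates)
    return [r / norm for r in rates]
```

Because every cell receives a nonzero rate, observed epicenters that fall off the modeled faults still score against the forecast, which is exactly the discrepancy the smoothing is meant to absorb.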
Save, H.; Bettadpur, S. V.
2013-12-01
It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual striping while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process that uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.
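The regularization at the core of this approach can be illustrated with a generic Tikhonov-regularized least-squares solve; this is a minimal sketch of the technique, not the CSR processing chain:

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam * ||x||^2 via the normal
    equations (A^T A + lam * I) x = A^T y. Larger lam damps the
    poorly constrained (stripe-producing) components at the cost
    of attenuating some signal."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

The trade-off is visible in the solution norm: increasing lam shrinks the estimate toward zero, which is why the paper emphasizes checking that all observed signal is still captured within the noise level.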
Energy Technology Data Exchange (ETDEWEB)
Wu, Bo; Wu, Zhenghui; Tam, Hoi Lam; Zhu, Furong, E-mail: frzhu@hkbu.edu.hk [Department of Physics, Institute of Advanced Materials, and Institute of Research and Continuing Education (Shenzhen), Hong Kong Baptist University, 224 Waterloo Road, Kowloon Tong, NT (Hong Kong)
2014-09-08
An opposite interfacial exciton dissociation behavior at the metal (Al)/organic cathode interface in regular and inverted organic solar cells (OSCs) was analyzed using transient photocurrent measurements. It is found that Al/organic contact in regular OSCs, made with the blend layer of poly[[4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b′]dithiophene-2,6-diyl] -[3-fluoro-2-[(2-ethylhexyl)carbonyl]thieno[3,4-b]-thiophenediyl
Geirsson, H.; La Femina, P. C.; DeMets, C.; Mattioli, G. S.; Hernández, D.
2013-05-01
We investigate the co-seismic deformation of two significant earthquakes that occurred along the Middle America trench in 2012. The August 27 Mw 7.3 El Salvador and September 5 Mw 7.6 Nicoya Peninsula, Costa Rica earthquakes were examined using a combination of episodic and continuous Global Positioning System (GPS) data. USGS finite fault models based on seismic data predict fundamentally different characteristics for the two ruptures. The El Salvador event occurred in a historical seismic gap and on the shallow segment of the Middle America Trench main thrust, rupturing a large area but with a low magnitude of slip. A small tsunami was observed along the coast in Nicaragua and El Salvador, additionally indicating near-trench rupture. Conversely, the Nicoya, Costa Rica earthquake was predicted to have an order of magnitude higher slip on a spatially smaller, deeper patch of the main thrust. We present results from episodic and continuous geodetic GPS measurements made in conjunction with the two earthquakes, including data from newly installed COCONet (Continuously Operating Caribbean GPS Observational Network) sites. Episodic GPS measurements made in El Salvador, Honduras, and Nicaragua following the earthquakes allow us to estimate the co-seismic deformation field from both earthquakes. Because of the small magnitude of the El Salvador earthquake and its shallow rupture, the observed co-seismic deformation is small. The Nicoya earthquake, in contrast, occurred directly beneath a seismic and geodetic network specifically designed to capture such events. The observed displacements exceeded 0.5 m, and there is a significant post-seismic transient following the earthquake. We use our estimated co-seismic offsets for both earthquakes to model the magnitude and spatial variability of slip for these two events.
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic-process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample-path properties of extremes and records. It emphasizes the primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enjoyment.
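The analytical notion at the heart of the book can be stated in one line (a standard definition, included here for reference): a measurable function $f : (0,\infty) \to (0,\infty)$ is regularly varying at infinity with index $\rho$ if

```latex
\lim_{t \to \infty} \frac{f(tx)}{f(t)} = x^{\rho} \qquad \text{for all } x > 0.
```

The case $\rho = 0$ gives the slowly varying functions; a Pareto tail $\bar{F}(x) = x^{-\alpha}$ is regularly varying with index $-\alpha$, which is what ties this theory to heavy-tailed extremes.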
Stream Processing Using Grammars and Regular Expressions
DEFF Research Database (Denmark)
Rasmussen, Ulrik Terp
disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as are required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present Kleenex, a language for expressing high-performance streaming string-processing programs as regular grammars with embedded semantic actions, and its compilation to streaming string transducers with worst-case linear-time performance. Its underlying theory is based on transducer decomposition into oracle...
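The buffering trade-off described above, emitting output as soon as the input prefix determines it and buffering only while a match could still grow, can be illustrated with a toy greedy tokenizer over a chunked stream. This uses plain Python `re` and a made-up token syntax, far simpler than the thesis's transducer machinery:

```python
import re

TOKEN = re.compile(r"[0-9]+|[a-z]+")

def stream_tokens(chunks):
    """Greedy leftmost-longest tokenization over a chunked input stream.
    A token touching the end of the buffer is held back, because the next
    chunk might extend it; everything earlier is emitted immediately."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        pos = 0
        for m in TOKEN.finditer(buf):
            if m.end() == len(buf):  # match may continue in the next chunk
                break
            yield m.group()
            pos = m.end()
        buf = buf[pos:]  # keep only the unresolved suffix
    for m in TOKEN.finditer(buf):  # end of input: flush what remains
        yield m.group()
```

Note how a number split across chunks ("12" then "3") is correctly emitted as a single token "123": the tokenizer buffered exactly as many symbols as were needed to resolve the choice.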
Describing chaotic attractors: Regular and perpetual points
Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz
2018-03-01
We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have recently been introduced to theoretical investigations, is thoroughly discussed and extended to new types of models. We analyze the correlation between regular and perpetual points, as well as their relation to phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points to locate attractors is indicated, along with its potential cause. The location of chaotic trajectories and of the sets of considered points is investigated, and a study of the stability of the systems is presented. A statistical analysis of observing the desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.
Chaos regularization of quantum tunneling rates
International Nuclear Information System (INIS)
Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward
2011-01-01
Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. In contrast, shapes that lead to completely chaotic trajectories yield tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least-square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. The algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and the regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
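A minimal sketch of the sum-space idea: the reproducing kernel of a sum of RKHSs is the sum of the kernels, so regularized regression in the sum space reduces to an ordinary kernel ridge solve with K = K_small + K_large. The kernel scales and regularization constant below are arbitrary placeholders, not the paper's tuned parameters:

```python
import numpy as np

def gauss_kernel(x1, x2, sigma):
    """Gaussian kernel matrix between 1-D sample arrays x1 and x2."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_sum_space(x, y, sigmas=(0.1, 2.0), lam=1e-3):
    """Solve (K1 + K2 + lam * I) alpha = y: the small-scale kernel handles
    high-frequency structure, the large-scale kernel the smooth trend."""
    K = sum(gauss_kernel(x, x, s) for s in sigmas)
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

def predict_sum_space(x_train, alpha, x_new, sigmas=(0.1, 2.0)):
    """Evaluate the fitted function at new points."""
    K = sum(gauss_kernel(x_new, x_train, s) for s in sigmas)
    return K @ alpha
```

A single-scale kernel must compromise between the two frequency regimes; the sum kernel avoids that compromise, which is the intuition behind the improved learning rate.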
Contour Propagation With Riemannian Elasticity Regularization
DEFF Research Database (Denmark)
Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.
2011-01-01
Purpose/Objective(s): Adaptive techniques allow for correction of spatial changes during the time course of fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image registration. ... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined ... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...
Thin accretion disk around regular black hole
Directory of Open Access Journals (Sweden)
QIU Tianqi
2014-08-01
Full Text Available Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So, it seems reasonable to further conjecture that no singularity at all exists in nature. In this paper, a regular black hole without a singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish between black holes.
Convex nonnegative matrix factorization with manifold regularization.
Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong
2015-03-01
Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
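The graph-regularized term at the heart of such methods can be illustrated in isolation (a sketch with an assumed k-NN graph construction, not the authors' GCNMF update rules): encodings V that vary smoothly over the data graph incur a small penalty tr(VᵀLV), while arbitrary encodings do not:

```python
import numpy as np

def knn_graph(X, k=3):
    # Symmetric 0/1 k-nearest-neighbour adjacency matrix
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros_like(d2)
    for i in range(len(X)):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0   # position 0 is the point itself
    return np.maximum(W, W.T)

def graph_penalty(V, W):
    # tr(V^T L V) = 0.5 * sum_ij W_ij ||v_i - v_j||^2, with L = D - W
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(V.T @ L @ V))

# Two well-separated clusters; a cluster-respecting encoding is penalty-free
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
W = knn_graph(X)
V_smooth = np.repeat([[1.0, 0.0], [0.0, 1.0]], 20, axis=0)
V_random = rng.random((40, 2))
```

Adding this penalty to the CNMF objective is what biases the factorization toward encodings that follow the intrinsic manifold rather than only minimizing reconstruction error.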
Foreshock sequences and short-term earthquake predictability on East Pacific Rise transform faults.
McGuire, Jeffrey J; Boettcher, Margaret S; Jordan, Thomas H
2005-03-24
East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion.
Transient response and radiation dose estimates for breaches to a spent fuel processing facility
Energy Technology Data Exchange (ETDEWEB)
Solbrig, Charles W., E-mail: soltechco@aol.com; Pope, Chad; Andrus, Jason
2014-08-15
Highlights: • We model doses received from a nuclear fuel facility from boundary leaks due to an earthquake. • The supplemental exhaust system (SES) starts after breach causing air to be sucked into the cell. • Exposed metal fuel burns increasing pressure and release of radioactive contamination. • Facility releases are small and much less than the limits showing costly refits are unnecessary. • The method presented can be used in other nuclear fuel processing facilities. - Abstract: This paper describes the analysis of the design basis accident for Idaho National Laboratory Fuel Conditioning Facility (FCF). The facility is used to process spent metallic nuclear fuel. This analysis involves a model of the transient behavior of the FCF inert atmosphere hot cell following an earthquake initiated breach of pipes passing through the cell boundary. Such breaches allow the introduction of air and subsequent burning of pyrophoric metals. The model predicts the pressure, temperature, volumetric releases, cell heat transfer, metal fuel combustion, heat generation rates, radiological releases and other quantities. The results show that releases from the cell are minimal and satisfactory for safety. This analysis method should be useful in other facilities that have potential for damage from an earthquake and could eliminate the need to back fit facilities with earthquake proof boundaries or lessen the cost of new facilities.
Transient response and radiation dose estimates for breaches to a spent fuel processing facility
International Nuclear Information System (INIS)
Solbrig, Charles W.; Pope, Chad; Andrus, Jason
2014-01-01
Highlights: • We model doses received from a nuclear fuel facility from boundary leaks due to an earthquake. • The supplemental exhaust system (SES) starts after breach causing air to be sucked into the cell. • Exposed metal fuel burns increasing pressure and release of radioactive contamination. • Facility releases are small and much less than the limits showing costly refits are unnecessary. • The method presented can be used in other nuclear fuel processing facilities. - Abstract: This paper describes the analysis of the design basis accident for Idaho National Laboratory Fuel Conditioning Facility (FCF). The facility is used to process spent metallic nuclear fuel. This analysis involves a model of the transient behavior of the FCF inert atmosphere hot cell following an earthquake initiated breach of pipes passing through the cell boundary. Such breaches allow the introduction of air and subsequent burning of pyrophoric metals. The model predicts the pressure, temperature, volumetric releases, cell heat transfer, metal fuel combustion, heat generation rates, radiological releases and other quantities. The results show that releases from the cell are minimal and satisfactory for safety. This analysis method should be useful in other facilities that have potential for damage from an earthquake and could eliminate the need to back fit facilities with earthquake proof boundaries or lessen the cost of new facilities.
Searching for geodetic transient slip signals along the Parkfield segment of the San Andreas Fault
Rousset, B.; Burgmann, R.
2017-12-01
The Parkfield section of the San Andreas fault is at the transition between a segment locked since the 1857 Mw 7.9 Fort Tejon earthquake to its south and a creeping segment to the north. It is particularly well instrumented. While many previous studies have focused on the coseismic and postseismic phases of the two most recent earthquake cycles, the interseismic phase exhibits interesting dynamics at the down-dip edge of the seismogenic zone, characterized by a very large number of low-frequency earthquakes (LFEs) with different behaviors depending on location. Interseismic fault creep rates appear to vary over a wide range of spatial and temporal scales, from the Earth's surface to the base of the crust. In this study, we take advantage of the dense Global Positioning System (GPS) network, with 77 continuous stations located within a circle of radius 80 km centered on Parkfield. We correct these time series for the co- and postseismic signals of the 2003 Mw 6.3 San Simeon and 2004 Mw 6.0 Parkfield earthquakes. We then cross-correlate the residual time series with synthetic slow-slip templates, following the approach of Rousset et al. (2017). Synthetic tests with transient events embedded in GPS time series with realistic noise show the detection limit of the method. In the application to real GPS time series, the highest correlation amplitudes are compared with micro-seismicity rates, as well as tremor and LFE observations.
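A toy version of the template-matching step (illustrative only; the actual method of Rousset et al. uses physically derived slow-slip templates across a full station network) slides a normalized transient template along a noisy displacement series and reads the detection time off the correlation peak:

```python
import numpy as np

def transient_template(length, width):
    # Smooth sigmoidal offset, zero-mean and unit-norm for correlation
    t = np.arange(length) - length / 2.0
    tpl = np.tanh(t / width)
    tpl -= tpl.mean()
    return tpl / np.linalg.norm(tpl)

rng = np.random.default_rng(2)
n, onset, win = 1000, 600, 200
series = 0.5 * rng.standard_normal(n)   # mm-scale white noise
series[onset:] += 3.0                   # 3 mm transient offset at sample 600
tpl = transient_template(win, 20)

# Sliding normalized correlation of each demeaned window with the template
corr = np.zeros(n - win)
for i in range(n - win):
    w = series[i:i + win] - series[i:i + win].mean()
    corr[i] = (w @ tpl) / (np.linalg.norm(w) + 1e-12)
detected = int(np.argmax(corr)) + win // 2   # window centre of best match
```

Real GPS noise is strongly time-correlated rather than white, which is why the paper's synthetic tests with realistic noise are needed to establish the detection limit.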
GEM - The Global Earthquake Model
Smolka, A.
2009-04-01
Over 500,000 people died in the last decade due to earthquakes and tsunamis, mostly in the developing world, where the risk is increasing due to rapid population growth. In many seismic regions, no hazard and risk models exist, and even where models do exist, they are intelligible only by experts, or available only for commercial purposes. The Global Earthquake Model (GEM) answers the need for an openly accessible risk management tool. GEM is an internationally sanctioned public private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) which will establish an authoritative standard for calculating and communicating earthquake hazard and risk, and will be designed to serve as the critical instrument to support decisions and actions that reduce earthquake losses worldwide. GEM will integrate developments on the forefront of scientific and engineering knowledge of earthquakes, at global, regional and local scale. The work is organized in three modules: hazard, risk, and socio-economic impact. The hazard module calculates probabilities of earthquake occurrence and resulting shaking at any given location. The risk module calculates fatalities, injuries, and damage based on expected shaking, building vulnerability, and the distribution of population and of exposed values and facilities. The socio-economic impact module delivers tools for making educated decisions to mitigate and manage risk. GEM will be a versatile online tool, with open source code and a map-based graphical interface. The underlying data will be open wherever possible, and its modular input and output will be adapted to multiple user groups: scientists and engineers, risk managers and decision makers in the public and private sectors, and the public-at-large. GEM will be the first global model for seismic risk assessment at a national and regional scale, and aims to achieve broad scientific participation and independence. Its development will occur in a
A short proof of increased parabolic regularity
Directory of Open Access Journals (Sweden)
Stephen Pankavich
2015-08-01
Full Text Available We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction diffusion equations, and nonlinear PDEs even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
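As a standard illustration of the kind of a priori estimate such proofs rest on (textbook material, not the paper's argument), consider the heat equation $u_t = \Delta u$ on a domain $\Omega$ with vanishing boundary values. Multiplying by $u$ and integrating by parts trades a time integral for one extra spatial derivative:

```latex
\frac{d}{dt}\|u(t)\|_{L^2}^2
  = 2\int_\Omega u\,u_t\,dx
  = 2\int_\Omega u\,\Delta u\,dx
  = -2\|\nabla u(t)\|_{L^2}^2,
\qquad\text{hence}\qquad
\|u(T)\|_{L^2}^2 + 2\int_0^T \|\nabla u(t)\|_{L^2}^2\,dt = \|u_0\|_{L^2}^2 .
```

So $u$ gains a derivative in the space-time norm; applying the same estimate inductively to spatial derivatives of $u$ (which solve the same equation) yields higher regularity, mirroring the inductive strategy described in the abstract.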
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
2008-01-01
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force; its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. Discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. In contrast, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
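The l1 minimization step can be sketched with plain iterative shrinkage-thresholding in place of SpaRSA (ISTA is a simpler relative of the same proximal-gradient family; the transfer matrix, force locations, and lam below are invented for illustration):

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1-norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    # Iterative shrinkage-thresholding for min_x 0.5||y - Ax||^2 + lam||x||_1
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(3)
n = 200
# Toy transfer matrix: convolution with a decaying oscillatory impulse response
h = np.exp(-0.05 * np.arange(n)) * np.cos(0.3 * np.arange(n))
A = np.zeros((n, n))
for j in range(n):
    A[j:, j] = h[:n - j]
force = np.zeros(n)
force[40], force[120] = 5.0, 3.0         # two impact forces in the time domain
y = A @ force + 0.01 * rng.standard_normal(n)
x_hat = ista(A, y, lam=0.05)
```

Here the dictionary is the Dirac (identity) basis, so sparsity is enforced directly in the time domain; swapping in a wavelet or cosine dictionary changes only the columns of the synthesis operator.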
Analytic stochastic regularization and gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Gomes, M.; Lima-Santos, A.
1987-04-01
We prove that analytic stochastic regularization breaks gauge invariance. This is done by an explicit one-loop calculation of the two-, three- and four-point vertex functions of the gluon field in scalar chromodynamics, which turn out not to be gauge invariant. We analyse the counterterm structure, Langevin equations and the construction of composite operators in the general framework of stochastic quantization. (author) [pt
Preconditioners for regularized saddle point matrices
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2011-01-01
Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml
Analytic stochastic regularization: gauge and supersymmetry theories
International Nuclear Information System (INIS)
Abdalla, M.C.B.
1988-01-01
Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is carried out at one-loop order. (author) [pt