WorldWideScience

Sample records for earthquake size distribution

  1. Break of slope in earthquake size distribution and creep rate along the San Andreas Fault system

    Science.gov (United States)

    Shebalin, P.; Narteau, C.; Vorobieva, I.

    2017-12-01

    Crustal faults accommodate slip either by a succession of earthquakes or by continuous slip, and in most instances both these seismic and aseismic processes coexist. Recorded seismicity and geodetic measurements are therefore two complementary data sets that together document ongoing deformation along active tectonic structures. Here we study the influence of stable sliding on earthquake statistics. We show that creep along the San Andreas Fault is responsible for a break of slope in the earthquake size distribution. This slope increases with an increasing creep rate for larger magnitude ranges, whereas it shows no systematic dependence on creep rate for smaller magnitude ranges. This is interpreted as a deficit of large events under conditions of faster creep, where seismic ruptures are less likely to propagate. These results suggest that the earthquake size distribution depends not only on the level of stress but also on the type of deformation.

  2. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    Science.gov (United States)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m3/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time over which the data ranges for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and, its inverse, return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number distributions per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = Cx^(-β), where p(x) is the cumulative number of events with size equal to or greater than size x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years.
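A power-law fit of the form p(x) = Cx^(-β) described in this abstract can be sketched in a few lines. This is an illustrative sketch on synthetic Pareto-distributed losses, not the authors' code, and all parameter values are hypothetical:

```python
import numpy as np

def fit_power_law(sizes):
    """Estimate C and beta in N(>=x) = C * x**(-beta) by least squares
    on the log-log cumulative (survival) counts."""
    x = np.sort(np.asarray(sizes, dtype=float))
    n_ge = np.arange(len(x), 0, -1)        # number of events >= each size
    slope, intercept = np.polyfit(np.log10(x), np.log10(n_ge), 1)
    return 10.0 ** intercept, -slope       # C (activity level), beta

# synthetic losses drawn from a Pareto law with beta = 1.0 (inverse-CDF sampling)
rng = np.random.default_rng(0)
losses = (1.0 - rng.random(5000)) ** (-1.0 / 1.0)
C, beta = fit_power_law(losses)
print(round(beta, 2))
```

Least squares on the log-log cumulative plot is the simplest estimator; for tail exponents, maximum-likelihood (Hill-type) estimators are generally more robust.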

  3. Recurrent frequency-size distribution of characteristic events

    Directory of Open Access Journals (Sweden)

    S. G. Abaimov

    2009-04-01

    Full Text Available Statistical frequency-size (frequency-magnitude) properties of earthquake occurrence play an important role in seismic hazard assessments. The behavior of earthquakes is represented by two different statistics: interoccurrent behavior in a region and recurrent behavior at a given point on a fault (or at a given fault). The interoccurrent frequency-size behavior has been investigated by many authors and generally obeys the power-law Gutenberg-Richter distribution to a good approximation. It is expected that the recurrent frequency-size behavior should obey different statistics. However, this problem has received little attention because historic earthquake sequences do not contain enough events to reconstruct the necessary statistics. To overcome this lack of data, this paper investigates the recurrent frequency-size behavior for several problems. First, the sequences of creep events on a creeping section of the San Andreas fault are investigated. The applicability of the Brownian passage-time, lognormal, and Weibull distributions to the recurrent frequency-size statistics of slip events is tested and the Weibull distribution is found to be the best-fit distribution. To verify this result the behaviors of numerical slider-block and sand-pile models are investigated and the Weibull distribution is confirmed as the applicable distribution for these models as well. Exponents β of the best-fit Weibull distributions for the observed creep event sequences and for the slider-block model are found to have similar values, ranging from 1.6 to 2.2, with the corresponding aperiodicities CV of the applied distribution ranging from 0.47 to 0.64. We also note similarities between recurrent time-interval statistics and recurrent frequency-size statistics.
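The Weibull fitting and aperiodicity calculation described above can be sketched as follows. The data are synthetic and the shape value 2.0 is merely chosen inside the reported 1.6-2.2 range, not taken from the study:

```python
import numpy as np
from math import gamma, sqrt
from scipy import stats

rng = np.random.default_rng(42)
beta_true = 2.0   # Weibull shape ("exponent" in the abstract), illustrative
events = stats.weibull_min.rvs(beta_true, scale=1.0, size=2000, random_state=rng)

# fit with the location fixed at zero, as appropriate for positive event sizes
shape, loc, scale = stats.weibull_min.fit(events, floc=0)

# aperiodicity CV = coefficient of variation of the fitted Weibull
cv = sqrt(gamma(1 + 2 / shape) / gamma(1 + 1 / shape) ** 2 - 1)
print(round(shape, 2), round(cv, 2))
```

For shape values in the reported 1.6-2.2 range, this CV formula gives roughly 0.64 down to 0.48, consistent with the 0.47-0.64 quoted above.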

  4. Measuring the size of an earthquake

    Science.gov (United States)

    Spence, W.; Sipkin, S.A.; Choy, G.L.

    1989-01-01

    Earthquakes range broadly in size. A rock-burst in an Idaho silver mine may involve the fracture of 1 meter of rock; the 1965 Rat Island earthquake in the Aleutian arc involved a 650-kilometer length of the Earth's crust. Earthquakes can be even smaller and even larger. If an earthquake is felt or causes perceptible surface damage, then its intensity of shaking can be subjectively estimated. But many large earthquakes occur in oceanic areas or at great focal depths and are either simply not felt or their felt pattern does not really indicate their true size.

  5. Extreme value distribution of earthquake magnitude

    Science.gov (United States)

    Zi, Jun Gan; Tung, C. C.

    1983-07-01

    Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
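The first step described above, deriving the maximum-magnitude distribution from an unspecified magnitude distribution F, uses the fact that the largest of n independent magnitudes has CDF F(m)^n. A minimal numerical sketch, assuming simple Gutenberg-Richter (exponential) magnitudes rather than the energy-release model of the paper:

```python
import numpy as np

def cdf_max_magnitude(m, n, b=1.0, mc=4.0):
    """P(largest of n magnitudes <= m), assuming i.i.d. Gutenberg-Richter
    (exponential) magnitudes above completeness mc: F(m)**n."""
    F = 1.0 - 10.0 ** (-b * (np.asarray(m, dtype=float) - mc))
    return np.clip(F, 0.0, 1.0) ** n

# median of the largest magnitude among 1000 events above M4 (b = 1)
grid = np.linspace(4.0, 9.0, 2001)
median_max = grid[np.searchsorted(cdf_max_magnitude(grid, n=1000), 0.5)]
print(round(median_max, 2))
```

The b-value, completeness magnitude, and event count here are illustrative, not parameters estimated in the paper.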

  6. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    Science.gov (United States)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since interevent-time distributions other than the Weibull are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test in order to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] I. Eliazar and J. Klafter, 2006
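The Kolmogorov-Smirnov screening of candidate interevent-time distributions mentioned above can be sketched as follows, on synthetic Weibull data. Note a caveat the sketch glosses over: p-values computed after fitting parameters to the same data are optimistic, so a Lilliefors-type correction would be needed in practice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical interevent times, drawn from a Weibull purely for illustration
dt = stats.weibull_min.rvs(1.5, scale=10.0, size=500, random_state=rng)

# fit each candidate distribution, then apply the Kolmogorov-Smirnov test
pvalues = {}
for dist in (stats.weibull_min, stats.gamma, stats.expon):
    params = dist.fit(dt, floc=0)          # location pinned at zero
    pvalues[dist.name] = stats.kstest(dt, dist.name, args=params).pvalue
    print(f"{dist.name:12s} p = {pvalues[dist.name]:.3f}")
```

With this setup the Weibull is not rejected while the exponential is, mirroring the model-selection logic described in the abstract.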

  7. Size distributions and failure initiation of submarine and subaerial landslides

    Science.gov (United States)

    ten Brink, Uri S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.

    2009-01-01

    Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size, with few smaller and larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit the area distribution of landslide sources along the Atlantic continental margin well, if we assume that the slope has been subjected to earthquakes of magnitude ≥ 6.3. Regions of submarine landslides whose area distributions obey inverse power laws may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. However, for a given earthquake magnitude, the total area
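The contrast drawn above, additive avalanche processes yielding power laws versus processes with a characteristic size yielding lognormals, can be illustrated with a toy Monte Carlo: a size built as a product of independent random factors is lognormal and has a well-defined characteristic scale. The factor meanings and values below are hypothetical, not the authors' simulation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# a size formed as the product of independent positive factors (for example
# shaking level, slope susceptibility, sediment thickness) is lognormal,
# hence has a characteristic scale -- unlike an additive avalanche cascade
factors = rng.lognormal(mean=0.0, sigma=0.5, size=(20000, 6))
areas = 1.0e4 * factors.prod(axis=1)   # m^2, arbitrary reference area

shape, loc, scale = stats.lognorm.fit(areas, floc=0)
print(round(shape, 2))   # log-std, expected near sqrt(6)*0.5 ~ 1.22
```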

  8. The GIS and analysis of earthquake damage distribution of the 1303 Hongtong M=8 earthquake

    Science.gov (United States)

    Gao, Meng-Tan; Jin, Xue-Shen; An, Wei-Ping; Lü, Xiao-Jian

    2004-07-01

    The geographic information system of the 1303 Hongtong M=8 earthquake has been established. Using the spatial analysis functions of GIS, the spatial distribution characteristics of the damage and the isoseismals of the earthquake are studied. By comparison with the standard earthquake-intensity attenuation relationship, anomalous damage distributions of the earthquake are identified, and their relationship with tectonics, site conditions and basins is analyzed. In this paper, the influence on ground motion of the earthquake source and of the underground structures near the source is also studied. The implications of the anomalous damage distribution for seismic zonation, earthquake-resistant design, earthquake prediction and earthquake emergency response are discussed.

  9. Distribution of incremental static stress caused by earthquakes

    Directory of Open Access Journals (Sweden)

    Y. Y. Kagan

    1994-01-01

    Full Text Available Theoretical calculations, simulations and measurements of rotation of earthquake focal mechanisms suggest that the stress in earthquake focal zones follows the Cauchy distribution, which is one of the stable probability distributions (with the value of the exponent α equal to 1). We review the properties of the stable distributions and show that the Cauchy distribution is expected to approximate the stress caused by earthquakes occurring over geologically long intervals of fault zone development. However, the stress caused by recent earthquakes recorded in instrumental catalogues should follow symmetric stable distributions with a value of α significantly less than one. This is explained by the fractal distribution of earthquake hypocentres: the dimension of a hypocentre set, δ, is close to zero for short-term earthquake catalogues and asymptotically approaches 2¼ for long time intervals. We use the Harvard catalogue of seismic moment tensor solutions to investigate the distribution of incremental static stress caused by earthquakes. The stress measured in the focal zone of each event is approximated by stable distributions. In agreement with theoretical considerations, the exponent value of the distribution approaches zero as the time span of the earthquake catalogue (ΔT) decreases. For large stress values α increases. We surmise that this is caused by the increase of δ for small inter-earthquake distances due to location errors.
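The role of the stability exponent α can be illustrated by sampling symmetric stable laws: α = 1 is the Cauchy case expected above for geologically long intervals, while α < 1 is the heavier-tailed case expected for short instrumental catalogs. A sketch using scipy's levy_stable (sample sizes and the α = 0.7 value are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# symmetric stable samples: alpha = 1 is the Cauchy case (long-term stress),
# alpha < 1 the heavier-tailed case expected for short instrumental catalogs
cauchy = stats.levy_stable.rvs(1.0, 0.0, size=20000, random_state=rng)
heavy = stats.levy_stable.rvs(0.7, 0.0, size=20000, random_state=rng)

# heavier tails show up as far larger extreme quantiles of |stress|
q_cauchy = np.quantile(np.abs(cauchy), 0.999)
q_heavy = np.quantile(np.abs(heavy), 0.999)
print(q_cauchy < q_heavy)
```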

  10. Correlation between hypocenter depth, antecedent precipitation and earthquake-induced landslide spatial distribution

    Science.gov (United States)

    Fukuoka, Hiroshi; Watanabe, Eisuke

    2017-04-01

    Since Keefer published his 1984 paper relating earthquake magnitude to the affected area and to the maximum epicentral/fault distance of induced landslide distributions, showing the envelope of plots, many studies on this topic have been conducted. It has generally been supposed that landslides are triggered by shallow quakes and that more landslides are likely to occur when heavy rainfall immediately precedes the quake. In order to confirm this, we have collected 22 case records of earthquake-induced landslide distributions in Japan and examined the effects of hypocenter depth and antecedent precipitation. The JMA (Japan Meteorological Agency) magnitudes of the cases range from 4.5 to 9.0. Analysis of hypocenter depth showed that deeper quakes cause wider distributions. Antecedent precipitation was evaluated using the Soil Water Index (SWI), which was developed by JMA for issuing landslide alerts. We could not find a meaningful correlation between SWI and the earthquake-induced landslide distribution. Additionally, we found that a smaller minimum size of collected landslides results in a wider distribution, especially between 1,000 and 100,000 m2.

  11. Analysis of the space, time and energy distribution of Vrancea earthquakes

    International Nuclear Information System (INIS)

    Radulian, M.; Popa, M.

    1995-01-01

    Statistical analysis of fractal properties of the space, time and energy distributions of Vrancea intermediate-depth earthquakes is performed on a homogeneous and complete data set. All events with magnitudes ML > 2.5 which occurred from 1974 to 1992 are considered. The 19-year time interval includes the major earthquakes of March 4, 1977, August 26, 1986 and May 30, 1990. The subducted plate, lying between 60 km and 180 km depth, is divided into four active zones with characteristic seismic activities. The correlations between the parameters defining the seismic activities in these zones are studied. The predictive properties of the parameters related to the stress distribution on the fault are analysed. The significant anomalies in the time and size distributions of earthquakes are emphasized. The correlations between the spatial distribution (fractal dimension), the frequency-magnitude distribution (b slope value) and the high-frequency energy radiated by the source (fall-off of the displacement spectra) are studied both at the scale of the whole seismogenic volume and at the scale of a specific active zone. The results of this study for the Vrancea earthquakes bring evidence in favour of the seismic source model with hierarchical inhomogeneities (Frankel, 1991) (Author) 8 Figs., 2 Tabs., 5 Refs

  12. Global Earthquake Hazard Frequency and Distribution

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Earthquake Hazard Frequency and Distribution is a 2.5 minute grid utilizing Advanced National Seismic System (ANSS) Earthquake Catalog data of actual...

  13. Coulomb Mechanics And Landscape Geometry Explain Landslide Size Distribution

    Science.gov (United States)

    Jeandet, L.; Steer, P.; Lague, D.; Davy, P.

    2017-12-01

    It is generally observed that the dimensions of large bedrock landslides follow power-law scaling relationships. In particular, the non-cumulative frequency distribution (PDF) of bedrock landslide area is well characterized by a negative power law above a critical size, with an exponent of about 2.4. However, the respective roles of bedrock mechanical properties, landscape shape and triggering mechanisms in the scaling properties of landslide dimensions are still poorly understood. Yet, unravelling the factors that control this distribution is required to better estimate the total volume of landslides triggered by large earthquakes or storms. To tackle this issue, we develop a simple probabilistic 1D approach to compute the PDF of rupture depths in a given landscape. The model is applied to randomly sampled points along hillslopes of studied digital elevation models. At each point location, the model determines the range of depths and angles leading to unstable rupture planes, by applying a simple Mohr-Coulomb rupture criterion only to the rupture planes that intersect the downhill surface topography. This model therefore accounts for both rock mechanical properties (friction and cohesion) and landscape shape. We show that this model leads to realistic landslide depth distributions, with a power law arising when the number of samples is high enough. The modeled PDFs of landslide size obtained for several landscapes match those from earthquake-driven landslide catalogues for the same landscapes. In turn, this allows us to invert the effective mechanical parameters, friction and cohesion, associated with those specific events, including the Chi-Chi, Wenchuan, Niigata and Gorkha earthquakes. The friction and cohesion ranges (25-35 degrees and 5-20 kPa, respectively) are in good agreement with previously inverted values. Our results demonstrate that reduced-complexity mechanics is efficient for modeling the distribution of unstable depths, and show the role of landscape variability in landslide size.
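A 1D Mohr-Coulomb stability calculation of the kind this model builds on can be sketched with the classic infinite-slope factor of safety. This is a simplified dry-slope sketch, and all parameter values are illustrative, not inverted values from the study:

```python
import numpy as np

def factor_of_safety(depth_m, slope_deg, cohesion_pa=10e3,
                     friction_deg=30.0, unit_weight=26e3):
    """Infinite-slope Mohr-Coulomb factor of safety for a dry planar
    failure at a given depth below the surface (FS < 1 => unstable).
    Parameter values are illustrative, not from the abstract."""
    t = np.radians(slope_deg)
    phi = np.radians(friction_deg)
    tau = unit_weight * depth_m * np.sin(t) * np.cos(t)    # driving shear stress
    sigma_n = unit_weight * depth_m * np.cos(t) ** 2       # normal stress
    return (cohesion_pa + sigma_n * np.tan(phi)) / tau

# cohesion stabilizes shallow planes: on a 40-degree slope, only depths
# beyond a critical value are unstable
depths = np.linspace(0.5, 30.0, 60)
unstable = depths[factor_of_safety(depths, 40.0) < 1.0]
print(unstable.min() if unstable.size else "all stable")
```

Scanning such a criterion over depth and angle at many sampled points, while keeping only planes that daylight on the topography, is the essence of the probabilistic approach described above.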

  14. The size, distribution, and mobility of landslides caused by the 2015 Mw7.8 Gorkha earthquake, Nepal

    Science.gov (United States)

    Roback, Kevin; Clark, Marin K.; West, A. Joshua; Zekkos, Dimitrios; Li, Gen; Gallen, Sean F.; Chamlagain, Deepak; Godt, Jonathan W.

    2018-01-01

    Coseismic landslides pose immediate and prolonged hazards to mountainous communities, and provide a rare opportunity to study the effect of large earthquakes on erosion and sediment budgets. By mapping landslides using high-resolution satellite imagery, we find that the 25 April 2015 Mw7.8 Gorkha earthquake and aftershock sequence produced at least 25,000 landslides throughout the steep Himalayan Mountains in central Nepal. Despite early reports claiming lower than expected landslide activity, our results show that the total number, area, and volume of landslides associated with the Gorkha event are consistent with expectations, when compared to prior landslide-triggering earthquakes around the world. The extent of landsliding mimics the extent of fault rupture along the east-west trace of the Main Himalayan Thrust and increases eastward following the progression of rupture. In this event, maximum modeled Peak Ground Acceleration (PGA) and the steepest topographic slopes of the High Himalaya are not spatially coincident, so it is not surprising that landslide density correlates neither with PGA nor steepest slopes on their own. Instead, we find that the highest landslide density is located at the confluence of steep slopes, high mean annual precipitation, and proximity to the deepest part of the fault rupture from which 0.5-2 Hz seismic energy originated. We suggest that landslide density was determined by a combination of earthquake source characteristics, slope distributions, and the influence of precipitation on rock strength via weathering and changes in vegetation cover. Determining the relative contribution of each factor will require further modeling and better constrained seismic parameters, both of which are likely to be developed in the coming few years as post-event studies evolve. 
Landslide mobility, in terms of the ratio of runout distance to fall height, is comparable to small volume landslides in other settings, and landslide volume-runout scaling is

  15. Spatial and size distributions of garnets grown in a pseudotachylyte generated during a lower crust earthquake

    Science.gov (United States)

    Clerc, Adriane; Renard, François; Austrheim, Håkon; Jamtveit, Bjørn

    2018-05-01

    In the Bergen Arc, western Norway, rocks exhumed from the lower crust record earthquakes that formed during the Caledonian collision. These earthquakes occurred at about 30-50 km depth under granulite or amphibolite facies metamorphic conditions. Coseismic frictional heating produced pseudotachylytes in this area. We describe pseudotachylytes using field data to infer earthquake magnitude (M ≥ 6.6), low dynamic friction during rupture propagation (μd earthquake arrest. High resolution 3D X-ray microtomography imaging reveals the microstructure of a pseudotachylyte sample, including numerous garnets and their corona of plagioclase that we infer have crystallized in the pseudotachylyte. These garnets 1) have dendritic shapes and are surrounded by plagioclase coronae almost fully depleted in iron, 2) have a log-normal volume distribution, 3) increase in volume with increasing distance away from the pseudotachylyte-host rock boundary, and 4) decrease in number with increasing distance away from the pseudotachylyte-host rock boundary. These characteristics indicate fast mineral growth, likely within seconds. We propose that these new quantitative criteria may assist in the unambiguous identification of pseudotachylytes in the field.

  16. Tidal controls on earthquake size-frequency statistics

    Science.gov (United States)

    Ide, S.; Yabe, S.; Tanaka, Y.

    2016-12-01

    The possibility that tidal stresses can trigger earthquakes is a long-standing issue in seismology. Except in some special cases, a causal relationship between seismicity and the phase of tidal stress has been rejected on the basis of studies using many small events. However, recently discovered deep tectonic tremors are highly sensitive to tidal stress levels, with the relationship being governed by a nonlinear law according to which the tremor rate increases exponentially with increasing stress; thus, slow deformation (and the probability of earthquakes) may be enhanced during periods of large tidal stress. Here, we show the influence of tidal stress on seismicity by calculating histories of tidal shear stress during the 2-week period before earthquakes. Very large earthquakes tend to occur near the time of maximum tidal stress, but this tendency is not obvious for small earthquakes. Rather, we found that tidal stress controls the earthquake size-frequency statistics; i.e., the fraction of large events increases (i.e. the b-value of the Gutenberg-Richter relation decreases) as the tidal shear stress increases. This correlation is apparent in data from the global catalog and in relatively homogeneous regional catalogues of earthquakes in Japan. The relationship is also reasonable, considering the well-known relationship between stress and the b-value. Our findings indicate that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. This finding has clear implications for probabilistic earthquake forecasting.
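The b-value whose tidal modulation is reported above is conventionally estimated by maximum likelihood (Aki's estimator, with Utsu's binning correction). A minimal sketch on a synthetic Gutenberg-Richter catalog, not the authors' catalogs:

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965, with Utsu's correction dm/2
    for magnitudes binned at width dm) above completeness mc."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# synthetic Gutenberg-Richter catalog with b = 1.0 above mc = 2.0
rng = np.random.default_rng(5)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
print(round(b_value(mags, mc=2.0, dm=0.0), 2))   # dm = 0: continuous magnitudes
```

Splitting a catalog by tidal-stress level and comparing the resulting b-values would reproduce the kind of analysis the abstract describes, subject to careful completeness control.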

  17. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran Kumar; Mai, Paul Martin

    2016-01-01

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows the truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determines the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
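A truncated exponential of the kind described above can be fitted by maximum likelihood by solving a one-dimensional score equation. An illustrative sketch on synthetic slip values; the rate and the x_max bound are hypothetical, not SRCMOD values:

```python
import numpy as np
from scipy.optimize import brentq

def fit_truncated_exponential(x, x_max):
    """MLE of the rate lam in f(x) = lam*exp(-lam*x)/(1 - exp(-lam*x_max))
    on [0, x_max], obtained by solving the score equation for lam."""
    xbar = np.mean(x)
    def score(lam):
        return 1.0 / lam - x_max / np.expm1(lam * x_max) - xbar
    return brentq(score, 1e-6, 50.0)   # score is positive at 0+, negative at 50

# synthetic slip values: exponential samples rejected above x_max (truncation)
rng = np.random.default_rng(11)
raw = rng.exponential(scale=2.0, size=50000)   # true rate lam = 0.5
x_max = 5.0
slip = raw[raw <= x_max][:5000]
lam_hat = fit_truncated_exponential(slip, x_max)
print(round(lam_hat, 2))
```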

  20. ON POTENTIAL REPRESENTATIONS OF THE DISTRIBUTION LAW OF RARE STRONGEST EARTHQUAKES

    Directory of Open Access Journals (Sweden)

    M. V. Rodkin

    2014-01-01

    Full Text Available Assessment of long-term seismic hazard critically depends on the behavior of the tail of the distribution function of rare strongest earthquakes. Analyses of empirical data cannot, however, yield a credible solution of this problem because instrumental earthquake catalogs are available only for rather short time intervals, and the uncertainty in estimations of the magnitude of paleoearthquakes is high. From the available data, it was possible only to propose a number of alternative models characterizing the distribution of rare strongest earthquakes. These models are: the model based on the Gutenberg-Richter law, suggested to be valid up to a maximum possible seismic event (Mmax); models with a 'bend-down' of the earthquake recurrence curve; and the characteristic-earthquake model. We discuss these models from general physical concepts supported by the theory of extreme values (with reference to the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD)) and the multiplicative cascade model of the seismic regime. In terms of the multiplicative cascade model, the seismic regime is treated as a large number of episodes of avalanche-type relaxation of metastable states which take place in a set of metastable sub-systems. The model of magnitude-unlimited continuation of the Gutenberg-Richter law is invalid from the physical point of view because it corresponds to an infinite mean value of seismic energy and infinite capacity of the process generating seismicity. A model with an abrupt cut-off of this law at a maximum possible event, Mmax, is not fully logical either. A model with a 'bend-down' of the earthquake recurrence curve can ensure both continuity of the distribution law and finiteness of the seismic energy value. Results of studies with the use of the theory of extreme values provide convincing support for the model of a 'bend-down' of the earthquake recurrence curve. Moreover, they testify also that the

  1. Determining on-fault earthquake magnitude distributions from integer programming

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2018-01-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions
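
    The first step, sampling a synthetic Gutenberg-Richter catalog, can be sketched by inverse-transform sampling; the b-value and minimum magnitude below are illustrative choices, not values from the paper:

    ```python
    import numpy as np

    def sample_gr_magnitudes(n, b=1.0, m_min=5.0, rng=None):
        """Inverse-transform sampling of the Gutenberg-Richter law:
        N(>m) is proportional to 10**(-b * (m - m_min))."""
        rng = rng or np.random.default_rng()
        u = rng.random(n)
        return m_min - np.log10(1.0 - u) / b

    rng = np.random.default_rng(42)
    catalog = sample_gr_magnitudes(100_000, b=1.0, m_min=5.0, rng=rng)

    # Sanity check: recover b with Aki's maximum-likelihood estimator.
    b_hat = np.log10(np.e) / (catalog.mean() - 5.0)
    print(f"recovered b = {b_hat:.3f}")
    ```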

  2. Earthquake hazard evaluation for Switzerland

    International Nuclear Information System (INIS)

    Ruettener, E.

    1995-01-01

    Earthquake hazard analysis is of considerable importance for Switzerland, a country with moderate seismic activity but high economic values at risk. The evaluation of earthquake hazard, i.e. the determination of return periods versus ground motion parameters, requires a description of earthquake occurrences in space and time. In this study the seismic hazard for major cities in Switzerland is determined. The seismic hazard analysis is based on historic earthquake records as well as instrumental data. The historic earthquake data show considerable uncertainties concerning epicenter location and epicentral intensity. A specific concept is required, therefore, which permits the description of the uncertainties of each individual earthquake. This is achieved by probability distributions for earthquake size and location. Historical considerations, which indicate changes in public earthquake awareness at various times (mainly due to large historical earthquakes), as well as statistical tests have been used to identify time periods of complete earthquake reporting as a function of intensity. As a result, the catalog is judged to be complete since 1878 for all earthquakes with epicentral intensities greater than IV, since 1750 for intensities greater than VI, since 1600 for intensities greater than VIII, and since 1300 for intensities greater than IX. Instrumental data provide accurate information about the depth distribution of earthquakes in Switzerland. In the Alps, focal depths are restricted to the uppermost 15 km of the crust, whereas below the northern Alpine foreland earthquakes are distributed throughout the entire crust (30 km). This depth distribution is considered in the final hazard analysis by probability distributions. (author)

  3. Non-extensivity and frequency-magnitude distribution of earthquakes

    International Nuclear Information System (INIS)

    Sotolongo-Costa, Oscar; Posadas, Antonio

    2003-01-01

    Starting from first principles (in this case a non-extensive formulation of the maximum entropy principle) and a phenomenological approach, an explicit formula for the magnitude distribution of earthquakes is derived, which describes earthquakes in the whole range of magnitudes. The Gutenberg-Richter law appears as a particular case of the obtained formula. Comparison with geophysical data gives a very good agreement

  4. Geophysical Anomalies and Earthquake Prediction

    Science.gov (United States)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require

  5. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
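
    The core construction, a random field with a prescribed power spectral density, can be sketched in one dimension by filtering white noise in the Fourier domain. The decay exponent and grid size below are illustrative simplifications; the report's actual spectral formula is not reproduced here:

    ```python
    import numpy as np

    def random_stress(n, decay=1.0, seed=0):
        """Generate a 1-D random stress profile whose power spectral density
        falls off as k**(-2*decay), by assigning power-law amplitudes and
        random phases in Fourier space."""
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n)
        amp = np.zeros_like(k)
        amp[1:] = k[1:] ** (-decay)          # zero the mean (k = 0) component
        phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
        spectrum = amp * np.exp(1j * phase)
        field = np.fft.irfft(spectrum, n=n)
        return field / field.std()           # normalize to unit variance

    stress = random_stress(4096, decay=1.0)
    print(f"mean = {stress.mean():.2e}, std = {stress.std():.2f}")
    ```

    The same idea extends to two dimensions by shaping a 2-D wavenumber grid before an inverse FFT.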

  6. Spatial distribution of earthquake hypocenters in the Crimea—Black Sea region

    Science.gov (United States)

    Burmin, V. Yu; Shumlianska, L. O.

    2018-03-01

    Some aspects of the seismicity of the Crimea—Black Sea region are considered on the basis of the catalogued data on earthquakes that occurred between 1970 and 2012. The complete list of the Crimean earthquakes for this period contains about 2140 events with magnitudes ranging from -1.5 to 5.5. Bulletins contain information about compressional and shear wave arrival times for nearly 2000 earthquakes. A new approach to determining the coordinates of all of the events was applied to re-establish the hypocenters of the catalogued earthquakes. The obtained results indicate that the bulk of the earthquake foci in the region are located in the crust. However, some 2.5% of the foci are located at depths ranging from 50 to 250 km. The new distribution of foci shows their concentration in two inclined branches, the center of which is located under the Yalta-Alushta seismic focal zone. The overall depth distribution of foci corresponds to the relief of the lithosphere.

  7. Temporal distribution of earthquakes using renewal process in the Dasht-e-Bayaz region

    Science.gov (United States)

    Mousavi, Mehdi; Salehi, Masoud

    2018-01-01

    The temporal distribution of earthquakes with Mw > 6 in the Dasht-e-Bayaz region, eastern Iran, has been investigated using time-dependent models. These models assume that the times between consecutive large earthquakes follow a certain statistical distribution. For this purpose, four time-dependent inter-event distributions, the Weibull, Gamma, Lognormal, and Brownian Passage Time (BPT) distributions, are used in this study, and the associated parameters are estimated by maximum likelihood. The most suitable distribution is selected based on the log-likelihood function and the Bayesian Information Criterion. The probability of occurrence of the next large earthquake during a specified interval of time was calculated for each model. Then, the concept of conditional probability was applied to forecast the next major (Mw > 6) earthquake at the site of interest. The emphasis is on statistical methods which attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. According to the obtained results, the probability of occurrence of an earthquake with Mw > 6 in the near future is significantly high.
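
    The model-selection step can be sketched with scipy. The inter-event times below are synthetic stand-ins for the Dasht-e-Bayaz catalogue, and the BPT model is omitted for brevity (scipy's invgauss covers it under a different parameterization):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic inter-event times (years).
    times = stats.weibull_min.rvs(c=1.5, scale=30.0, size=40, random_state=rng)

    candidates = {"Weibull": stats.weibull_min,
                  "Gamma": stats.gamma,
                  "Lognormal": stats.lognorm}

    bic = {}
    for name, dist in candidates.items():
        params = dist.fit(times, floc=0.0)       # location fixed at zero
        loglik = dist.logpdf(times, *params).sum()
        k = len(params) - 1                       # free parameters (loc fixed)
        bic[name] = k * np.log(len(times)) - 2.0 * loglik

    best = min(bic, key=bic.get)
    print(best, {name: round(v, 1) for name, v in bic.items()})
    ```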

  8. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    Science.gov (United States)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile, and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is consistent with the well-known dependence of the b-value on stress, and suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
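
    The b-value comparison underlying this result can be sketched with the standard Aki/Utsu maximum-likelihood estimator; the two synthetic populations below merely stand in for low- and high-tidal-stress subcatalogues:

    ```python
    import numpy as np

    def b_value(mags, m_c, dm=0.1):
        """Aki/Utsu maximum-likelihood b-value, with the half-bin correction
        for magnitudes reported in bins of width dm."""
        m = np.asarray(mags)
        m = m[m >= m_c]
        b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
        return b, b / np.sqrt(m.size)        # estimate and rough std. error

    rng = np.random.default_rng(3)

    def synth(b_true, n=5000):
        """Exponential magnitudes above completeness, rounded to 0.1 bins."""
        return np.round(4.95 + rng.exponential(np.log10(np.e) / b_true, n), 1)

    b_lo, _ = b_value(synth(0.8), m_c=5.0)   # low b: relatively more large events
    b_hi, _ = b_value(synth(1.2), m_c=5.0)   # high b: relatively fewer
    print(f"b_low = {b_lo:.2f}, b_high = {b_hi:.2f}")
    ```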

  9. Statistical distributions of earthquakes and related non-linear features in seismic waves

    International Nuclear Information System (INIS)

    Apostol, B.-F.

    2006-01-01

    A few basic facts in the science of earthquakes are briefly reviewed. An accumulation, or growth, model is put forward for the focal mechanisms and the critical focal zone of the earthquakes, which relates the earthquake average recurrence time to the released seismic energy. The temporal statistical distribution for average recurrence time is introduced for earthquakes, and, on this basis, the Omori-type distribution in energy is derived, as well as the distribution in magnitude, by making use of the semi-empirical Gutenberg-Richter law relating seismic energy to earthquake magnitude. On geometric grounds, the accumulation model suggests the value r = 1/3 for the Omori parameter in the power-law of energy distribution, which leads to β = 1.17 for the coefficient in the Gutenberg-Richter recurrence law, in fair agreement with the statistical analysis of the empirical data. Making use of this value, the empirical Bath's law is discussed for the average magnitude of the aftershocks (which is 1.2 less than the magnitude of the main seismic shock), by assuming that the aftershocks are relaxation events of the seismic zone. The time distribution of the earthquakes with a fixed average recurrence time is also derived, the earthquake occurrence prediction is discussed by means of the average recurrence time and the seismicity rate, and application of this discussion to the seismic region Vrancea, Romania, is outlined. Finally, a special effect of non-linear behaviour of the seismic waves is discussed, by describing an exact solution derived recently for the elastic wave equation with cubic anharmonicities, its relevance, and its connection to the approximate quasi-plane waves picture. The properties of the seismic activity accompanying a main seismic shock, both like foreshocks and aftershocks, are relegated to forthcoming publications. (author)
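
    The link between the magnitude and energy distributions used here can be checked numerically: a Gutenberg-Richter law in magnitude, combined with the classical energy-magnitude relation log10 E = 11.8 + 1.5 M (E in erg), implies a power law in energy with exponent b/1.5. A sketch with b = 1 assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    b = 1.0
    mags = 5.0 - np.log10(1.0 - rng.random(200_000)) / b   # G-R magnitudes

    # Classical Gutenberg-Richter energy relation (E in erg).
    energy = 10.0 ** (11.8 + 1.5 * mags)

    # Hill (maximum-likelihood) estimate of the energy power-law exponent.
    alpha = 1.0 / np.log(energy / energy.min()).mean()
    print(f"energy exponent = {alpha:.3f}, expected {b / 1.5:.3f}")
    ```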

  10. Controls of earthquake faulting style on near field landslide triggering : the role of coseismic slip

    OpenAIRE

    Tatard, Lucile; Grasso, J. R.

    2013-01-01

    We compare the spatial distributions of seven databases of landslides triggered by M-w=5.6-7.9 earthquakes, using distances normalized by the earthquake fault length. We show that the normalized landslide distance distributions collapse, i.e., the normalized distance distributions overlap whatever the size of the earthquake, separately for the events associated with dip-slip, buried-faulting earthquakes, and surface-faulting earthquakes. The dip-slip earthquakes triggered landslides at larger...

  11. Self-organization of spatio-temporal earthquake clusters

    Directory of Open Access Journals (Sweden)

    S. Hainzl

    2000-01-01

    Full Text Available Cellular automaton versions of the Burridge-Knopoff model have been shown to reproduce the power law distribution of event sizes; that is, the Gutenberg-Richter law. However, they have failed to reproduce the occurrence of foreshock and aftershock sequences correlated with large earthquakes. We show that in the case of partial stress recovery due to transient creep occurring subsequently to earthquakes in the crust, such spring-block systems self-organize into a statistically stationary state characterized by a power law distribution of fracture sizes as well as by foreshocks and aftershocks accompanying large events. In particular, the increase of foreshock and the decrease of aftershock activity can be described by, aside from a prefactor, the same Omori law. The exponent of the Omori law depends on the relaxation time and on the spatial scale of transient creep. Further investigations concerning the number of aftershocks, the temporal variation of aftershock magnitudes, and the waiting time distribution support the conclusion that this model, even if "more realistic" physics is missing, captures in some ways the origin of the size distribution as well as the spatio-temporal clustering of earthquakes.
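
    A minimal spring-block cellular automaton of the kind discussed here is the Olami-Feder-Christensen model; the sketch below omits the transient-creep stress recovery that is the paper's actual contribution:

    ```python
    import numpy as np

    def ofc_event(stress, alpha=0.2, threshold=1.0):
        """One loading/avalanche cycle of the Olami-Feder-Christensen
        spring-block cellular automaton. Each toppling cell passes a fraction
        alpha of its stress to each of its 4 neighbours (open boundaries),
        so 1 - 4*alpha is dissipated."""
        stress = stress + (threshold - stress.max())   # load weakest cell to failure
        size = 0
        while True:
            over = stress >= threshold
            if not over.any():
                return stress, size
            size += int(over.sum())
            give = alpha * np.where(over, stress, 0.0)
            stress = np.where(over, 0.0, stress)
            for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
                t = np.roll(give, shift, axis=axis)
                # zero the wrapped-around edge to keep boundaries open
                if axis == 0:
                    t[0 if shift == 1 else -1, :] = 0.0
                else:
                    t[:, 0 if shift == 1 else -1] = 0.0
                stress = stress + t

    rng = np.random.default_rng(0)
    grid = rng.uniform(0.0, 1.0, (32, 32))
    sizes = []
    for _ in range(2000):
        grid, s = ofc_event(grid)
        sizes.append(s)
    print("events:", len(sizes), "largest avalanche:", max(sizes))
    ```

    A histogram of `sizes` after the transient approaches the power-law (Gutenberg-Richter-like) event-size distribution the abstract refers to.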

  12. New geological perspectives on earthquake recurrence models

    International Nuclear Information System (INIS)

    Schwartz, D.P.

    1997-01-01

    In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured once, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine--the cycle of stress build-up and release

  13. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
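
    Joint two-parameter estimation for the tapered Pareto can be sketched as follows. The data are sampled exactly from a tapered Pareto using the fact that its survival function is a product of a Pareto and a shifted-exponential survival; all parameter values are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def tapered_pareto_nll(params, x, x_t):
        """Negative log-likelihood for the tapered Pareto:
        survival S(x) = (x_t / x)**beta * exp((x_t - x) / x_c), x >= x_t,
        hence pdf(x) = (beta / x + 1 / x_c) * S(x)."""
        beta, x_c = params
        if beta <= 0.0 or x_c <= 0.0:
            return np.inf
        logpdf = (np.log(beta / x + 1.0 / x_c)
                  + beta * np.log(x_t / x) + (x_t - x) / x_c)
        return -logpdf.sum()

    rng = np.random.default_rng(5)
    x_t, beta_true, xc_true = 1.0, 0.6, 50.0
    n = 20_000
    # Exact sampling: the tapered survival is a product of two survivals,
    # so X = min(pure Pareto, x_t + exponential(x_c)).
    pareto = x_t * (1.0 - rng.random(n)) ** (-1.0 / beta_true)
    data = np.minimum(pareto, x_t + rng.exponential(xc_true, n))

    fit = minimize(tapered_pareto_nll, x0=(1.0, 10.0), args=(data, x_t),
                   method="Nelder-Mead")
    beta_hat, xc_hat = fit.x
    print(f"beta = {beta_hat:.2f} (true 0.6), corner = {xc_hat:.1f} (true 50)")
    ```

    With short catalogs (small n) the corner estimate becomes unstable, which is the instability the abstract describes.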

  14. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    Science.gov (United States)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which intervals between consecutive events are independently and identically distributed, are frequently used to describe the repeating earthquake mechanism and to forecast the next earthquakes. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have few, or only one, observed earthquakes, which often have poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield a high probability for the next event in a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend on average on the long-term slip rate caused by tectonic motion. In addition, recurrence times also fluctuate due to nearby earthquakes or fault activities which encourage or discourage surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, this paper introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it using spatial statistics. Spatial variations of the mean and variance parameters of recurrence times are estimated in a Bayesian framework, and the next earthquakes are forecasted by Bayesian predictive distributions. The proposed model is applied to the recurrent earthquake catalog in Japan and its results are compared with the current forecast adopted by the Earthquake Research Committee of Japan.
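
    The full Bayesian predictive step is beyond a short sketch, but the underlying renewal-model conditional forecast can be illustrated for the BPT (inverse Gaussian) case; the fault parameters below are hypothetical:

    ```python
    from scipy import stats

    def bpt_conditional_prob(mu, alpha, elapsed, window):
        """P(next event within `window` years | `elapsed` years since the last),
        for a Brownian Passage Time renewal model with mean recurrence `mu`
        and aperiodicity `alpha`. BPT is the inverse Gaussian distribution;
        in scipy's parameterization: invgauss(mu=alpha**2, scale=mu/alpha**2)."""
        dist = stats.invgauss(alpha ** 2, scale=mu / alpha ** 2)
        return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

    # Hypothetical segment: 200-yr mean recurrence, aperiodicity 0.5.
    p30 = bpt_conditional_prob(200.0, 0.5, elapsed=150.0, window=30.0)
    p60 = bpt_conditional_prob(200.0, 0.5, elapsed=150.0, window=60.0)
    print(f"P(30 yr) = {p30:.1%}, P(60 yr) = {p60:.1%}")
    ```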

  15. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into non-scaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.

  16. ANALYSIS OF REGULARITIES IN DISTRIBUTION OF EARTHQUAKES BY FOCAL DISPLACEMENT IN THE KURIL-OKHOTSK REGION BEFORE THE CATASTROPHIC SIMUSHIR EARTHQUAKE OF 15 NOVEMBER 2006

    Directory of Open Access Journals (Sweden)

    Timofei K. Zlobin

    2012-01-01

    Full Text Available The catastrophic Simushir earthquake occurred on 15 November 2006 in the Kuril-Okhotsk region in the Middle Kuril Islands, a transition zone between the Eurasian continent and the Pacific Ocean. It was followed by numerous strong earthquakes. It is established that the catastrophic earthquake was prepared on a site characterized by increased relative effective pressures, located at the border of the low-pressure area (Figure 1). Based on data from GlobalCMT (Harvard), earthquake focal mechanisms were reconstructed, and tectonic stresses, the seismotectonic setting, and the earthquake distribution pattern were studied for analysis of the field of stresses in the region before the Simushir earthquake (Figures 2 and 3; Table 1). Five areas of various types of movement were determined. Three of them are stretched along the Kuril Islands. It is established that seismodislocations in earthquake focal areas are regularly distributed. In each of the determined areas, displacements of a specific type (shear or reverse shear) are concentrated and give evidence of the alternation of zones characterized by horizontal stretching and compression. The presence of the horizontal stretching and compression zones can be explained by a model of subduction (Figure 4). Detailed studies of the state of stresses of the Kuril region confirm such zones (Figure 5). The established specific features of tectonic stresses before the catastrophic Simushir earthquake of 15 November 2006 contribute to studies of earthquake forecasting problems. The state of stresses and the geodynamic conditions suggesting occurrence of new earthquakes can be assessed from the data on the distribution of horizontal compression, stretching, and shear areas of the Earth's crust and the upper mantle in the Kuril region.

  17. Rupture distribution of the 1977 western Argentina earthquake

    Science.gov (United States)

    Langer, C.J.; Hartzell, S.

    1996-01-01

    Teleseismic P and SH body waves are used in a finite-fault, waveform inversion for the rupture history of the 23 November 1977 western Argentina earthquake. This double event consists of a smaller foreshock (M0 = 5.3 × 10^26 dyn-cm) followed about 20 s later by a larger main shock (M0 = 1.5 × 10^27 dyn-cm). Our analysis indicates that these two events occurred on different fault segments: with the foreshock having a strike, dip, and average rake of 345°, 45°E, and 50°, and the main shock 10°, 45°E, and 80°, respectively. The foreshock initiated at a depth of 17 km and propagated updip and to the north. The main shock initiated at the southern end of the foreshock zone at a depth of 25 to 30 km, and propagated updip and unilaterally to the south. The north-south separation of the centroids of the moment release for the foreshock and main shock is about 60 km. The apparent triggering of the main shock by the foreshock is similar to other earthquakes that have involved the failure of multiple fault segments, such as the 1992 Landers, California, earthquake. Such occurrences argue against the use of individual, mapped, surface fault or fault-segment lengths in the determination of the size and frequency of future earthquakes.

  18. Grain size distributions and their effects on auto-acoustic compaction

    Science.gov (United States)

    Taylor, S.; Brodsky, E. E.

    2013-12-01

    dependent on the largest grain sizes present in the mixture. Establishing governing rules for how mixtures of grain sizes interact will aid our understanding of how the different fault gouge configurations and size distributions observed in natural systems affect shear behavior and earthquake stability on faults.

  19. Critical behavior in earthquake energy dissipation

    Science.gov (United States)

    Wanliss, James; Muñoz, Víctor; Pastén, Denisse; Toledo, Benjamín; Valdivia, Juan Alejandro

    2017-09-01

    We explore bursty multiscale energy dissipation from earthquakes flanked by latitudes 29° S and 35.5° S, and longitudes 69.501° W and 73.944° W (in the Chilean central zone). Our work compares the predictions of a theory of nonequilibrium phase transitions with nonstandard statistical signatures of earthquake complex scaling behaviors. For temporal scales less than 84 hours, time development of earthquake radiated energy activity follows an algebraic arrangement consistent with estimates from the theory of nonequilibrium phase transitions. There are no characteristic scales for probability distributions of sizes and lifetimes of the activity bursts in the scaling region. The power-law exponents describing the probability distributions suggest that the main energy dissipation takes place due to largest bursts of activity, such as major earthquakes, as opposed to smaller activations which contribute less significantly though they have greater relative occurrence. The results obtained provide statistical evidence that earthquake energy dissipation mechanisms are essentially "scale-free", displaying statistical and dynamical self-similarity. Our results provide some evidence that earthquake radiated energy and directed percolation belong to a similar universality class.

  20. Temporal Changes in Stress Drop, Frictional Strength, and Earthquake Size Distribution in the 2011 Yamagata-Fukushima, NE Japan, Earthquake Swarm, Caused by Fluid Migration

    Science.gov (United States)

    Yoshida, Keisuke; Saito, Tatsuhiko; Urata, Yumi; Asano, Youichi; Hasegawa, Akira

    2017-12-01

    In this study, we investigated temporal variations in stress drop and b-value in the earthquake swarm that occurred at the Yamagata-Fukushima border, NE Japan, after the 2011 Tohoku-Oki earthquake. In this swarm, frictional strengths were estimated to have changed with time due to fluid diffusion. We first estimated the source spectra for 1,800 earthquakes with 2.0 ≤ MJMA < 3.0, by correcting for site-amplification and attenuation effects determined using both S waves and coda waves. We then determined corner frequency assuming the omega-square model and estimated stress drop for 1,693 earthquakes. We found that the estimated stress drops tended to have values of 1-4 MPa and that stress drops significantly changed with time. In particular, the estimated stress drops were very small at the beginning, and increased with time for 50 days. Similar temporal changes were obtained for the b-value; the b-value was very high (b ≈ 2) at the beginning, and decreased with time, becoming approximately constant (b ≈ 1) after 50 days. Patterns of temporal changes in stress drop and b-value were similar to those for frictional strength and earthquake occurrence rate, suggesting that the change in frictional strength due to migrating fluid not only triggered the swarm activity but also affected earthquake and seismicity characteristics. The estimated high Q^-1 value, as well as the hypocenter migration, supports the presence of fluid and its role in the generation and physical characteristics of the swarm.
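
    The corner-frequency-to-stress-drop step can be sketched for the Brune omega-square source model; the shear-wave speed, corner frequency, and moment-magnitude relation below are standard textbook values, not the paper's calibration:

    ```python
    import numpy as np

    def brune_stress_drop(m0, fc, vs=3500.0):
        """Stress drop (Pa) from seismic moment M0 (N*m) and corner frequency
        fc (Hz) under the Brune model: source radius r = 2.34*vs/(2*pi*fc),
        delta_sigma = 7*M0/(16*r**3)."""
        r = 2.34 * vs / (2.0 * np.pi * fc)
        return 7.0 * m0 / (16.0 * r ** 3)

    # Illustrative Mw ~ 2.5 swarm event: M0 = 10**(1.5*Mw + 9.1) N*m, fc ~ 8 Hz.
    m0 = 10.0 ** (1.5 * 2.5 + 9.1)
    ds = brune_stress_drop(m0, 8.0)
    print(f"stress drop = {ds / 1e6:.2f} MPa")
    ```

    The result falls in the ~1 MPa range, consistent with the 1-4 MPa values the abstract reports for events of this size.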

  1. Fractal analysis of the spatial distribution of earthquakes along the Hellenic Subduction Zone

    Science.gov (United States)

    Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter

    2014-05-01

    The Hellenic Subduction Zone (HSZ) is the most seismically active region in Europe. Many destructive earthquakes have taken place along the HSZ in the past. The evolution of such active regions is expressed through seismicity and is characterized by complex phenomenology. Understanding the tectonic evolution process and the physical state of subducting regimes is crucial in earthquake prediction. In recent years, there has been growing interest in an approach to seismicity based on the science of complex systems (Papadakis et al., 2013; Vallianatos et al., 2012). In this study we calculate the fractal dimension of the spatial distribution of earthquakes along the HSZ and aim to understand the significance of the obtained values for the tectonic and geodynamic evolution of this area. We use the external seismic sources provided by Papaioannou and Papazachos (2000) to create a dataset for the subduction zone. Following these authors, we define five seismic zones. We then construct an earthquake dataset based on the updated and extended earthquake catalogue for Greece and the adjacent areas by Makropoulos et al. (2012), covering the period 1976-2009. The fractal dimension of the spatial distribution of earthquakes is calculated for each seismic zone and for the HSZ as a unified system using the box-counting method (Turcotte, 1997; Robertson et al., 1995; Caneva and Smirnov, 2004). Moreover, the variation of the fractal dimension is demonstrated in different time windows. These spatiotemporal variations could be used as an additional index to inform us about the physical state of each seismic zone. The use of the fractal dimension as a precursor in earthquake forecasting appears to be a promising direction for future work. Acknowledgements Giorgos Papadakis wishes to acknowledge the Greek State Scholarships Foundation (IKY). References Caneva, A., Smirnov, V., 2004. Using the fractal dimension of earthquake distributions and the
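
    The box-counting estimate of the fractal dimension can be sketched as follows; the two synthetic point sets (a filled plane, D ≈ 2, and a line, D ≈ 1) are sanity checks, not HSZ data:

    ```python
    import numpy as np

    def box_counting_dimension(points, box_sizes):
        """Estimate the fractal (box-counting) dimension of a 2-D point set:
        count occupied boxes N(s) at each box size s, then fit
        log N(s) = -D log s + c."""
        points = np.asarray(points)
        counts = []
        for s in box_sizes:
            boxes = np.floor(points / s)
            counts.append(len(np.unique(boxes, axis=0)))
        slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
        return -slope

    rng = np.random.default_rng(2)
    plane = rng.random((20_000, 2))              # space-filling: D close to 2
    t = rng.random(20_000)
    line = np.column_stack([t, 0.4 * t])         # points on a line: D close to 1

    box_sizes = np.array([0.1, 0.05, 0.025, 0.0125])
    d_plane = box_counting_dimension(plane, box_sizes)
    d_line = box_counting_dimension(line, box_sizes)
    print(f"D(plane) = {d_plane:.2f}, D(line) = {d_line:.2f}")
    ```

    Applied to epicenter coordinates zone by zone, the same routine yields the spatial fractal dimensions discussed in the abstract.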

  2. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com [Karadeniz Technical University, Trabzon (Turkey); Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr [Ağrı İbrahim Çeçen University, Ağrı (Turkey)

    2016-04-18

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 and magnitudes M ≥ 6.0, and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41°N, 30°-40°E) using three distributions: the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probabilities and the conditional probabilities of earthquake occurrence for different elapsed times using the three distributions. We used EasyFit and MATLAB software to calculate the distribution parameters and to plot the conditional probability curves. We concluded that the Weibull distribution was the most suitable of the three for this region.
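
    The fit-and-test workflow described above can be sketched with scipy.stats; the sample here is synthetic and the parameter values are assumptions for illustration, not results from the catalogue.

```python
import numpy as np
from scipy import stats

# Illustrative: inter-event times (years) drawn from a Weibull law,
# then refitted and checked with the K-S test, mirroring the
# model-selection workflow of the abstract.
rng = np.random.default_rng(42)
times = stats.weibull_min.rvs(c=1.5, scale=20.0, size=200, random_state=rng)

# Fit candidate distributions. Fixing the location at 0 gives the
# two-parameter Weibull; leaving it free gives the three-parameter form.
w2 = stats.weibull_min.fit(times, floc=0)   # two-parameter Weibull (c, loc, scale)
w3 = stats.weibull_min.fit(times)           # three-parameter Weibull
fr = stats.invweibull.fit(times, floc=0)    # Frechet (inverse Weibull)

# Kolmogorov-Smirnov goodness of fit: a larger p-value means the
# fitted distribution is harder to reject.
p_w2 = stats.kstest(times, 'weibull_min', args=w2).pvalue
p_fr = stats.kstest(times, 'invweibull', args=fr).pvalue
```

In scipy, `weibull_min` is the Weibull distribution and `invweibull` is the Frechet; comparing the K-S p-values of the candidates is one simple way to rank them, as the abstract does.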

  3. Application of the extreme value approaches to the apparent magnitude distribution of the earthquakes

    Science.gov (United States)

    Tinti, S.; Mulargia, F.

    1985-03-01

    The apparent magnitude of an earthquake y is defined as the observed magnitude value and differs from the true magnitude m because of the experimental noise n. If f(m) is the density distribution of the magnitude m, and g(n) is the density distribution of the error n, then the density distribution of y is computed simply by convolving f and g, i.e. h(y)=f*g. If the distinction between y and m is not recognized, any statistical analysis based on the frequency-magnitude relation of earthquakes is bound to produce questionable results. In this paper we investigate the impact of the apparent-magnitude idea on the statistical methods that study the earthquake distribution by taking into account only the largest (or extremal) earthquakes. We use two approaches: the Gumbel method based on Gumbel theory (Gumbel, 1958), and the Poisson method introduced by Epstein and Lomnitz (1966). Both methods are concerned with the asymptotic properties of the magnitude distributions. Therefore, we study and compare the asymptotic behaviour of the distributions h(y) and f(m) under suitable hypotheses on the nature of the experimental noise. We investigate in detail two distinct cases: first, two-sided limited symmetrical noise, i.e. noise that is bound to assume values inside a limited region, and second, normal noise, i.e. noise that is distributed according to a symmetric normal distribution. We further show that disregarding the noise generally leads to biased results and that, in the framework of the apparent magnitude, the Poisson approach preserves its usefulness, while the Gumbel method gives rise to a curious paradox.
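
    The convolution h(y) = (f*g)(y) can be checked numerically. For a Gutenberg-Richter (exponential) density with normal noise, a classical result is that the apparent exceedance rate above a threshold is inflated by exp(beta^2 sigma^2 / 2) while the slope is unchanged; the sketch below, with illustrative parameter values, reproduces this.

```python
import numpy as np

# Gutenberg-Richter (exponential) magnitude density f blurred by
# symmetric normal observational noise g, giving the apparent density h.
beta = np.log(10)          # corresponds to b = 1 in the G-R law
sigma = 0.2                # standard deviation of the magnitude noise
m = np.arange(0.0, 10.0, 0.001)

f = beta * np.exp(-beta * m)                  # true magnitude density (m >= 0)
kernel_x = np.arange(-1.0, 1.0, 0.001)
g = np.exp(-kernel_x**2 / (2 * sigma**2))
g /= g.sum()                                  # discrete normal noise density
h = np.convolve(f, g, mode='same')            # apparent-magnitude density

# Compare apparent vs true exceedance rates well inside the support.
i = np.searchsorted(m, 4.0)
inflation = h[i:].sum() / f[i:].sum()
expected = np.exp(0.5 * (beta * sigma)**2)    # about 1.112 for these values
```

The apparent catalogue therefore contains systematically more "large" events than the true one, which is exactly the bias the paper warns about when y and m are conflated.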

  4. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    Science.gov (United States)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite-fault inversion for megathrust earthquakes which rapidly generates good first-order estimates of spatial slip distributions and their uncertainties. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it arrives quickly, has small amplitude and a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple-time-window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and the regularization parameters are chosen according to the discrepancy principle by grid search. Noise in the data is addressed by estimating the data covariance matrix from data residuals: starting from an a priori covariance matrix, the matrix is iteratively updated from the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
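
    A toy version of the regularization scheme described above (Tikhonov damping with the weight chosen by grid search under the discrepancy principle) can be sketched as follows; the random linear system, noise level, and safety factor are illustrative assumptions, not the authors' actual W-phase Green's functions.

```python
import numpy as np

# Toy linear inversion d = G m + noise, standing in for a slip inversion.
rng = np.random.default_rng(1)
n_data, n_par = 60, 20
G = rng.normal(size=(n_data, n_par))
m_true = np.zeros(n_par)
m_true[5:10] = 1.0                    # compact "slip patch"
sigma = 0.1
d = G @ m_true + rng.normal(scale=sigma, size=n_data)

# Discrepancy principle: accept the largest damping whose residual norm
# stays at or below the expected noise level (safety factor 1.1).
target = 1.1 * sigma * np.sqrt(n_data)

def tikhonov(alpha):
    # Damped least squares: (G^T G + alpha I) m = G^T d
    A = G.T @ G + alpha * np.eye(n_par)
    return np.linalg.solve(A, G.T @ d)

best_alpha, m_best = None, None
for alpha in np.logspace(-4, 2, 61):   # grid search over damping weights
    m_est = tikhonov(alpha)
    if np.linalg.norm(d - G @ m_est) <= target:
        best_alpha, m_best = alpha, m_est
```

Because the residual norm grows monotonically with the damping weight, the loop keeps the largest weight still compatible with the noise, which is the discrepancy-principle choice.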

  5. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high-energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.

  6. Predicting Posttraumatic Stress Symptom Prevalence and Local Distribution after an Earthquake with Scarce Data.

    Science.gov (United States)

    Dussaillant, Francisca; Apablaza, Mauricio

    2017-08-01

    After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict which populations are most in need. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake, and to test whether there are algorithms that require few input data yet remain reasonably predictive. A rich database of PTS symptoms, collected after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R2 for the most liberal specifications (in terms of the number of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When including only peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form, predictive capacity was still good (adjusted R2 from 0.59 to 0.67). Information about local poverty, household damage, and PGA can be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake. This can help improve the assignment of mental health personnel to the affected localities. Dussaillant F , Apablaza M . Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.
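
    The specification-comparison step (regress an outcome on a few covariates in linear and quadratic form, then judge the fit by adjusted R-squared) can be sketched on synthetic data; the variable names and coefficients below are assumptions for illustration, not values from the Chilean survey.

```python
import numpy as np

# Synthetic stand-in for the survey: a PTS-symptom score driven by
# PGA, poverty rate, and household damage (with a quadratic damage term).
rng = np.random.default_rng(7)
n = 300
pga, poverty, damage = rng.random((3, n))
score = (2.0 * pga + 1.5 * poverty + damage + 0.8 * damage**2
         + rng.normal(scale=0.3, size=n))     # assumed ground truth

# Linear-plus-quadratic specification, fit by ordinary least squares.
X = np.column_stack([np.ones(n), pga, poverty, damage,
                     pga**2, poverty**2, damage**2])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Adjusted R-squared, the comparison statistic used in the abstract.
resid = score - X @ beta
ss_res = resid @ resid
ss_tot = ((score - score.mean())**2).sum()
k = X.shape[1] - 1                             # number of predictors
adj_r2 = 1 - (ss_res / (n - k - 1)) / (ss_tot / (n - 1))
```

The same scaffolding extends to quantile regression (e.g. statsmodels `QuantReg`) for the centile models the study also estimates.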

  7. Exploring Unintended Social Side Effects of Tent Distribution Practices in Post-Earthquake Haiti

    Directory of Open Access Journals (Sweden)

    Carmen Helen Logie

    2013-09-01

    The January 2010 earthquake devastated Haiti’s social, economic and health infrastructure, leaving 2 million persons—one-fifth of Haiti’s population—homeless. Internally displaced persons relocated to camps, where human rights remain compromised due to increased poverty, reduced security, and limited access to sanitation and clean water. This article draws on findings from 3 focus groups conducted with internally displaced young women and 3 focus groups with internally displaced young men (aged 18–24) in Leogane, Haiti, to explore post-earthquake tent distribution practices. Focus group findings highlighted that community members were not engaged in developing tent distribution strategies. Practices that distributed tents to both children and parents, and that linked food and tent distribution, inadvertently contributed to “chaos”, vulnerability to violence, and family network breakdown. Moving forward, we recommend that tent distribution strategies in disaster contexts engage community members, separate food and tent distribution, and support agency and strategies of self-protection among displaced persons.

  8. How fault geometry controls earthquake magnitude

    Science.gov (United States)

    Bletery, Q.; Thomas, A.; Karlstrom, L.; Rempel, A. W.; Sladen, A.; De Barros, L.

    2016-12-01

    Recent large megathrust earthquakes, such as the Mw 9.3 Sumatra-Andaman earthquake in 2004 and the Mw 9.0 Tohoku-Oki earthquake in 2011, astonished the scientific community. The first event occurred in a relatively low-convergence-rate subduction zone where events of its size were unexpected. The second event involved 60 m of shallow slip in a region thought to be aseismically creeping and hence incapable of hosting very large magnitude earthquakes. These earthquakes highlight gaps in our understanding of mega-earthquake rupture processes and the factors controlling their global distribution. Here we show that gradients in dip angle exert a primary control on mega-earthquake occurrence. We calculate the curvature along the major subduction zones of the world and show that past mega-earthquakes occurred on flat (low-curvature) interfaces. A simplified analytic model demonstrates that shear strength heterogeneity increases with curvature. Stress loading on flat megathrusts is more homogeneous and hence more likely to be released simultaneously over large areas than on highly curved faults. Therefore, the absence of asperities on large faults might counter-intuitively be a source of higher hazard.

  9. Mega-city and great earthquake distributions: the search of basic links.

    Science.gov (United States)

    Levin, Boris; Sasorova, Elena; Domanski, Andrej

    2013-04-01

    The ever-increasing population density in large metropolitan cities near major active faults (e.g. Tokyo, Lisbon, San Francisco) and recent catastrophic earthquakes in Japan, Indonesia and Haiti (with more than 500,000 lives lost) highlight the need to search for causal relationships between the distributions of earthquake epicenters and mega-cities on the Earth [1]. The latitudinal distribution of mega-cities, calculated using an Internet database, reveals a curious peculiarity: the density of large-city numbers, related to 10-degree latitude intervals, shows two maxima in the middle latitudes (±30-40°) on both sides of the equator. These maxima are separated by a clear local minimum near the equator, and mega-cities are practically absent at high latitudes. In the last two decades it has been shown [2, 3, 4] that the seismic activity of the Earth is described by a similar bimodal latitudinal distribution. The similarity between the bimodal distributions of geophysical phenomena and mega-city locations attracts attention. The peak values in both distributions (near ±35°) correspond to the location of the well-known "critical latitudes" of the planet. These latitudes were defined [5] as the lines of intersection of a sphere and a spheroid of equal volume (±35°15'52″). An increase in the angular velocity of a celestial body's rotation leads to growth of the oblateness of the planet, and vice versa, the oblateness decreases with decreasing rotation velocity. Thus, the well-known instability of the Earth's rotation leads to small pulsations of the geoid. At the critical latitudes, the geoid radius-vector is equal to the radius of the sphere. The zones near the critical latitudes are characterized by a high density of faults in the Earth's crust and the manifestation of certain geological peculiarities (hot-spot distribution, large ore deposit distribution, etc.). The existence of active faults has led to an emanation of deep fluids, which created the good

  10. Distribution of large-earthquake input energy in viscous damped outrigger structures

    NARCIS (Netherlands)

    Morales Beltran, M.G.; Turan, Gursoy; Yildirim, Umut

    2017-01-01

    This article provides an analytical framework to assess the distribution of seismic energy in outrigger structures equipped with viscous dampers. The principle of damped outriggers for seismic control applications rests on the assumption that the total earthquake energy will be absorbed by the

  11. The Kresna earthquake of 1904 in Bulgaria

    Energy Technology Data Exchange (ETDEWEB)

    Ambraseys, N. [Imperial College of Science, London (United Kingdom). Technology and Medicine, Dept. of Civil Engineering

    2001-02-01

    The Kresna earthquake of 1904 in Bulgaria is one of the largest shallow 20th-century events on land in the Balkans. This event, which was preceded by a large foreshock, has hitherto been assigned a range of magnitudes up to M{sub s} = 7.8, but a reappraisal of the instrumental data yields a much smaller value of M{sub s} = 7.2, and a re-assessment of the intensity distribution suggests 7.1. Thus both instrumental and macroseismic data appear consistent with a magnitude that is also compatible with the fault segmentation and local morphology of the region, which cannot accommodate shallow events much larger than about 7.0. The relatively large size of the main shock suggests surface faulting, but the available field evidence is insufficient to establish the dimensions, attitude and amount of dislocation, except perhaps in the vicinity of Krupnik. This downsizing of the Kresna earthquake has important consequences for tectonics and earthquake hazard estimates in the Balkans.

  12. Spatiotemporal distribution of Oklahoma earthquakes: Exploring relationships using a nearest-neighbor approach

    Science.gov (United States)

    Vasylkivska, Veronika S.; Huerta, Nicolas J.

    2017-07-01

    Determining the spatiotemporal characteristics of natural and induced seismic events offers the opportunity to gain new insights into why these events occur. Linking seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. A detailed study of the Oklahoma earthquake catalog's inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is in stark contrast to California (also known for induced seismicity), where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty of understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
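
    One common form of the nearest-neighbor metric for earthquake clustering is the space-time-magnitude distance of Zaliapin and Ben-Zion; the sketch below uses that form with illustrative parameter values and planar coordinates, and is not claimed to be the exact formulation of this study.

```python
import numpy as np

# eta_ij = t_ij * r_ij**d_f * 10**(-b * m_i), where event i is a
# candidate parent of a later event j, d_f is the fractal dimension of
# epicentres and b the Gutenberg-Richter slope (empirical parameters).
def nearest_neighbor_parents(times, xs, ys, mags, d_f=1.6, b=1.0):
    n = len(times)
    parents = np.full(n, -1)
    etas = np.full(n, np.inf)
    for j in range(1, n):
        for i in range(j):
            if times[i] >= times[j]:
                continue
            t_ij = times[j] - times[i]
            r_ij = np.hypot(xs[j] - xs[i], ys[j] - ys[i])
            # Guard against exactly collocated epicentres.
            eta = t_ij * max(r_ij, 1e-6) ** d_f * 10.0 ** (-b * mags[i])
            if eta < etas[j]:
                etas[j], parents[j] = eta, i
    return parents, etas

# A large early event followed closely in space-time by small ones:
# the small events should link to the large event as their parent,
# while a distant late event is only weakly linked (large eta).
t = np.array([0.0, 0.1, 0.2, 50.0])
x = np.array([0.0, 0.01, 0.02, 5.0])
y = np.zeros(4)
m = np.array([5.0, 2.0, 2.0, 2.0])
parents, etas = nearest_neighbor_parents(t, x, y, m)
```

Thresholding the eta values then separates clustered (aftershock-like) pairs from background seismicity, which is the kind of classification the Oklahoma analysis builds on.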

  13. Gravity distribution characteristics and their relationship with the distribution of earthquakes and tectonic units in the North–South seismic belt, China

    Directory of Open Access Journals (Sweden)

    Guiju Wu

    2015-05-01

    The North–South Seismic Belt (NSSB) is a Chinese tectonic boundary with a very complex structure, showing sharp changes in several geophysical field characteristics. To study these characteristics and their relationship with the distribution of earthquakes and faults in the study area, we first analyze the spatial gravity anomaly to obtain the Bouguer gravity anomaly (EGM2008 BGA) and the regional gravity survey Bouguer gravity anomaly. Next, we determine the Moho depth and crustal thickness of the study area using interface inversion, with control points derived from the seismic and magnetotelluric sounding profiles acquired in recent years. In this paper, we summarize the relief, trend, Moho gradient, and crustal nature, in addition to their relationship with the distribution of earthquakes and faults in the study area. The findings show that earthquakes with magnitudes greater than Ms7.0 are mainly distributed along the belts of Moho and Bouguer anomaly variation and along faults. The results of the study are important for future research on tectonic characteristics, geological and geophysical surveys, and seismicity patterns.

  14. Estimation of Slip Distribution of the 2007 Bengkulu Earthquake from GPS Observation Using Least Squares Inversion Method

    Directory of Open Access Journals (Sweden)

    Moehammad Awaluddin

    2012-07-01

    Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the Bengkulu earthquake of September 12, 2007. A maximum horizontal displacement of 2.11 m was observed at the PRKB station, while the vertical component at the BSAT station was uplifted by a maximum of 0.73 m and the vertical component at the LAIS station subsided by 0.97 m. Adding constraints to the inversion of the Bengkulu earthquake slip distribution from GPS observations helps solve the otherwise under-determined least squares problem. Checkerboard tests were performed to guide the weighting of these constraints. The inversion of the Bengkulu earthquake slip distribution yielded an optimal slip distribution with a smoothing-constraint weight of 0.001 and the slip constrained to zero at the edge of the earthquake rupture area. The maximum coseismic slip of the optimal inversion was 5.12 m, in the lower area of the PRKB and BSAT stations. The seismic moment calculated from the optimal slip distribution was 7.14 × 10²¹ Nm, which is equivalent to a magnitude of 8.5.
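
    The quoted equivalence between seismic moment and magnitude can be checked with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m; the function name below is illustrative.

```python
import math

def moment_magnitude(m0_nm):
    """Moment magnitude from seismic moment (N·m), Hanks-Kanamori (1979)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Seismic moment from the slip inversion quoted in the abstract.
mw = moment_magnitude(7.14e21)   # evaluates to about 8.5
```

This confirms that a moment of 7.14 × 10²¹ Nm corresponds to the Mw 8.5 reported for the 2007 Bengkulu earthquake.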

  15. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
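
    The moment comparison described above can be illustrated with scipy.stats: for the same mean count, the negative-binomial distribution (NBD) is overdispersed relative to the Poisson, with larger variance and skewness. The parameter values below are illustrative, not fitted to the catalogues.

```python
from scipy import stats

mean_rate = 10.0

# Poisson with rate mu: variance = mu, skewness = mu**-0.5.
p_var, p_skew = stats.poisson.stats(mean_rate, moments='vs')

# NBD parametrized by (n, p); mean = n*(1-p)/p. Choosing p = 0.5 keeps
# the mean equal to the Poisson rate while inflating the variance,
# which is how the second NBD parameter captures clustering.
n, p = 10.0, 0.5
nb_mean, nb_var, nb_skew = stats.nbinom.stats(n, p, moments='mvs')
```

Comparing such theoretical variance, skewness and kurtosis values with the empirical moments of catalogue counts is exactly the diagnostic the abstract describes for rejecting the Poisson assumption.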

  16. Limitation of the Predominant-Period Estimator for Earthquake Early Warning and the Initial Rupture of Earthquakes

    Science.gov (United States)

    Yamada, T.; Ide, S.

    2007-12-01

    Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how quickly we can estimate the final size of an earthquake after we observe the ground motion. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS). It calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P-wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that the values of τpmax have upper and lower limits. For larger earthquakes whose source durations are longer than TW, the values of τpmax have an upper limit which depends on TW. On the other hand, the values for smaller earthquakes have a lower limit which is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that the earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.

  17. A model of seismic focus and related statistical distributions of earthquakes

    International Nuclear Information System (INIS)

    Apostol, Bogdan-Felix

    2006-01-01

    A growth model for accumulating seismic energy in a localized seismic focus is described, which introduces a fractional parameter r on geometrical grounds. The model is employed to derive a power-type law for the statistical distribution in energy, where the parameter r contributes to the exponent, as well as the corresponding time and magnitude distributions for earthquakes. The accompanying seismic activity of foreshocks and aftershocks is discussed in connection with this approach, based on Omori distributions, and the rate of released energy is derived.

  18. The Impact of The Energy-time Distribution of The Ms 7.0 Lushan Earthquake on Slope Dynamic Reliability

    Science.gov (United States)

    Liu, X.; Griffiths, D.; Tang, H.

    2013-12-01

    This paper introduces a new method to evaluate the area-specific potential risk of earthquake-induced slope failures, using the Lushan earthquake as an example. The overall framework consists of three parts. First, the energy-time distribution of the earthquake was analyzed. The Ms 7.0 Lushan earthquake occurred on April 20, 2013. The epicenter was located in Lushan County, Sichuan province, the same province that was heavily impacted by the 2008 Ms 8.0 Wenchuan earthquake. Compared with the Wenchuan earthquake, the strong-motion records of the Lushan earthquake are much richer. Some earthquake observatories are very close to the epicenter, and the closest strong-motion record was collected at a distance of just 34.8 km from the epicenter. This advantage stems from the fact that routine strong-motion observation in this area was greatly enhanced after the Wenchuan earthquake. The energy-time distribution of the Lushan earthquake waves was obtained from 123 groups of three-component acceleration records of the 40-second mainshock. When the 5%-85% energy section is considered, the significant duration starts at the first 3.0 to 4.0 seconds and ends at the first 13.0 to 15.0 seconds. If instead an acceleration threshold of 0.15 g is considered, the bracketed duration starts at the first 4.0 to 5.0 seconds and ends at the first 13.0 to 14.0 seconds. Second, a new reliability analysis method was proposed which considers the energy-time distribution of the earthquake. Using the significant duration and bracketed duration as statistical windows, the advantages of considering the energy-time distribution can be exploited. In this method, the dynamic critical slip surfaces and their factors of safety (FOS) are described as time series.
The slope reliability evaluation criteria, such as dynamic
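
    The 5%-85% significant-duration computation described above can be sketched via the normalized cumulative energy (Husid) curve, which is proportional to the running integral of acceleration squared; the synthetic record below is an illustrative assumption, not a Lushan strong-motion trace.

```python
import numpy as np

def significant_duration(acc, dt, lo=0.05, hi=0.85):
    """Start and end times of the window containing the lo-hi fraction
    of the record's total energy (Arias-intensity-style definition)."""
    husid = np.cumsum(acc**2) * dt
    husid /= husid[-1]                     # normalized energy build-up
    t_lo = np.searchsorted(husid, lo) * dt
    t_hi = np.searchsorted(husid, hi) * dt
    return t_lo, t_hi

# Synthetic record: weak coda over 40 s with a strong burst between
# 4 s and 14 s, so the 5%-85% window should fall inside the burst.
dt = 0.01
t = np.arange(0.0, 40.0, dt)
acc = 0.02 * np.sin(40.0 * t)
burst = (t >= 4.0) & (t < 14.0)
acc[burst] += 1.0 * np.sin(25.0 * t[burst])
t_start, t_end = significant_duration(acc, dt)
```

The bracketed duration differs only in the criterion: instead of energy fractions, it takes the first and last samples whose absolute acceleration exceeds a fixed threshold such as 0.15 g.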

  19. Widespread ground motion distribution caused by rupture directivity during the 2015 Gorkha, Nepal earthquake

    Science.gov (United States)

    Koketsu, Kazuki; Miyake, Hiroe; Guo, Yujia; Kobayashi, Hiroaki; Masuda, Tetsu; Davuluri, Srinagesh; Bhattarai, Mukunda; Adhikari, Lok Bijaya; Sapkota, Soma Nath

    2016-06-01

    The ground motion and damage caused by the 2015 Gorkha, Nepal earthquake can be characterized by their widespread distributions to the east. Evidence from strong ground motions, regional acceleration duration, and teleseismic waveforms indicates that rupture directivity contributed significantly to these distributions. This phenomenon has been thought to occur only if a strike-slip or dip-slip rupture propagates to a site in the along-strike or updip direction, respectively. However, even though the earthquake was a dip-slip faulting event and its source fault strike was nearly eastward, evidence for rupture directivity is found in the eastward direction. Here, we explore the reasons for this apparent inconsistency by performing a joint source inversion of seismic and geodetic datasets, and conducting ground motion simulations. The results indicate that the earthquake occurred on the underthrusting Indian lithosphere, with a low dip angle, and that the fault rupture propagated in the along-strike direction at a velocity just slightly below the S-wave velocity. This low dip angle and fast rupture velocity produced rupture directivity in the along-strike direction, which caused the widespread ground motion distribution and significant damage extending far eastwards, from central Nepal to Mount Everest.

  20. Earthquake Energy Distribution along the Earth Surface and Radius

    International Nuclear Information System (INIS)

    Varga, P.; Krumm, F.; Riguzzi, F.; Doglioni, C.; Suele, B.; Wang, K.; Panza, G.F.

    2010-07-01

    The global earthquake catalog of seismic events with Mw ≥ 7.0, for the time interval from 1950 to 2007, shows that the depth distribution of earthquake energy release is not uniform. 90% of the total earthquake energy budget is dissipated in the first ∼30 km, whereas most of the residual budget is radiated at the lower boundary of the transition zone (410 km - 660 km), above the upper-lower mantle boundary. The upper border of the transition zone, at around 410 km depth, is not marked by significant seismic energy release. This points to a non-dominant role of the slabs in the energy budget of plate tectonics. Earthquake number and energy release, although not well correlated, show a decrease toward the polar areas when analysed with respect to latitude. Moreover, the radiated energy has its highest peak close to (±5°) the so-called tectonic equator defined by Crespi et al. (2007), which is inclined about 30° with respect to the geographic equator. At the same time, a clear axial coordination of the radiated seismic energy is demonstrated, with maxima at latitudes close to the critical ones (±45°). This indicates the presence of external forces that influence seismicity and is consistent with the fact that the Gutenberg-Richter law is linear, for events with M>5, only when the whole Earth's seismicity is considered. These data are consistent with an astronomical control on plate tectonics, i.e., the despinning of the Earth's rotation (the slowing of its angular velocity) caused primarily by tidal friction due to the Moon. The mutual position of the shallow and ∼660 km deep earthquake energy sources along subduction zones allows us to conclude that they are connected with the same slab along the W-directed subduction zones, but may be disconnected along the opposite E-NE-directed slabs, with the deep seismicity there controlled by other mechanisms. (author)

  1. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks

    International Nuclear Information System (INIS)

    Zhuang Jiancang; Ogata, Yosihiko

    2006-01-01

    The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases: when the process is subcritical, critical, and supercritical. One direct use of these probability distributions is to evaluate the probability of an earthquake being a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases as the magnitude of the background event increases. These results, obtained from a complicated clustering model in which the characteristics of background events and triggered events differ, are consistent with the results obtained in [Ogata et al., Geophys. J. Int. 127, 17 (1996)] using the conventional single-linked cluster declustering method.

  2. Properties of the probability distribution associated with the largest event in an earthquake cluster and their implications to foreshocks.

    Science.gov (United States)

    Zhuang, Jiancang; Ogata, Yosihiko

    2006-04-01

    The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability that an earthquake is a foreshock, and the magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have one or more larger descendants is found to be as high as about 15%. When the differences between background events and triggered events in the behavior of triggering children are considered, a background event has a probability of about 8% of being a foreshock. This probability decreases when the magnitude of the background event increases. These results, obtained from a complicated clustering model, where the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.

  3. Underestimation of Microearthquake Size by the Magnitude Scale of the Japan Meteorological Agency: Influence on Earthquake Statistics

    Science.gov (United States)

    Uchide, Takahiko; Imanishi, Kazutoshi

    2018-01-01

    Magnitude scales based on the amplitude of seismic waves, including the Japan Meteorological Agency magnitude scale (Mj), are commonly used in routine processing. The moment magnitude scale (Mw), however, is more physics based and is able to evaluate any type and size of earthquake. This paper addresses the relation between Mj and Mw for microearthquakes. The relative moment magnitudes among earthquakes are well constrained by multiple spectral ratio analyses. The results for the events in the Fukushima Hamadori and northern Ibaraki prefecture areas of Japan imply that Mj is significantly and systematically smaller than Mw for microearthquakes. The Mj-Mw curve has slopes of 1/2 and 1 for small and large values of Mj, respectively; for example, Mj = 1.0 corresponds to Mw = 2.0. A simple numerical simulation implies that this is due to anelastic attenuation and to recording with a finite sampling interval. The underestimation affects earthquake statistics. The completeness magnitude, Mc, below which the magnitude-frequency distribution deviates from the Gutenberg-Richter law, is effectively lower for Mw than for Mj, once the systematic difference between Mj and Mw is taken into account. The b values of the Gutenberg-Richter law are larger for Mw than for Mj. As the b values for Mj and Mw are well correlated, qualitative argument using b values is not affected. While the estimated b values for Mj are below 1.5, those for Mw often exceed 1.5. This may affect the physical implication of the seismicity.
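The reported anchor points (slopes of 1/2 and 1, and Mj = 1.0 corresponding to Mw = 2.0) pin down a piecewise-linear approximation of the Mj-Mw curve, and continuity then places the crossover near Mj = 3. This is an illustrative reading of the abstract, not the authors' fitted relation:

```python
def mw_from_mj(mj, crossover=3.0):
    """Piecewise-linear sketch of the Mj-Mw relation described in the text:
    slope 1/2 below the crossover (so Mj = 1.0 -> Mw = 2.0), slope 1 above,
    continuous at the crossover (Mj = Mw = 3.0). Crossover inferred, not fitted."""
    if mj < crossover:
        return 0.5 * mj + 1.5   # microearthquake branch: Mj underestimates Mw
    return mj                   # large events: the two scales agree

# Consequence for Gutenberg-Richter b-values: compressing the magnitude axis
# by a factor of 1/2 doubles the apparent slope, so in the microearthquake
# range b(Mw) = 2 * b(Mj), e.g. b_Mj = 0.8 -> b_Mw = 1.6.
def b_mw_from_b_mj(b_mj):
    return 2.0 * b_mj
```

The doubling of b is consistent with the abstract's observation that b values for Mj stay below 1.5 while those for Mw often exceed it.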

  4. Earthquake Recurrence and the Resolution Potential of Tectono‐Geomorphic Records

    KAUST Repository

    Zielke, Olaf

    2018-04-17

    A long‐standing debate in active tectonics addresses how slip is accumulated through space and time along a given fault or fault section. This debate is in part still ongoing because of the lack of sufficiently long instrumental data that may constrain the recurrence characteristics of surface‐rupturing earthquakes along individual faults. Geomorphic and stratigraphic records are used instead to constrain this behavior. Although geomorphic data frequently indicate slip accumulation via quasicharacteristic same‐size offset increments, stratigraphic data indicate that earthquake timing follows a quasirandom distribution. Assuming that both observations are valid within their respective frameworks, I address here which recurrence model is able to reproduce this seemingly contradictory behavior. I further address how aleatory offset variability and epistemic measurement uncertainty affect our ability to resolve single‐earthquake surface slip and along‐fault slip‐accumulation patterns. I use a statistical model that samples probability density functions (PDFs) for geomorphic marker formation (storm events), marker displacement (surface‐rupturing earthquakes), and offset measurement, generating tectono‐geomorphic catalogs to investigate which PDF combination consistently reproduces the above‐mentioned field observations. Doing so, I find that neither a purely characteristic earthquake (CE) nor a Gutenberg–Richter (GR) earthquake recurrence model is able to consistently reproduce those field observations. A combination of both however, with moderate‐size earthquakes following the GR model and large earthquakes following the CE model, is able to reproduce quasirandom earthquake recurrence times while simultaneously generating quasicharacteristic geomorphic offset increments. Along‐fault slip accumulation is dominated by, but not exclusively linked to, the occurrence of similar‐size large earthquakes. Further, the resolution
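The hybrid recurrence model described above can be sketched as a minimal Monte Carlo catalog: moderate Gutenberg-Richter events carry small, variable surface offsets, while occasional large events carry quasicharacteristic slip. All rates and slip parameters below are illustrative assumptions, not values from the study:

```python
import random

random.seed(7)

def synthetic_catalog(n_events=500, char_slip_m=3.0):
    """Toy tectono-geomorphic catalog combining a Gutenberg-Richter
    population of moderate events (small exponential offsets) with
    quasicharacteristic large events (offsets clustered near char_slip_m).
    The 10% large-event fraction and slip scales are assumed, for illustration."""
    slips = []
    for _ in range(n_events):
        if random.random() < 0.1:                    # large, characteristic event
            slips.append(random.gauss(char_slip_m, 0.3))
        else:                                        # GR-style moderate event
            slips.append(random.expovariate(1.0 / 0.2))
    return slips

slips = synthetic_catalog()
# The geomorphically visible offsets are dominated by the quasicharacteristic
# increments: large offsets cluster tightly around the characteristic slip.
large = [s for s in slips if s > 1.5]
mean_large = sum(large) / len(large)
```

Event timing in such a catalog is quasirandom (every event is drawn independently), yet the measurable offsets cluster at a characteristic size, which is the apparent contradiction the abstract resolves.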

  5. Body size distribution of the dinosaurs.

    Directory of Open Access Journals (Sweden)

    Eoin J O'Gorman

    Full Text Available The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  6. Body size distribution of the dinosaurs.

    Science.gov (United States)

    O'Gorman, Eoin J; Hone, David W E

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  7. Body Size Distribution of the Dinosaurs

    Science.gov (United States)

    O’Gorman, Eoin J.; Hone, David W. E.

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size. PMID:23284818

  8. Body Size Distribution of the Dinosaurs

    OpenAIRE

    O'Gorman, Eoin J.; Hone, David W. E.

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutiona...

  9. Fault roughness and strength heterogeneity control earthquake size and stress drop

    KAUST Repository

    Zielke, Olaf

    2017-01-13

    An earthquake's stress drop is related to the frictional breakdown during sliding and constitutes a fundamental quantity of the rupture process. High-speed laboratory friction experiments that emulate the rupture process imply stress drop values that greatly exceed those commonly reported for natural earthquakes. We hypothesize that this stress drop discrepancy is due to fault-surface roughness and strength heterogeneity: an earthquake's moment release and its recurrence probability depend not only on stress drop and rupture dimension but also on the geometric roughness of the ruptured fault and the location of failing strength asperities along it. Using large-scale numerical simulations for earthquake ruptures under varying roughness and strength conditions, we verify our hypothesis, showing that smoother faults may generate larger earthquakes than rougher faults under identical tectonic loading conditions. We further discuss the potential impact of fault roughness on earthquake recurrence probability. This finding also provides important information for seismic hazard analysis.
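The qualitative effect (smoother faults host larger ruptures) can be illustrated with a one-dimensional arrest toy, far simpler than the paper's large-scale rupture simulations: a rupture advances cell by cell until it meets a strength asperity exceeding the driving stress, and rougher faults (larger strength scatter) arrest it sooner. All numbers are assumptions chosen for illustration:

```python
import random

random.seed(1)

def rupture_length(amplitude, base_strength=0.95, stress=1.0, n_cells=10000):
    """Advance a rupture cell by cell; stop at the first cell whose random
    strength (base +/- amplitude/2 uniform scatter) exceeds the driving
    stress. Returns the number of ruptured cells."""
    for i in range(n_cells):
        strength = base_strength + amplitude * (random.random() - 0.5)
        if strength > stress:
            return i
    return n_cells

def mean_length(amplitude, trials=5000):
    return sum(rupture_length(amplitude) for _ in range(trials)) / trials

smooth = mean_length(0.2)   # small strength scatter: a "smooth" fault
rough = mean_length(0.6)    # large strength scatter: a "rough" fault
```

With these parameters the arrest probability per cell is 0.25 on the smooth fault and about 0.42 on the rough one, so mean rupture length (and hence moment) is larger on the smooth fault under identical loading, mirroring the paper's conclusion in caricature.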

  10. Probabilistic model to forecast earthquakes in the Zemmouri (Algeria) seismoactive area on the basis of moment magnitude scale distribution functions

    Science.gov (United States)

    Baddari, Kamel; Makdeche, Said; Bellalem, Fouzi

    2013-02-01

    Based on the moment magnitude scale, a probabilistic model was developed to predict the occurrences of strong earthquakes in the seismoactive area of Zemmouri, Algeria. Firstly, the distributions of earthquake magnitudes M_i were described using the distribution function F_0(m), which adjusts the magnitudes, considered as independent random variables. Secondly, the obtained result, i.e., the distribution function F_0(m) of the variables M_i, was used to deduce the distribution functions G(x) and H(y) of the variables Y_i = log M_0,i and Z_i = M_0,i, where (Y_i) and (Z_i) are independent. Thirdly, forecasts of the seismic moments of future earthquakes in the studied area are given.
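The step from the magnitude distribution F_0(m) to the distribution of Y = log M_0 follows from the standard Hanks-Kanamori moment-magnitude definition, Mw = (2/3)(log10 M_0 - 9.1) with M_0 in N·m, so log10 M_0 = 1.5 Mw + 9.1. A sketch (the Gutenberg-Richter form of F_0 and its parameters are assumptions for illustration):

```python
import math
import random

random.seed(3)

def log_moment_from_mw(mw):
    """Hanks-Kanamori: Mw = (2/3)*(log10(M0) - 9.1), M0 in N*m,
    inverted to log10(M0) = 1.5*Mw + 9.1."""
    return 1.5 * mw + 9.1

# If magnitudes follow a Gutenberg-Richter (exponential) law with b = 1
# above m_min, then Y = log10(M0) is the same exponential rescaled by 1.5.
b, m_min = 1.0, 4.0
mags = [m_min + random.expovariate(b * math.log(10.0)) for _ in range(50000)]
ys = [log_moment_from_mw(m) for m in mags]

mean_excess_m = sum(mags) / len(mags) - m_min
mean_excess_y = sum(ys) / len(ys) - (1.5 * m_min + 9.1)   # exactly 1.5x larger
```

Because the transformation is linear, the shape of G follows immediately from F_0; only the scale changes by the factor 1.5.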

  11. Spatial Distribution of earthquakes off the coast of Fukushima Two Years after the M9 Earthquake: the Southern Area of the 2011 Tohoku Earthquake Rupture Zone

    Science.gov (United States)

    Yamada, T.; Nakahigashi, K.; Shinohara, M.; Mochizuki, K.; Shiobara, H.

    2014-12-01

    Huge earthquakes cause vast changes in the stress field around their rupture zones, and many aftershocks and other related geophysical phenomena, such as geodetic movements, have been observed. It is important to determine the spatiotemporal distribution of seismicity during the relaxation process in order to understand the cycle of giant earthquakes. In this study, we focus on the southern rupture area of the 2011 Tohoku earthquake (M9.0). The seismicity rate there remains high compared with that before the 2011 earthquake. Many studies using ocean bottom seismometers (OBSs) have been conducted since soon after the 2011 Tohoku earthquake in order to characterize the aftershock activity precisely. Here we present one of these studies, off the coast of Fukushima, on the southern part of the rupture area of the 2011 Tohoku earthquake. We deployed 4 broadband-type OBSs (BBOBSs) and 12 short-period-type OBSs (SOBSs) in August 2012. Another 4 BBOBSs equipped with absolute pressure gauges, together with 20 SOBSs, were added in November 2012. We recovered 36 OBSs, including 8 BBOBSs, in November 2013. We selected 1,000 events in the vicinity of the OBS network based on a hypocenter catalog published by the Japan Meteorological Agency, and extracted the data after correcting for each instrument's internal clock. P and S wave arrival times, P wave polarity, and maximum amplitude were picked manually on a computer display. We assumed a one-dimensional velocity structure based on the result of an active-source experiment across our network, and applied station corrections to remove ambiguity in the assumed structure. We then adopted a maximum-likelihood estimation technique and calculated the hypocenters. The results show intense activity near the Japan Trench, while a quiet seismic zone lies between the trench zone and the landward zone of high activity.
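The location step can be illustrated with a toy grid search: the study used a 1D velocity structure and maximum-likelihood estimation, but a homogeneous half-space with the origin time eliminated analytically already shows the principle. Velocity, station layout, and source below are assumptions for illustration:

```python
import math

V_P = 6.0  # assumed homogeneous P velocity, km/s

# Hypothetical station coordinates (x, y, z in km) and a "true" source.
stations = [(0.0, 0.0, 0.0), (30.0, 0.0, 0.0), (0.0, 30.0, 0.0),
            (30.0, 30.0, 0.0), (15.0, -20.0, 0.0)]
true_src, true_t0 = (12.0, 18.0, 10.0), 5.0

def travel_time(src, sta):
    return math.dist(src, sta) / V_P

# Synthetic noise-free arrival times generated from the "true" source.
obs = [true_t0 + travel_time(true_src, s) for s in stations]

def misfit(src):
    """RMS arrival-time residual, with origin time solved for as the mean
    of the per-station (observed - predicted travel time) values."""
    t = [o - travel_time(src, s) for o, s in zip(obs, stations)]
    t0 = sum(t) / len(t)
    return math.sqrt(sum((x - t0) ** 2 for x in t) / len(t))

# Coarse grid search over candidate hypocenters.
best = min(((x, y, z) for x in range(0, 31, 2)
                      for y in range(0, 31, 2)
                      for z in range(2, 21, 2)),
           key=misfit)
```

Real processing replaces the homogeneous model with the layered structure from the active-source experiment and the grid search with maximum-likelihood optimization, but the residual-minimization idea is the same.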

  12. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect of many applications in the process industry. Size distribution is often related to final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer as well as reaction rates, which depend on the interfacial area between the different phases, or to the assessment of yield stresses of polycrystalline metal/alloy samples. The experimental determination of such distributions often involves laborious sampling procedures, and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of a size distribution, according to specific requirements defined a priori. This methodology can be adopted regardless of the measurement technique used. (paper)
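The flavor of such an a-priori requirement can be shown with the classic inferential-statistics sizing formula (a textbook sketch, not the paper's specific tool): choose the number of samples n so that the confidence interval on the mean of the measured size distribution has half-width at most E.

```python
import math

def required_samples(sigma, tolerance, z=1.96):
    """Samples needed so a z-level confidence interval on the mean has
    half-width <= tolerance:  n >= (z * sigma / tolerance)^2.
    z = 1.96 corresponds to 95% confidence for a normal sampling distribution."""
    return math.ceil((z * sigma / tolerance) ** 2)

# Example with assumed numbers: droplet sizes with standard deviation 12 um,
# target half-width 2 um at 95% confidence.
n = required_samples(sigma=12.0, tolerance=2.0)   # -> 139 samples
```

In practice sigma is itself estimated from a pilot sample, which is where the rigorous treatment in the paper goes beyond this one-line rule.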

  13. Application of τc*Pd in earthquake early warning

    Science.gov (United States)

    Huang, Po-Lun; Lin, Ting-Li; Wu, Yih-Min

    2015-03-01

    Rapid assessment of the damage potential and size of an earthquake at a recording station is in high demand for onsite earthquake early warning. We study the use of τc*Pd to estimate earthquake size, using 123 events recorded by the borehole stations of KiK-net in Japan. The measure of earthquake size determined by τc*Pd is more closely related to the damage potential. We find that τc*Pd provides another parameter with which to measure the size of an earthquake, and a threshold for warning of strong ground motion.
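The two ingredients are standard onsite early-warning parameters: τc = 2π·sqrt(∫u² dt / ∫v² dt), computed over the initial seconds of the P wave from displacement u and velocity v, and Pd, the peak displacement in the same window. The sketch below verifies the implementation on a synthetic sinusoid, whose τc should equal its period (the signal itself is an assumption, not KiK-net data):

```python
import math

def tau_c_and_pd(u, v, dt):
    """tau_c = 2*pi*sqrt(int u^2 dt / int v^2 dt); Pd = max |u|,
    both computed over the supplied (initial P-wave) window."""
    num = sum(x * x for x in u) * dt
    den = sum(x * x for x in v) * dt
    return 2.0 * math.pi * math.sqrt(num / den), max(abs(x) for x in u)

# Synthetic check: a pure sinusoid of period T over a 3 s window.
T, dt, n = 1.0, 0.001, 3000
ts = [i * dt for i in range(n)]
u = [0.05 * math.sin(2.0 * math.pi * t / T) for t in ts]              # displacement, m
v = [0.05 * (2.0 * math.pi / T) * math.cos(2.0 * math.pi * t / T) for t in ts]  # velocity
tau_c, pd = tau_c_and_pd(u, v, dt)   # tau_c ~ 1.0 s, pd ~ 0.05 m
```

For real records u and v come from the first ~3 s after the P arrival, and the product τc*Pd is compared against an empirically calibrated warning threshold.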

  14. Latitudinal distribution of earthquakes in the Andes and its peculiarity

    Directory of Open Access Journals (Sweden)

    B. W. Levin

    2009-12-01

    Full Text Available In the last decade, there has been growing interest in problems related to the search for global spatiotemporal regularities in the distribution of seismic events on the Earth. The worldwide ISC catalogs were used to study the spatial and temporal distribution of earthquakes (EQ) in the Pacific part of South America. We extracted all EQ from 1964 to 2004 with Mb>=4.0. The total number of events under study is near 30,000. The entire set of events was divided into six magnitude ranges (MR): 4.0<=Mb<4.5; 4.5<=Mb<5.0; 5.0<=Mb<5.5; 5.5<=Mb<6.0; 6.0<=Mb<6.5; and 6.5<=Mb. Further analysis was performed separately for each MR. The latitude distributions of the EQ number for all MR were studied. The whole region was divided into several latitudinal intervals (the size of each interval was either 5° or 10°). The number of events in each latitudinal interval was normalized twice. After normalization we obtained the relative number of seismic events generated per kilometer of plate boundary. The maximum of seismic activity in the Pacific part of South America is situated in the latitude interval 20°–30° S. A comparative analysis was performed for the latitude distributions of the EQ number and the released EQ energy. The distributions of EQ hypocenter location in latitude and in depth were then studied. The EQ sources at high latitudes (up to 35° S) are located at depths (H) between 20 and 80 km. It was shown that the full depth interval in each latitudinal belt generally divides into three parts (clusters) with clear-cut separation boundaries: K1 with 0<=H<120 km, K2 with 120<=H<500 km, and K3 with H>=500 km.

  15. The 2005 Tarapaca, Chile, Intermediate-depth Earthquake: Evidence of Heterogeneous Fluid Distribution Across the Plate?

    Science.gov (United States)

    Kuge, K.; Kase, Y.; Urata, Y.; Campos, J.; Perez, A.

    2008-12-01

    The physical mechanism of intermediate-depth earthquakes remains unsolved; dehydration embrittlement in subducting plates is one candidate. An earthquake of Mw 7.8 occurred at a depth of 115 km beneath Tarapaca, Chile. In this study, we suggest that the earthquake rupture can be attributed to heterogeneous fluid distribution across the subducting plate. The distribution of aftershocks suggests that the earthquake occurred on the subhorizontal fault plane. By modeling regional waveforms, we determined the spatiotemporal distribution of moment release on the fault plane, testing different suites of velocity models and hypocenters. Two patches of high slip were robustly obtained, although their geometry tends to vary. We tested the results separately by computing synthetic teleseismic P and pP waveforms. Observed P waveforms are generally well modeled, whereas the two pulses of the observed pP require that the two patches lie in the WNW-ESE direction. From the selected moment-release evolution, a dynamic rupture model was constructed by means of the method of Mikumo et al. (1998). The model shows two patches of high dynamic stress drop. Notable is a region of negative stress drop between the two patches. This was required so that the region could lack wave radiation yet still propagate rupture from the first patch to the second. We found from teleseismic P waves that the radiation efficiency of the earthquake is relatively small, which can support the existence of negative stress drop during the rupture. The heterogeneous distribution of stress drop that we found can be caused by fluid. The T-P condition of dehydration explains the locations of double seismic zones (e.g. Hacker et al., 2003). The distance between the two patches of high stress drop agrees with the distance between the upper and lower layers of the double seismic zone observed to the south (Rietbrock and Waldhauser, 2004). The two patches can be parts of the double seismic zone, indicating the existence of fluid from dehydration.

  16. INITIAL PLANETESIMAL SIZES AND THE SIZE DISTRIBUTION OF SMALL KUIPER BELT OBJECTS

    International Nuclear Information System (INIS)

    Schlichting, Hilke E.; Fuentes, Cesar I.; Trilling, David E.

    2013-01-01

    The Kuiper Belt is a remnant from the early solar system and its size distribution contains many important constraints that can be used to test models of planet formation and collisional evolution. We show, by comparing observations with theoretical models, that the observed Kuiper Belt size distribution is well matched by coagulation models, which start with an initial planetesimal population with radii of about 1 km, and subsequent collisional evolution. We find that the observed size distribution above R ∼ 30 km is primordial, i.e., it has not been modified by collisional evolution over the age of the solar system, and that the size distribution below R ∼ 30 km has been modified by collisions and that its slope is well matched by collisional evolution models that use published strength laws. We investigate in detail the resulting size distribution of bodies ranging from 0.01 km to 30 km and find that its slope changes several times as a function of radius before approaching the expected value for an equilibrium collisional cascade of material-strength-dominated bodies for R ≲ 0.1 km. Compared to a single power-law size distribution that would span the whole range from 0.01 km to 30 km, we find in general a strong deficit of bodies around R ∼ 10 km and a strong excess of bodies around 2 km in radius. This deficit and excess of bodies are caused by the planetesimal size distribution left over from the runaway growth phase, which left most of the initial mass in small planetesimals while only a small fraction of the total mass was converted into large protoplanets. This excess mass in small planetesimals leaves a permanent signature in the size distribution of small bodies that is not erased after 4.5 Gyr of collisional evolution. Observations of the small Kuiper Belt Object (KBO) size distribution can therefore test whether large KBOs grew by runaway growth and can constrain the initial planetesimal sizes. We find that results from recent KBO
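The primordial-above-break, collisional-below-break structure can be captured by a broken power law for the cumulative count N(>R), continuous at the break radius of ~30 km. The slope values and normalization below are illustrative assumptions, not the paper's fitted values:

```python
def cumulative_count(r_km, r_break=30.0, n_break=1e5, q_big=6.0, q_small=3.5):
    """Cumulative number of bodies larger than r_km for a broken power-law
    differential distribution dN/dR ~ R^(-q), so N(>R) ~ R^(1-q), with a
    steep 'primordial' slope q_big above the break and a shallower
    'collisional' slope q_small below. Continuity is enforced at r_break;
    all parameter values are assumptions for illustration."""
    if r_km >= r_break:
        return n_break * (r_km / r_break) ** (1.0 - q_big)
    return n_break * (r_km / r_break) ** (1.0 - q_small)

above = cumulative_count(30.0)
just_below = cumulative_count(29.999999)   # continuity check at the break
```

The full model in the paper has several further slope changes below 30 km (the 10 km deficit and 2 km excess); a single broken power law is the coarsest caricature of that structure.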

  17. Unimodal tree size distributions possibly result from relatively strong conservatism in intermediate size classes.

    Directory of Open Access Journals (Sweden)

    Yue Bin

    Full Text Available Tree size distributions have long been of interest to ecologists and foresters because they reflect fundamental demographic processes. Previous studies have assumed that size distributions are often associated with population trends or with the degree of shade tolerance. We tested these associations for 31 tree species in a 20 ha plot in the Dinghushan south subtropical forest in China. These species varied widely in growth form and shade tolerance. We used 2005 and 2010 census data from that plot. We found that 23 species had reversed-J-shaped size distributions and eight species had unimodal size distributions in 2005. On average, modal species had lower recruitment rates than reversed-J species, while showing no significant difference in mortality rates, per capita population growth rates, or shade tolerance. We compared the observed size distributions with the equilibrium distributions projected from observed size-dependent growth and mortality. We found that observed distributions generally had the same shape as predicted equilibrium distributions in both unimodal and reversed-J species, but there were statistically significant, important quantitative differences between observed and projected equilibrium size distributions in most species, suggesting that these populations are not at equilibrium and that this forest is changing over time. Almost all modal species had U-shaped size-dependent mortality and/or growth functions, with turning points of both mortality and growth at intermediate size classes close to the peak in the size distribution. These results show that modal size distributions do not necessarily indicate either population decline or shade-intolerance.
Instead, the modal species in our study were characterized by a life history strategy of relatively strong conservatism in an intermediate size class, leading to very low growth and mortality in that size class, and thus to a peak in the size distribution at intermediate sizes.
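The mechanism, low growth and low mortality concentrated in an intermediate size class, can be demonstrated with a toy size-structured renewal chain (not the authors' projection model): individuals grow, stay, or die, and each death is replaced by a recruit in the smallest class. All rates are assumptions chosen to mimic the U-shaped vital-rate functions described above:

```python
def stationary_size_distribution(n_classes=10, slow_class=5, n_iter=5000):
    """Renewal chain: from class s an individual dies (and is replaced by a
    recruit in class 0) with probability m[s], grows to class s+1 with
    probability g[s], and otherwise stays. Low growth AND low mortality at
    slow_class ('demographic conservatism') make individuals accumulate
    there, producing a unimodal stationary distribution."""
    m = [0.10] * n_classes
    g = [0.50] * n_classes
    m[slow_class], g[slow_class] = 0.01, 0.02   # conservative intermediate class
    g[-1] = 0.0                                 # no growth out of the largest class
    pi = [1.0 / n_classes] * n_classes
    for _ in range(n_iter):
        new = [0.0] * n_classes
        for s in range(n_classes):
            new[0] += pi[s] * m[s]                    # death -> recruitment
            if s + 1 < n_classes:
                new[s + 1] += pi[s] * g[s]            # growth
            new[s] += pi[s] * (1.0 - m[s] - g[s])     # stasis
        pi = new
    return pi

pi = stationary_size_distribution()   # peaks at the conservative class
```

The stationary distribution peaks at the low-turnover class because occupancy scales with the holding time 1/(m+g), which is largest exactly where both vital rates dip.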

  18. Assessment of earthquake-induced landslides hazard in El Salvador after the 2001 earthquakes using macroseismic analysis

    Science.gov (United States)

    Esposito, Eliana; Violante, Crescenzo; Giunta, Giuseppe; Ángel Hernández, Miguel

    2016-04-01

    Two strong earthquakes and a number of smaller aftershocks struck El Salvador in the year 2001. The January 13, 2001 earthquake, Mw 7.7, occurred along the Cocos plate, 40 km off El Salvador's southern coast. It resulted in about 1300 deaths and widespread damage, mainly due to massive landsliding. Two of the largest earthquake-induced landslides, Las Barioleras and Las Colinas (about 2×10^5 m^3), produced major damage to buildings and infrastructure and 500 fatalities. A neighborhood in Santa Tecla, west of San Salvador, was destroyed. The February 13, 2001 earthquake, Mw 6.5, occurred 40 km east-southeast of San Salvador. This earthquake caused over 300 fatalities and triggered several landslides over an area of 2,500 km^2, mostly in poorly consolidated volcaniclastic deposits. The La Leona landslide (5-7×10^5 m^3) caused 12 fatalities and extensive damage to the Panamerican Highway. Two very large landslides of 1.5 km^3 and 12 km^3 produced hazardous barrier lakes at Rio El Desague and Rio Jiboa, respectively. More than 16,000 landslides occurred throughout the country after both quakes; most of them occurred in pyroclastic deposits, with volumes of less than 1×10^3 m^3. The present work aims to define the relationship between the earthquake intensity described above and the size and areal distribution of induced landslides, as well as to refine the earthquake intensity in sparsely populated zones by using landslide effects. Landslides triggered by the 2001 seismic sequences provide useful indications for a realistic seismic hazard assessment, providing a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides.

  19. Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake

    Science.gov (United States)

    Hayes, Gavin P.

    2011-01-01

    On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.

  20. Natural Time and Nowcasting Earthquakes: Are Large Global Earthquakes Temporally Clustered?

    Science.gov (United States)

    Luginbuhl, Molly; Rundle, John B.; Turcotte, Donald L.

    2018-02-01

    The objective of this paper is to analyze the temporal clustering of large global earthquakes with respect to natural time, or interevent count, as opposed to regular clock time. To do this, we use two techniques: (1) nowcasting, a new method of statistically classifying seismicity and seismic risk, and (2) time series analysis of interevent counts. We chose the sequences of M_λ ≥ 7.0 and M_λ ≥ 8.0 earthquakes from the global centroid moment tensor (CMT) catalog from 2004 to 2016 for analysis. A significant number of these earthquakes will be aftershocks of the largest events, but no satisfactory method of declustering the aftershocks in clock time is available. A major advantage of using natural time is that it eliminates the need for declustering aftershocks. The event count we utilize is the number of small earthquakes that occur between large earthquakes. The small earthquake magnitude is chosen to be as small as possible, such that the catalog is still complete based on the Gutenberg-Richter statistics. For the CMT catalog, starting in 2004, we found the completeness magnitude to be M_σ ≥ 5.1. For the nowcasting method, the cumulative probability distribution of these interevent counts is obtained. We quantify the distribution using the exponent, β, of the best fitting Weibull distribution; β = 1 for a random (exponential) distribution. We considered 197 earthquakes with M_λ ≥ 7.0 and found β = 0.83 ± 0.08. We considered 15 earthquakes with M_λ ≥ 8.0, but this number was considered too small to generate a meaningful distribution. For comparison, we generated synthetic catalogs of earthquakes that occur randomly with the Gutenberg-Richter frequency-magnitude statistics. We considered a synthetic catalog of 1.97 × 10^5 M_λ ≥ 7.0 earthquakes and found β = 0.99 ± 0.01. The random catalog converted to natural time was also random. We then generated 1.5 × 10^4 synthetic catalogs with 197 M_λ ≥ 7.0 in each catalog and
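The synthetic-catalog control described above can be sketched end to end: draw Gutenberg-Richter magnitudes above the completeness level M 5.1, count small events between successive M ≥ 7.0 events, and estimate the Weibull shape β. The shape estimator below works by matching the coefficient of variation (CV), which for a Weibull distribution decreases monotonically with the shape parameter; this moment-matching step is a simplification of a full Weibull fit, and the catalog size is an assumption:

```python
import math
import random

random.seed(11)
LN10 = math.log(10.0)

def weibull_shape_from_cv(cv, lo=0.2, hi=5.0):
    """Invert the Weibull coefficient of variation
    CV(k) = sqrt(Gamma(1+2/k)/Gamma(1+1/k)^2 - 1) by bisection
    (CV is decreasing in the shape k; k = 1 gives CV = 1, the exponential)."""
    def cv_of(k):
        g1, g2 = math.gamma(1.0 + 1.0 / k), math.gamma(1.0 + 2.0 / k)
        return math.sqrt(g2 / g1 ** 2 - 1.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cv_of(mid) > cv:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic random catalog: GR magnitudes (b = 1) above completeness M 5.1.
mags = [5.1 + random.expovariate(LN10) for _ in range(200000)]
counts, c = [], 0
for m in mags:
    if m >= 7.0:
        counts.append(c)   # natural-time interevent count since the last M>=7
        c = 0
    else:
        c += 1

mean = sum(counts) / len(counts)
var = sum((x - mean) ** 2 for x in counts) / len(counts)
beta = weibull_shape_from_cv(math.sqrt(var) / mean)   # ~1 for a random catalog
```

The random catalog yields β near 1, matching the paper's 0.99 ± 0.01 control; the real-catalog value of 0.83 below 1 is what signals temporal clustering.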

  1. Peak ground motion distribution in Romania due to Vrancea earthquakes

    International Nuclear Information System (INIS)

    Grecu, B.; Rizescu, M.; Radulian, M.; Mandrescu, N.; Moldovan, I.-A.; Bonjer, K.-P

    2002-01-01

    Vrancea is a particular seismic region situated at the SE-Carpathians bend (Romania). It is characterized by persistent seismicity in a concentrated focal volume, at depths of 60-200 km, with 2 to 3 major earthquakes per century (M_W > 7). The purpose of our study is to investigate in detail the ground motion patterns for small and moderate Vrancea events (M_W = 3.5 to 5.3) that occurred during 1999, taking advantage of the unique data set offered by the Calixto'99 Project and the permanent Vrancea-K2 network (150 stations). The observed patterns are compared with available macroseismic maps of large Vrancea earthquakes, showing similar general patterns elongated in the NE-SW direction which mimic the S-wave source radiation, but patches with pronounced maxima are also evident rather far from the epicenter, at the NE and SW edges of the Focsani sedimentary basin, as first shown by Atanasiu (1961). This feature is also visible in instrumental data of strong events (Mandrescu and Radulian, 1999) as well as for moderate events recently recorded by the digital K2 network (Bonjer et al., 2001), and correlates with the distribution of predominant response frequencies of shallow sedimentary layers. The influence of the local structure and/or focussing effects, caused by deeper lithospheric structure, on the observed site effects and the implications for seismic hazard assessment of Vrancea earthquakes are discussed. (authors)

  2. Characterizing Aftershock Sequences of the Recent Strong Earthquakes in Central Italy

    Science.gov (United States)

    Kossobokov, Vladimir G.; Nekrasova, Anastasia K.

    2017-10-01

    The recent strong earthquakes in Central Italy allow for a comparative analysis of their aftershocks from the viewpoint of the Unified Scaling Law for Earthquakes, USLE, which generalizes the Gutenberg-Richter relationship making use of naturally fractal distribution of earthquake sources of different size in a seismic region. In particular, we consider aftershocks as a sequence of avalanches in self-organized system of blocks-and-faults of the Earth lithosphere, each aftershock series characterized with the distribution of the USLE control parameter, η. We found the existence, in a long-term, of different, intermittent levels of rather steady seismic activity characterized with a near constant value of η, which switch, in mid-term, at times of transition associated with catastrophic events. On such a transition, seismic activity may follow different scenarios with inter-event time scaling of different kind, including constant, logarithmic, power law, exponential rise/decay or a mixture of those as observed in the case of the ongoing one associated with the three strong earthquakes in 2016. Evidently, our results do not support the presence of universality of seismic energy release, while providing constraints on modelling seismic sequences for earthquake physicists and supplying decision makers with information for improving local seismic hazard assessments.

  3. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
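
    Step (3) of the reconstruction relies on the tapered Pareto form named above. A minimal sketch of its survival function, in the Kagan-style parameterization with a power-law exponent and a corner amplitude (the threshold `x_t` and the numeric values below are illustrative assumptions, not values from the paper):

```python
import math

def tapered_pareto_sf(x, beta, x_c, x_t=1.0):
    """Survival function of a tapered Pareto distribution: a power law with
    exponent beta, exponentially tapered beyond the corner amplitude x_c;
    x_t is the observation threshold."""
    if x < x_t:
        return 1.0
    return (x / x_t) ** (-beta) * math.exp((x_t - x) / x_c)
```

    Evaluating such a survival function per subduction zone after the source-station scaling transform, then mixing across zones, mirrors steps (2)-(4) above. Note that beyond the corner amplitude the taper makes the survival function fall below the pure power law.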

  4. The Kresna earthquake of 1904 in Bulgaria

    Directory of Open Access Journals (Sweden)

    N. N. Ambraseys

    2001-06-01

    The Kresna earthquake of 1904 in Bulgaria is one of the largest shallow 20th century events on land in the Balkans. This event, which was preceded by a large foreshock, has hitherto been assigned a range of magnitudes up to M_S = 7.8, but the reappraisal of instrumental data yields a much smaller value of M_S = 7.2, and a reassessment of the intensity distribution suggests 7.1. Thus both instrumental and macroseismic data appear consistent with a magnitude which is also compatible with the fault segmentation and local morphology of the region, which cannot accommodate shallow events much larger than about 7.0. The relatively large size of the main shock suggests surface faulting, but the available field evidence is insufficient to establish the dimensions, attitude and amount of dislocation, except perhaps in the vicinity of Krupnik. This downsizing of the Kresna earthquake has important consequences for tectonics and earthquake hazard estimates in the Balkans.

  5. Co- and postseismic slip distribution for the 2011 March 9 earthquake based on the geodetic data: Role on the initiation of the 2011 Tohoku earthquake

    Science.gov (United States)

    Ohta, Y.; Hino, R.; Inazu, D.; Ohzono, M.; Mishina, M.; Nakajima, J.; Ito, Y.; Sato, T.; Tamura, Y.; Fujimoto, H.; Tachibana, K.; Demachi, T.; Osada, Y.; Shinohara, M.; Miura, S.

    2012-04-01

    A large foreshock with M7.3 occurred on March 9, 2011 at the subducting Pacific plate interface, followed by the M9.0 Tohoku earthquake 51 hours later. We propose a slip distribution of the foreshock deduced from dense inland GPS sites and ocean bottom pressure (OBP) gauge sites. The multiple OBP gauges were installed in and around the focal area before the M7.3 foreshock. We succeeded in collecting OBP gauge data at 9 sites, including two cabled OBPs off Kamaishi (TM1, TM2). The inland GPS horizontal coseismic displacements, estimated from baseline analyses, show a broad displacement field of up to ~30 mm directed toward the focal area. In contrast, there is no coherent signal in the vertical components. Several OBP sites, for example P2 and P6, located westward of the foreshock epicenter, clearly detected the coseismic displacement. The estimated coseismic displacement reached more than 100 mm at site P6. Intriguingly, GJT3, the OBP site nearest to the epicenter, did not show significant displacement. Based on the inland GPS and OBP data, we estimated a coseismic slip distribution on the subducting plate interface. The estimated slip distribution can explain the observations, including the vertical displacement obtained at the OBP sites. The amount of moment release is equivalent to Mw 7.2. The spatio-temporal aftershock distribution of the foreshock shows a southward migration from our estimated fault model. We suggest that aseismic slip occurred after the M7.3 earthquake. The onshore GPS data also support the occurrence of afterslip in the area southwest of the coseismic fault. We estimated sub-daily coordinates every three hours at several coastal GPS sites to reveal the time evolution of the postseismic deformation, especially in the horizontal components. We also examine volumetric strain data at Kinka-san Island, which is situated at the closest distance

  6. Earthquake Complex Network Analysis Before and After the Mw 8.2 Earthquake in Iquique, Chile

    Science.gov (United States)

    Pasten, D.

    2017-12-01

    Earthquake complex networks have shown that they are able to find specific features in seismic data sets. In space, these networks have shown a scale-free behavior for the probability distribution of connectivity in directed networks, and a small-world behavior for undirected networks. In this work, we present an earthquake complex network analysis for the large Mw 8.2 earthquake in the north of Chile (near Iquique) in April 2014. An earthquake complex network is made by dividing the three-dimensional space into cubic cells; if a cell contains a hypocenter, we name that cell a node. The connections between nodes are generated in time: we follow the time sequence of seismic events and make the connections between nodes. We then have two different networks: a directed and an undirected network. The directed network takes into consideration the time direction of the connections, which is very important for the connectivity of the network: we consider the connectivity, ki, of the i-th node as the number of connections going out of node i plus the self-connections (if two seismic events occurred successively in time in the same cubic cell, we have a self-connection). The undirected network is made by removing the direction of the connections and the self-connections from the directed network. For undirected networks, we consider only whether or not two nodes are connected. We have built a directed complex network and an undirected complex network, before and after the large earthquake in Iquique. We have used magnitudes greater than Mw = 1.0 and Mw = 3.0. We found that this method can recognize the influence of these small seismic events on the behavior of the network, and that the size of the cell used to build the network is another important factor in recognizing the influence of the large earthquake on this complex system. This method also shows a difference in the values of the critical exponent γ (for the probability
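
    The cell-to-node construction described above can be sketched in a few lines of Python. The function name, cell size, and toy coordinates below are illustrative assumptions, not the author's implementation:

```python
from collections import defaultdict

def build_networks(hypocenters, cell=10.0):
    """Bin hypocenters (x, y, z, e.g. in km) into cubic cells ('nodes') and
    connect the nodes visited by successive events in time order."""
    def node(p):
        return tuple(int(c // cell) for c in p)
    directed = defaultdict(int)   # (from, to) -> multiplicity, self-loops kept
    undirected = set()            # unordered pairs, self-loops dropped
    prev = None
    for p in hypocenters:         # events assumed already time-ordered
        cur = node(p)
        if prev is not None:
            directed[(prev, cur)] += 1
            if prev != cur:
                undirected.add(frozenset((prev, cur)))
        prev = cur
    # connectivity k_i: connections going out of node i plus self-connections
    k = defaultdict(int)
    for (a, _), m in directed.items():
        k[a] += m
    return dict(directed), undirected, dict(k)
```

    With four toy events in two cells, the directed network keeps the self-connection inside the first cell while the undirected network reduces to a single link between the two cells.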

  7. A model of characteristic earthquakes and its implications for regional seismicity

    DEFF Research Database (Denmark)

    López-Ruiz, R.; Vázquez-Prada, M.; Pacheco, A.F.

    2004-01-01

    Regional seismicity (i.e. that averaged over large enough areas over long enough periods of time) has a size-frequency relationship, the Gutenberg-Richter law, which differs from that found for some seismic faults, the Characteristic Earthquake relationship. But all seismicity comes in the end from active faults, so the question arises of how one seismicity pattern could emerge from the other. The recently introduced Minimalist Model of Vázquez-Prada et al. of characteristic earthquakes provides a simple representation of the seismicity originating from a single fault. Here, we show that a Characteristic Earthquake relationship together with a fractal distribution of fault lengths can accurately describe the total seismicity produced in a region. The resulting earthquake catalogue accounts for the addition of both all the characteristic and all the non-characteristic events triggered in the faults...

  8. Characterizing spatial heterogeneity based on the b-value and fractal analyses of the 2015 Nepal earthquake sequence

    Science.gov (United States)

    Nampally, Subhadra; Padhy, Simanchal; Dimri, Vijay P.

    2018-01-01

    The nature of the spatial distribution of heterogeneities in the source area of the 2015 Nepal earthquake is characterized based on the seismic b-value and fractal analysis of its aftershocks. The earthquake size distribution of aftershocks gives a b-value of 1.11 ± 0.08, possibly representing the highly heterogeneous and low stress state of the region. The aftershocks exhibit a fractal structure characterized by a spectrum of generalized dimensions, Dq, varying from D2 = 1.66 to D22 = 0.11. The existence of a fractal structure suggests that the spatial distribution of aftershocks is not a random phenomenon, but self-organizes into a critical state, exhibiting a scale-independent structure governed by a power-law scaling, where a small perturbation in stress is sufficient to trigger aftershocks. In order to estimate the bias in fractal dimensions resulting from finite data size, we compared the multifractal spectrum for the real data and random simulations. On comparison, we found that the lower limit of bias in D2 is 0.44. The similarity in their multifractal spectra suggests the lack of long-range correlation in the data, with an only weakly multifractal or a monofractal with a single correlation dimension D2 characterizing the data. The minimum number of events required for a multifractal process with an acceptable error is discussed. We also tested for a possible correlation between changes in D2 and energy released during the earthquakes. The values of D2 rise during the two largest earthquakes (M > 7.0) in the sequence. The b- and D2-values are related by D2 = 1.45b, which corresponds to intermediate to large earthquakes. Our results provide useful constraints on the spatial distribution of b- and D2-values, which are useful for seismic hazard assessment in the aftershock area of a large earthquake.
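
    As a companion sketch, a b-value like the one quoted above is commonly obtained with the Aki-Utsu maximum-likelihood estimator. The code below uses that standard formula on a synthetic Gutenberg-Richter catalog; it is an assumed, generic procedure, not necessarily the one used in the paper:

```python
import math
import random

def b_value(mags, m_c, dm=0.0):
    """Aki-Utsu maximum-likelihood b-value for magnitudes complete above
    m_c; dm is the binning width of the catalog (0 for continuous data):
    b = log10(e) / (mean(M) - (m_c - dm/2))."""
    m = [x for x in mags if x >= m_c]
    return math.log10(math.e) / (sum(m) / len(m) - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalog with true b = 1 above m_c = 2.0,
# drawn by inverse transform: M = m_c - log10(U)/b with U uniform on (0, 1]
rng = random.Random(42)
mags = [2.0 - math.log10(1.0 - rng.random()) for _ in range(20000)]
b = b_value(mags, 2.0)
```

    With 20,000 events the estimate recovers b ≈ 1 to within a few percent; the ± 0.08 uncertainty quoted in the abstract reflects a far smaller aftershock sample.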

  9. The Italian primary school-size distribution and the city-size: a complex nexus

    Science.gov (United States)

    Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.

    2014-06-01

    We characterize the statistical law according to which Italian primary school sizes are distributed. We find that the school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially, and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape, suggesting some source of heterogeneity in school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.

  10. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlations lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  12. Change of particle size distribution during Brownian coagulation

    International Nuclear Information System (INIS)

    Lee, K.W.

    1984-01-01

    Change in particle size distribution due to Brownian coagulation in the continuum regime has been studied analytically. A simple analytic solution for the size distribution of an initially lognormal distribution is obtained based on the assumption that the size distribution during the coagulation process attains, or can at least be represented by, a time-dependent lognormal function. The results are found to be in a form that corrects Smoluchowski's solution for both polydispersity and a size-dependent kernel. It is further shown that regardless of whether the initial distribution is narrow or broad, the spread of the distribution is characterized by approaching a fixed value of the geometric standard deviation. This result has been compared with the self-preserving distribution obtained by similarity theory. (Author)

  13. The size distributions of all Indian cities

    Science.gov (United States)

    Luckstead, Jeff; Devadoss, Stephen; Danforth, Diana

    2017-05-01

    We apply five distributions (lognormal, double-Pareto lognormal, lognormal-upper tail Pareto, Pareto tails-lognormal, and Pareto tails-lognormal with differentiability restrictions) to estimate the size distribution of all Indian cities. Since India contains numerous small cities, it is important to explicitly model the lower-tail behavior for studying the distribution of all Indian cities. Our results rigorously confirm, using both graphical and formal statistical tests, that among these five distributions, Pareto tails-lognormal is a better suited parametrization of the Indian city size data, verifying that the Indian city size distribution exhibits a strong reverse Pareto in the lower tail, lognormal in the mid-range body, and Pareto in the upper tail.

  14. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    Science.gov (United States)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard spheres colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both, the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics in the filtrate have been defined.

  15. Liquefaction induced by modern earthquakes as a key to paleoseismicity: A case study of the 1988 Saguenay event

    International Nuclear Information System (INIS)

    Tuttle, M.; Cowie, P.; Wolf, L.

    1992-01-01

    Liquefaction features, including sand dikes, sills, and sand-filled craters, that formed at different distances from the epicenter of the 1988 (Mw 5.9) Saguenay earthquake are compared with one another and with older features. Modern liquefaction features decrease in size with increasing distance from the Saguenay epicenter. This relationship suggests that the size of liquefaction features may be used to determine source zones of past earthquakes and to estimate attenuation of seismic energy. Pre-1988 liquefaction features are cross-cut by the 1988 features. Although similar in morphology to the modern features, the pre-1988 features are more weathered and considerably larger in size. The larger pre-1988 features are located in the Ferland area, whereas the smallest pre-1988 feature occurs more than 37 km to the southwest. This spatial distribution of different size features suggests that an unidentified earthquake source zone (in addition to the one that generated the Saguenay earthquake) may exist in the Laurentide-Saguenay region. Structural relationships of the liquefaction features indicate that one, possibly two, earthquakes induced liquefaction in the region prior to 1988. The age of only one pre-1988 feature is well-constrained at 340 ± 70 radiocarbon years BP. If the 1663 earthquake was responsible for the formation of this feature, this event may have been centered in the Laurentide-Saguenay region rather than in the Charlevoix seismic zone

  16. On the Size Distribution of Sand

    DEFF Research Database (Denmark)

    Sørensen, Michael

    2016-01-01

    A model is presented of the development of the size distribution of sand while it is transported from a source to a deposit. The model provides a possible explanation of the log-hyperbolic shape that is frequently found in unimodal grain size distributions in natural sand deposits, as pointed out......-distribution, by taking into account that individual grains do not have the same travel time from the source to the deposit. The travel time is assumed to be random so that the wear on the individual grains vary randomly. The model provides an interpretation of the parameters of the NIG-distribution, and relates the mean...

  17. Synchronization and desynchronization in the Olami-Feder-Christensen earthquake model and potential implications for real seismicity

    Directory of Open Access Journals (Sweden)

    S. Hergarten

    2011-09-01

    The Olami-Feder-Christensen model is probably the most studied model in the context of self-organized criticality and reproduces several statistical properties of real earthquakes. We investigate and explain synchronization and desynchronization of earthquakes in this model in the nonconservative regime and its relevance for the power-law distribution of the event sizes (Gutenberg-Richter law and for temporal clustering of earthquakes. The power-law distribution emerges from synchronization, and its scaling exponent can be derived as τ = 1.775 from the scaling properties of the rupture areas' perimeter. In contrast, the occurrence of foreshocks and aftershocks according to Omori's law is closely related to desynchronization. This mechanism of foreshock and aftershock generation differs strongly from the widespread idea of spontaneous triggering and gives an idea why even some large earthquakes are not preceded by any foreshocks in nature.
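
    The model's update rule (uniform drive until one site reaches threshold, then redistribution of a fraction α of each toppling site's force to its neighbours) can be sketched minimally as follows. Grid size, α, event count, and open boundaries are assumed here for illustration and are not taken from the paper:

```python
import random

def ofc(L=16, alpha=0.2, n_events=2000, seed=0):
    """Minimal Olami-Feder-Christensen model on an L x L grid with open
    boundaries. The grid is driven uniformly until one site reaches the
    threshold 1; each toppling site passes alpha times its force to each
    neighbour (alpha < 0.25: nonconservative regime). Returns the list of
    avalanche (event) sizes."""
    rng = random.Random(seed)
    F = [[rng.random() for _ in range(L)] for _ in range(L)]
    cells = [(i, j) for i in range(L) for j in range(L)]
    sizes = []
    for _ in range(n_events):
        # uniform drive up to the threshold of the most loaded site
        imax, jmax = max(cells, key=lambda ij: F[ij[0]][ij[1]])
        dF = 1.0 - F[imax][jmax]
        for row in F:
            for j in range(L):
                row[j] += dF
        F[imax][jmax] = 1.0            # guard against round-off
        active, size = [(imax, jmax)], 0
        while active:                  # relax all supercritical sites
            nxt = set()
            for i, j in active:
                f = F[i][j]
                if f < 1.0:
                    continue
                F[i][j] = 0.0
                size += 1
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < L and 0 <= nj < L:
                        F[ni][nj] += alpha * f
                        if F[ni][nj] >= 1.0:
                            nxt.add((ni, nj))
            active = list(nxt)
        sizes.append(size)
    return sizes
```

    After a transient, the avalanche-size list produced by such a run is the quantity whose power-law statistics (and synchronization behavior) the paper analyzes.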

  18. Rupture geometry and slip distribution of the 2016 January 21st Ms6.4 Menyuan, China earthquake

    Science.gov (United States)

    Zhou, Y.

    2017-12-01

    On 21 January 2016, an Ms 6.4 earthquake struck Menyuan County, Qinghai Province, China. The epicenter of the main shock and the locations of its aftershocks indicate that the Menyuan earthquake occurred near the left-lateral Lenglongling fault. However, the focal mechanism suggests that the earthquake took place on a thrust fault. In addition, field investigation indicates that the earthquake did not rupture the ground surface. Therefore, the rupture geometry is unclear, as is the coseismic slip distribution. We processed two pairs of InSAR images acquired by the ESA Sentinel-1A satellite with the ISCE software, including both ascending and descending orbits. After subsampling the coseismic InSAR images into about 800 pixels, the coseismic displacement data along the LOS direction were inverted for earthquake source parameters. We employ an improved mixed linear-nonlinear Bayesian inversion method to infer the fault geometric parameters, slip distribution, and Laplacian smoothing factor simultaneously. This method incorporates a hybrid differential evolution algorithm, an efficient global optimization algorithm. The inversion results show that the Menyuan earthquake ruptured a blind thrust fault with a strike of 124° and a dip angle of 41°. This blind fault was never investigated before and intersects the left-lateral Lenglongling fault, though their strikes are nearly parallel. The slip sense is almost pure thrusting, and there is no significant slip within 4 km depth. The maximum slip value is up to 0.3 m, and the estimated moment magnitude is Mw 5.93, in agreement with the seismic inversion result. The standard error of residuals between the InSAR data and the model prediction is as small as 0.5 cm, verifying the correctness of the inversion results.

  19. Where was the 1898 Mare Island Earthquake? Insights from the 2014 South Napa Earthquake

    Science.gov (United States)

    Hough, S. E.

    2014-12-01

    The 2014 South Napa earthquake provides an opportunity to reconsider the Mare Island earthquake of 31 March 1898, which caused severe damage to buildings at a Navy yard on the island. Revisiting archival accounts of the 1898 earthquake, I estimate a lower intensity magnitude, 5.8, than the value in the current Uniform California Earthquake Rupture Forecast (UCERF) catalog (6.4). However, I note that intensity magnitude can differ from Mw by upwards of half a unit depending on stress drop, which for a historical earthquake is unknowable. In the aftermath of the 2014 earthquake, there has been speculation that the apparently severe effects on Mare Island in 1898 were due to the vulnerability of local structures. No surface rupture has ever been identified from the 1898 event, which is commonly associated with the Hayward-Rodgers Creek fault system, some 10 km west of Mare Island (e.g., Parsons et al., 2003). Reconsideration of detailed archival accounts of the 1898 earthquake, together with a comparison of the intensity distributions for the two earthquakes, points to genuinely severe, likely near-field ground motions on Mare Island. The 2014 earthquake did cause significant damage to older brick buildings on Mare Island, but the level of damage does not match the severity of documented damage in 1898. The high-intensity fields for the two earthquakes are moreover spatially shifted, with the centroid of the 2014 distribution near the town of Napa and that of the 1898 distribution near Mare Island, east of the Hayward-Rodgers Creek system. I conclude that the 1898 Mare Island earthquake was centered on or near Mare Island, possibly involving rupture of one or both strands of the Franklin fault, a low-slip-rate fault sub-parallel to the Rodgers Creek fault to the west and the West Napa fault to the east. I estimate Mw 5.8 assuming an average stress drop; data are also consistent with Mw 6.4 if stress drop was a factor of ≈3 lower than average for California earthquakes. I

  20. Tsunamigenic earthquake simulations using experimentally derived friction laws

    Science.gov (United States)

    Murphy, S.; Di Toro, G.; Romano, F.; Scala, A.; Lorito, S.; Spagnuolo, E.; Aretusini, S.; Festa, G.; Piatanesi, A.; Nielsen, S.

    2018-03-01

    Seismological, tsunami and geodetic observations have shown that subduction zones are complex systems where the properties of earthquake rupture vary with depth as a result of different pre-stress and frictional conditions. A wealth of earthquakes of different sizes and different source features (e.g. rupture duration) can be generated in subduction zones, including tsunami earthquakes, some of which can produce extreme tsunamigenic events. Here, we offer a geological perspective principally accounting for depth-dependent frictional conditions, while adopting a simplified distribution of on-fault tectonic pre-stress. We combine a lithology-controlled, depth-dependent experimental friction law with 2D elastodynamic rupture simulations for a Tohoku-like subduction zone cross-section. Subduction zone fault rocks are dominantly incohesive and clay-rich near the surface, transitioning to cohesive and more crystalline at depth. By randomly shifting along fault dip the location of the high shear stress regions ("asperities"), moderate to great thrust earthquakes and tsunami earthquakes are produced that are quite consistent with seismological, geodetic, and tsunami observations. As an effect of depth-dependent friction in our model, slip is confined to the high stress asperity at depth; near the surface rupture is impeded by the rock-clay transition constraining slip to the clay-rich layer. However, when the high stress asperity is located in the clay-to-crystalline rock transition, great thrust earthquakes can be generated similar to the Mw 9 Tohoku (2011) earthquake.

  1. EFFECTS OF PARTICLE SIZE DISTRIBUTION ...

    African Journals Online (AJOL)

    eobe

    The parameters examined were: moisture content, particle size distribution, total hydrocarbon content, soil pH, available nitrogen, available phosphorus, and total heterotrophic bacteria and fungi counts. The analysis of the soil characteristics throughout the remediation period showed ...

  2. The exponential age distribution and the Pareto firm size distribution

    OpenAIRE

    Coad, Alex

    2008-01-01

    Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firm’s age, to obtain the empirical Pareto distribution.

  3. Regional distribution of released earthquake energy in northern Egypt and around the Inshas area

    International Nuclear Information System (INIS)

    El-hemamy, S.T.; Adel, A.A. Othman

    1999-01-01

    A review of the seismic history of Egypt indicates some areas of high activity concentrated along Oligocene-Miocene faults. These areas support the idea of recent activation of the Oligocene-Miocene stress cycle. There are similarities in the spatial distribution of recent and historical epicenters. From the tectonic map of Egypt, the distributions of intensity and magnitude show strong activity along the Nile Delta, due to the presence of thick layers of recent alluvial sediments. The energy released by earthquakes affects structures. The present study deals with the computed released energies of the reported earthquakes in Egypt and around the Inshas area, and their effect on the urban and nuclear facilities inside the Inshas site is considered. Special consideration is given to old and new waste repository sites. The analysis of the released energy reveals that the Inshas site is affected by seismic activity from five seismo-tectonic source zones, namely the Red Sea, Nile Delta, El-Faiyum, Mediterranean Sea and Gulf of Aqaba seismo-tectonic zones. The El-Faiyum seismo-tectonic source zone has the maximum effect on the site, with a released energy reaching 5.4E+21 erg

  4. The maximum earthquake in future T years: Checking by a real catalog

    International Nuclear Information System (INIS)

    Pisarenko, V.F.; Rodkin, M.V.

    2015-01-01

    Disaster statistics have been studied extensively in recent decades; some recent achievements in the field can be found in Pisarenko and Rodkin (2010). An important aspect of seismic risk assessment is the use of historical earthquake catalogs and their combination with instrumental data, since historical catalogs cover very long time periods and can considerably improve seismic statistics in the higher magnitude domain. We suggest a new statistical technique for this purpose and apply it to two historical Japanese catalogs and the instrumental JMA catalog. The main focus of these approaches is on the occurrence of disasters of extreme size, the most important ones from a practical point of view. Our method analyzes the size distribution in the uppermost range of extremely rare events, based on the maximum size M max (τ) (e.g. earthquake energy, ground acceleration caused by an earthquake, victims and economic losses from natural catastrophes, etc.) that will occur in a prescribed time interval τ. A new approach to the problem of discrete data, which we call "magnitude spreading", is suggested: it converts discrete random values into continuous ones by adding a small uniformly distributed random component. We analyze this method in detail and apply it to the verification of parameters derived from two historical catalogs: the Usami earthquake catalog (599–1884) and the Utsu catalog (1885–1925). We compare their parameters with those derived from the instrumental JMA catalog (1926–2014). The verification shows that the Usami catalog is incompatible with the instrumental one, whereas parameters estimated from the Utsu catalog are statistically compatible in the higher magnitude domain with the sample of M max (τ) derived from the JMA catalog
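The "magnitude spreading" step — turning discrete catalog magnitudes into continuous values by adding a small uniform random component — can be sketched in a few lines. The catalog values and the 0.1 bin width below are illustrative assumptions, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete catalog magnitudes, reported to the nearest 0.1 (hypothetical values)
mags = np.array([5.1, 5.1, 5.3, 5.6, 5.6, 5.6, 6.0, 6.4, 7.2])

# "Magnitude spreading": add a uniform component smaller than half the bin
# width, turning the discrete values into distinct continuous ones
spread = mags + rng.uniform(-0.05, 0.05, mags.size)

assert np.unique(spread).size == spread.size   # all ties are broken
assert np.all(np.abs(spread - mags) < 0.05)    # each value stays in its bin
```

Because the perturbation never exceeds half the bin width, the spread values remain consistent with the original binned catalog while allowing continuous-distribution statistics.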

  5. Evaluation of droplet size distributions using univariate and multivariate approaches.

    Science.gov (United States)

    Gaunø, Mette Høg; Larsen, Crilles Casper; Vilhelmsen, Thomas; Møller-Sonnergaard, Jørn; Wittendorff, Jørgen; Rantanen, Jukka

    2013-01-01

    Pharmaceutically relevant material characteristics are often analyzed with univariate descriptors instead of utilizing the full information available in the whole distribution. One example is the droplet size distribution, which is often described by the median droplet size and the width of the distribution. The current study aimed to compare univariate and multivariate approaches to evaluating droplet size distributions. As a model system, the atomization of a coating solution from a two-fluid nozzle was investigated. The effect of three process parameters (concentration of ethyl cellulose in ethanol, atomizing air pressure, and flow rate of coating solution) on the droplet size and droplet size distribution was studied using a full mixed factorial design. The droplet size produced by the two-fluid nozzle was measured by laser diffraction and reported as a volume-based size distribution. Investigation of loading and score plots from principal component analysis (PCA) revealed additional information on the droplet size distributions, making it possible to identify univariate statistics (volume median droplet size) that were similar yet originated from different droplet size distributions. Multivariate data analysis proved to be an efficient tool for evaluating the full information contained in a distribution.
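To illustrate why the multivariate view adds information, the following sketch (synthetic data, not the study's measurements) applies PCA via an SVD to simulated volume-based size distributions that share the same median diameter but differ in width; the first-component scores separate runs that a median-only univariate descriptor would treat as identical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic volume-based size distributions: 12 runs x 30 size classes,
# Gaussian-shaped around the same median (30 um) but with varying widths
bins = np.linspace(1, 60, 30)            # droplet diameter classes (um)
widths = rng.uniform(5, 15, 12)          # assumed distribution widths
dists = np.exp(-0.5 * ((bins - 30) / widths[:, None]) ** 2)
dists /= dists.sum(axis=1, keepdims=True)  # normalize each run

# PCA via SVD of the mean-centered data matrix
centered = dists - dists.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :2] * s[:2]   # score-plot coordinates (runs)
loadings = Vt[:2]           # loading vectors (size classes)

# Runs with identical medians spread out along PC1 because widths differ
print(scores[:, 0])
```

Since width is the only source of variation here, the first principal component captures most of the variance, mirroring the paper's point that score/loading plots reveal structure the median alone hides.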

  6. Tidal influence through LOD variations on the temporal distribution of earthquake occurrences

    Science.gov (United States)

    Varga, P.; Gambis, D.; Bizouard, Ch.; Bus, Z.; Kiszely, M.

    2006-10-01

    Stresses generated by the body tides are very small at the depth of crustal earthquakes (~10^2 N/m^2). The maximum value of the lunisolar stress within the depth range of earthquakes is 10^3 N/m^2 (at a depth of about 600 km). Surface loads due to oceanic tides in coastal areas are ~10^4 N/m^2. These influences are, however, too small to affect the outbreak time of seismic events. The authors show that the effect on the time distribution of seismic activity due to ΔLOD generated by zonal tides, in the case of the Mf, Mm, Ssa and Sa tidal constituents, can be much more effective in triggering earthquakes. According to this approach, the tides do not trigger seismic events directly but act through the length-of-day variations they generate. That is why a correlation between the lunisolar effect and seismic activity exists for zonal tides but not for the tesseral and sectorial tides.

  7. The Macroseismic Intensity Distribution of the 30 October 2016 Earthquake in Central Italy (Mw 6.6): Seismotectonic Implications

    Science.gov (United States)

    Galli, Paolo; Castenetto, Sergio; Peronace, Edoardo

    2017-10-01

    The central Italy Apennines were rocked in 2016 by the strongest earthquakes of the past 35 years. Two main shocks (Mw 6.2 and Mw 6.6) between the end of August and October caused the death of almost 300 people and the destruction of 50 villages and small towns scattered along 40 km in the hanging wall of the N165° striking Mount Vettore fault system, that is, the structure responsible for the earthquakes. The 24 August southern earthquake, besides causing all the casualties, razed to the ground the small medieval town of Amatrice and dozens of hamlets around it. The 30 October main shock definitively crushed all the villages of the whole epicentral area (up to intensity degree 11), extending the level of destruction northward and inducing heavy damage even in the town of Camerino, 30 km away. The survey of the macroseismic effects started the same day as the first main shock and continued throughout the seismic sequence, even during and after the strong earthquakes at the end of October, allowing the definition of a detailed, day-by-day picture of the damage distribution. Here we present the results of the final survey in terms of Mercalli-Cancani-Sieberg intensity, which accounts for the cumulative effects of the whole 2016 sequence (465 intensity data points, besides 435 related to the 24 August and 54 to the 26 October events, respectively). The distribution of the highest intensity data points evidenced the lack of any possible overlap between the 2016 earthquakes and the strongest earthquakes of the region, making this sequence a unique case in the seismic history of Italy. In turn, cross-matching with published paleoseismic data provided interesting insights into the seismogenic behavior of the Mount Vettore fault in comparison with other active normal faults of the region.

  8. Width of the Surface Rupture Zone for Thrust Earthquakes and Implications for Earthquake Fault Zoning: Chi-Chi 1999 and Wenchuan 2008 Earthquakes

    Science.gov (United States)

    Boncio, P.; Caldarella, M.

    2016-12-01

    We analyze the zones of coseismic surface faulting along thrust faults, with the aim of defining the most appropriate criteria for zoning the Surface Fault Rupture Hazard (SFRH) along thrust faults. Normal and strike-slip faults have been studied in depth in the past, while thrust faults have not received comparable attention. We analyze the 1999 Chi-Chi, Taiwan (Mw 7.6) and 2008 Wenchuan, China (Mw 7.9) earthquakes. Several different types of coseismic fault scarps characterize the two earthquakes, depending on the topography, fault geometry and near-surface materials. For both earthquakes, we collected from the literature, or measured in GIS-georeferenced published maps, data on the width of the coseismic rupture zone (WRZ). The frequency distribution of WRZ relative to the trace of the main fault shows that surface ruptures occur mainly on and near the main fault; ruptures located away from the main fault occur mainly in the hanging wall. Where structural complexities are present (e.g., sharp bends, step-overs), WRZ is wider than for simple fault traces. We also fitted the WRZ dataset with probability density functions in order to define a criterion for removing outliers (e.g., by selecting 90% or 95% probability) and delimiting the zone where the probability of SFRH is highest. This might help in sizing the zones of SFRH during seismic microzonation (SM) mapping. In order to shape zones of SFRH, a very detailed earthquake-geology study of the fault is necessary. In the absence of such a detailed study, during basic (first-level) SM mapping, a width of 350-400 m seems to be recommended (95% probability). If the fault is carefully mapped (higher-level SM), one must consider that the highest SFRH is concentrated in a narrow, 50 m-wide zone that should be treated as a "fault-avoidance (or setback) zone". These fault zones should be asymmetric: the footwall-to-hanging-wall (FW:HW) ratio calculated here ranges from 1:5 to 1:3.

  9. Romanian earthquakes analysis using BURAR seismic array

    International Nuclear Information System (INIS)

    Borleanu, Felix; Rogozea, Maria; Nica, Daniela; Popescu, Emilia; Popa, Mihaela; Radulian, Mircea

    2008-01-01

    The Bucovina seismic array (BURAR) is a medium-aperture array installed in 2002 in the northern part of Romania (47.6148 N latitude, 25.2168 E longitude, 1150 m altitude) as a result of cooperation between the Air Force Technical Applications Center, USA, and the National Institute for Earth Physics, Romania. The array consists of ten elements located in boreholes and distributed over a 5 x 5 km² area: nine with short-period vertical sensors and one with a broadband three-component sensor. Since the new station began operating, the earthquake monitoring of Romania's territory has improved significantly. Data recorded by BURAR during the 01.01.2005-12.31.2005 time interval are first processed and analyzed in order to establish the array's detection capability for local earthquakes occurring in the different Romanian seismic zones. Subsequently, a spectral-ratio technique is applied to determine magnitude calibration relationships using only the information gathered by the BURAR station. The spectral ratios are computed relative to a reference event considered representative of each seismic zone. This method has the advantage of eliminating path effects. The new calibration procedure is tested on Vrancea intermediate-depth earthquakes and proves very efficient in constraining the size of these earthquakes. (authors)

  10. The earthquakes of the Baltic shield

    International Nuclear Information System (INIS)

    Slunga, R.

    1990-06-01

    More than 200 earthquakes in the Baltic Shield area in the size range ML 0.6-4.5 have been studied by dense regional seismic networks. The analysis includes focal depths, dynamic source parameters, and fault plane solutions. In southern Sweden a long part of the Protogene zone marks a change in the seismic activity. The focal depths indicate three crustal layers: upper crust (0-18 km in southern Sweden, 0-13 km in northern Sweden), middle crust down to 35 km, and the quiet lower crust. The fault plane solutions show that strike-slip is dominant. Along the Tornquist line significant normal faulting occurs. The stresses released by the earthquakes show a remarkable consistency with a regional principal compression N60W. This indicates that plate-tectonic processes are more important than the land uplift. The spatial distribution is consistent with a model in which the earthquakes are breakdowns of asperities on normally stably sliding faults. The aseismic sliding is estimated to be 2000 times more extensive than the seismic sliding. Southern Sweden is estimated to deform horizontally at a rate of 1 mm/year or more. (orig.)

  11. Evaluation of droplet size distributions using univariate and multivariate approaches

    DEFF Research Database (Denmark)

    Gauno, M.H.; Larsen, C.C.; Vilhelmsen, T.

    2013-01-01

    ...of the distribution. The current study was aiming to compare univariate and multivariate approaches in evaluating droplet size distributions. As a model system, the atomization of a coating solution from a two-fluid nozzle was investigated. The effect of three process parameters (concentration of ethyl cellulose in ethanol, atomizing air pressure, and flow rate of coating solution) on the droplet size and droplet size distribution using a full mixed factorial design was used. The droplet size produced by a two-fluid nozzle was measured by laser diffraction and reported as volume based size distribution. Investigation of loading and score plots from principal component analysis (PCA) revealed additional information on the droplet size distributions and it was possible to identify univariate statistics (volume median droplet size), which were similar, however, originating from varying droplet size distributions...

  12. Evaluation of earthquake vibration on aseismic design of nuclear power plant judging from recent earthquakes

    International Nuclear Information System (INIS)

    Dan, Kazuo

    2006-01-01

    The Regulatory Guide for Aseismic Design of Nuclear Reactor Facilities was revised on 19 September 2006. Six factors for the evaluation of earthquake vibration are considered on the basis of recent earthquakes: 1) evaluation of earthquake vibration by a method using a fault model, 2) investigation and approval of active faults, 3) direct-hit earthquakes, 4) assumption of a short active fault as the hypocentral fault, 5) locality of the earthquake and the earthquake vibration, and 6) remaining risk. A guiding principle of the revision required a new evaluation method of earthquake vibration using a fault model and an evaluation of the probability of earthquake vibration. The remaining risk means that facilities and people are endangered when an earthquake stronger than the design basis occurs; accordingly, the scattering has to be considered in the evaluation of earthquake vibration. The earthquake belt and strong vibration pulse of the 1995 Hyogo-Nanbu earthquake, the relation between the length of the surface earthquake fault and the hypocentral fault, and the distribution of seismic intensity of the 1993 off-Kushiro earthquake are shown. (S.Y.)

  13. Ground motion response to an ML 4.3 earthquake using co-located distributed acoustic sensing and seismometer arrays

    Science.gov (United States)

    Wang, Herbert F.; Zeng, Xiangfang; Miller, Douglas E.; Fratta, Dante; Feigl, Kurt L.; Thurber, Clifford H.; Mellors, Robert J.

    2018-06-01

    The PoroTomo research team deployed two arrays of seismic sensors in a natural laboratory at Brady Hot Springs, Nevada in March 2016. The 1500 m (length) × 500 m (width) × 400 m (depth) volume of the laboratory overlies a geothermal reservoir. The distributed acoustic sensing (DAS) array consisted of about 8400 m of fiber-optic cable in a shallow trench and 360 m in a well. The conventional seismometer array consisted of 238 shallowly buried three-component geophones. The DAS cable was laid out in three parallel zig-zag lines with line segments approximately 100 m in length and geophones were spaced at approximately 60 m intervals. Both DAS and conventional geophones recorded continuously over 15 d during which a moderate-sized earthquake with a local magnitude of 4.3 was recorded on 2016 March 21. Its epicentre was approximately 150 km south-southeast of the laboratory. Several DAS line segments with co-located geophone stations were used to compare signal-to-noise ratios (SNRs) in both time and frequency domains and to test relationships between DAS and geophone data. The ratios were typically within a factor of five of each other with DAS SNR often greater for P-wave but smaller for S-wave relative to geophone SNR. The SNRs measured for an earthquake can be better than for active sources because the earthquake signal contains more low-frequency energy and the noise level is also lower at those lower frequencies. Amplitudes of the sum of several DAS strain-rate waveforms matched the finite difference of two geophone waveforms reasonably well, as did the amplitudes of DAS strain waveforms with particle-velocity waveforms recorded by geophones. Similar agreement was found between DAS and geophone observations and synthetic strain seismograms. The combination of good SNR in the seismic frequency band, high-spatial density, large N and highly accurate time control among individual sensors suggests that DAS arrays have potential to assume a role in earthquake

  14. Concentration and size distribution of particles in abstracted groundwater.

    Science.gov (United States)

    van Beek, C G E M; de Zwart, A H; Balemans, M; Kooiman, J W; van Rosmalen, C; Timmer, H; Vandersluys, J; Stuyfzand, P J

    2010-02-01

    Particle number concentrations have been counted and particle size distributions calculated in groundwater derived from abstraction wells. Both concentration and size distribution are governed by the discharge rate: the higher this rate, the higher the concentration and the higher the proportion of larger particles. However, the particle concentration in groundwater derived from abstraction wells, with high groundwater flow velocities, is much lower than in groundwater from monitoring wells, with minimal flow velocities. This inconsistency points to exhaustion of the particle supply in the aquifer around wells after many years of groundwater abstraction. The particle size distribution can be described with a power law or Pareto distribution. Comparing the measured particle size distribution with the Pareto distribution shows that particles with a diameter >7 µm are under-represented. As the particle size distribution depends on the flow velocity, so does the value of the "Pareto" slope β.
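A minimal sketch of the kind of Pareto fit described above, using synthetic particle diameters (the true exponent, diameter grid, and sample size are assumptions for illustration): the slope of log N(>d) against log d estimates the negative of the Pareto slope β.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic particle diameters (um) drawn from a Pareto distribution
# via inverse-transform sampling: d = d_min * U^(-1/beta)
beta_true = 2.5
d_min = 1.0
diam = d_min * rng.uniform(size=50_000) ** (-1.0 / beta_true)

# Cumulative counts N(>d) on a log-spaced grid of diameters
grid = np.logspace(0.05, 1.0, 20)
n_gt = np.array([(diam > d).sum() for d in grid])

# Least-squares slope of log N(>d) vs log d gives -beta
beta_est = -np.polyfit(np.log(grid), np.log(n_gt), 1)[0]
print(f"beta ≈ {beta_est:.2f}")
```

An under-representation of large particles, as reported in the abstract, would show up here as the measured N(>d) falling below the fitted straight line at the largest diameters.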

  15. Phase size distribution in WC/Co hardmetal

    International Nuclear Information System (INIS)

    Roebuck, B.; Bennett, E.G.

    1986-01-01

    A high-resolution field-emission scanning electron microscope was used to perform accurate quantitative metallography on a variety of WC/Co hardmetals. Particular attention was paid to obtaining the mean size and size distribution of the cobalt phase by linear analysis. Cobalt regions are frequently submicron and difficult to resolve adequately by conventional methods. The WC linear intercept distributions and contiguity were also measured at the same time. The results were used to examine the validity of theoretical derivations of cobalt intercept size

  16. Size distribution measurements and chemical analysis of aerosol components

    Energy Technology Data Exchange (ETDEWEB)

    Pakkanen, T.A.

    1995-12-31

    The principal aims of this work were to improve the existing methods for size distribution measurements and to draw conclusions about atmospheric and in-stack aerosol chemistry and physics by utilizing the measured size distributions of various aerosol components. A sample dissolution with dilute nitric acid in an ultrasonic bath and subsequent graphite furnace atomic absorption spectrometric analysis was found to result in low blank values and good recoveries for several elements in atmospheric fine particle size fractions below 2 µm equivalent aerodynamic particle diameter (EAD). Furthermore, it turned out that a substantial amount of analytes associated with insoluble material could be recovered, since suspensions were formed. The size distribution measurements of in-stack combustion aerosols indicated bimodal size distributions for most components measured. The existence of the fine particle mode suggests that a substantial fraction of such elements with bimodal size distributions may vaporize and nucleate during the combustion process. In southern Norway, size distributions of atmospheric aerosol components usually exhibited one or two fine particle modes and one or two coarse particle modes. Atmospheric relative humidity values higher than 80% resulted in a significant increase of the mass median diameters of the droplet mode. Important local and/or regional sources of As, Br, I, K, Mn, Pb, Sb, Si and Zn were found to exist in southern Norway. The existence of these sources was reflected in the corresponding size distributions determined, and was utilized in the development of a source identification method based on size distribution data. On the Finnish south coast, atmospheric coarse particle nitrate was found to be formed mostly through an atmospheric reaction of nitric acid with existing coarse particle sea salt, but reactions and/or adsorption of nitric acid with soil-derived particles also occurred. Chloride was depleted when acidic species reacted

  17. Interpretations of family size distributions: The Datura example

    Science.gov (United States)

    Henych, Tomáš; Holsapple, Keith A.

    2018-04-01

    Young asteroid families are unique sources of information about fragmentation physics and the structure of their parent bodies, since their physical properties have not changed much since their birth. Families differ in properties such as age, size, taxonomy, collision severity and others, and understanding the effect of those properties on our observations of the size-frequency distribution (SFD) of family fragments can give us important insights into hypervelocity collision processes at scales we cannot achieve in our laboratories. Here we take as an example the very young Datura family, with a small 8-km parent body, and compare its size distribution to those of other families, with both large and small parent bodies, created by both catastrophic and cratering formation events. We conclude that the most likely explanation for its shallower size distribution compared to larger families is a more pronounced observational bias because of its small size; its size distribution is perfectly normal when its parent body size is taken into account. We also discuss some other possibilities. In addition, we study another common feature: an offset or "bump" in the distribution occurring for a few of the larger elements. We hypothesize that it can be explained by a newly described regime of cratering, "spall cratering", which controls the majority of impact craters on the surfaces of small asteroids like Datura.

  18. Particle size distribution instrument. Topical report 13

    Energy Technology Data Exchange (ETDEWEB)

    Okhuysen, W.; Gassaway, J.D.

    1995-04-01

    The development of an instrument to measure the concentration of particles in gas is described in this report. An in situ instrument was designed and constructed which sizes individual particles and counts the number of occurrences for several size classes. Although this instrument was designed to detect the size distribution of slag and seed particles generated at an experimental coal-fired magnetohydrodynamic power facility, it can be used as a nonintrusive diagnostic tool for other hostile industrial processes involving the formation and growth of particulates. Two of the techniques developed are extensions of the widely used crossed-beam velocimeter, providing simultaneous measurement of the size distribution and velocity of particles.

  19. Changes of firm size distribution: The case of Korea

    Science.gov (United States)

    Kang, Sang Hoon; Jiang, Zhuhua; Cheong, Chongcheul; Yoon, Seong-Min

    2011-01-01

    In this paper, the distribution and inequality of firm sizes is evaluated for the Korean firms listed on the stock markets. Using the amount of sales, total assets, capital, and the number of employees, respectively, as a proxy for firm sizes, we find that the upper tail of the Korean firm size distribution can be described by power-law distributions rather than lognormal distributions. Then, we estimate the Zipf parameters of the firm sizes and assess the changes in the magnitude of the exponents. The results show that the calculated Zipf exponents over time increased prior to the financial crisis, but decreased after the crisis. This pattern implies that the degree of inequality in Korean firm sizes had severely deepened prior to the crisis, but lessened after the crisis. Overall, the distribution of Korean firm sizes changes over time, and Zipf’s law is not universal but does hold as a special case.
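The Zipf-exponent estimation used in studies like this one is typically a log-log rank-size regression. The sketch below uses synthetic Pareto-tailed "firm sizes" (the tail exponent and sample size are assumptions, not the Korean data) together with the Gabaix-Ibragimov rank-1/2 correction common in this literature:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic firm sizes with a Pareto tail (tail exponent alpha = 1.2,
# so the Zipf coefficient should come out near 1/alpha ~ 0.83)
alpha = 1.2
sizes = rng.uniform(size=2_000) ** (-1.0 / alpha)

# Rank-size regression with the Gabaix-Ibragimov (rank - 1/2) correction:
# log(size) regressed on log(rank) for sizes sorted in descending order
ranked = np.sort(sizes)[::-1]
ranks = np.arange(1, ranked.size + 1) - 0.5
zeta = -np.polyfit(np.log(ranks), np.log(ranked), 1)[0]
print(f"Zipf exponent ≈ {zeta:.2f}")
```

Tracking this estimate over successive yearly cross-sections is the kind of procedure that would reveal the pre-/post-crisis changes in inequality the abstract reports.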

  20. Precisely locating the Klamath Falls, Oregon, earthquakes

    Science.gov (United States)

    Qamar, A.; Meagher, K.L.

    1993-01-01

    The Klamath Falls earthquakes on September 20, 1993, were the largest earthquakes centered in Oregon in more than 50 yrs. Only the magnitude 5.75 Milton-Freewater earthquake in 1936, which was centered near the Oregon-Washington border and felt in an area of about 190,000 sq km, compares in size with the recent Klamath Falls earthquakes. Although the 1993 earthquakes surprised many local residents, geologists have long recognized that strong earthquakes may occur along potentially active faults that pass through the Klamath Falls area. These faults are geologically related to similar faults in Oregon, Idaho, and Nevada that occasionally spawn strong earthquakes

  1. Geological and historical evidence of irregular recurrent earthquakes in Japan.

    Science.gov (United States)

    Satake, Kenji

    2015-10-28

    Great (M∼8) earthquakes repeatedly occur along the subduction zones around Japan and cause fault slip of a few to several metres, releasing strains accumulated from decades to centuries of plate motion. Assuming a simple 'characteristic earthquake' model in which similar earthquakes repeat at regular intervals, probabilities of future earthquake occurrence have been calculated by a government committee. However, recent studies on past earthquakes, including geological traces of giant (M∼9) earthquakes, indicate a variety of sizes and recurrence intervals of interplate earthquakes. Along the Kuril Trench off Hokkaido, limited historical records indicate that the average recurrence interval of great earthquakes is approximately 100 years, but tsunami deposits show that giant earthquakes occurred at a much longer interval of approximately 400 years. Along the Japan Trench off northern Honshu, recurrence of giant earthquakes similar to the 2011 Tohoku earthquake with an interval of approximately 600 years is inferred from historical records and tsunami deposits. Along the Sagami Trough near Tokyo, two types of Kanto earthquakes with recurrence intervals of a few hundred years and a few thousand years had been recognized, but studies show that the three recent Kanto earthquakes had different source extents. Along the Nankai Trough off western Japan, recurrence of great earthquakes with an interval of approximately 100 years has been identified from historical literature, but tsunami deposits indicate that the sizes of the recurrent earthquakes are variable. Such variability makes it difficult to apply a simple 'characteristic earthquake' model for the long-term forecast, and several attempts, such as the use of geological data for the evaluation of future earthquake probabilities or the estimation of maximum earthquake size in each subduction zone, are being conducted by government committees. © 2015 The Author(s).

  2. Effect of particle size distribution on sintering of tungsten

    International Nuclear Information System (INIS)

    Patterson, B.R.; Griffin, J.A.

    1984-01-01

    To date, very little is known about the effect of the nature of the particle size distribution on sintering. It is reasonable that there should be an effect of size distribution, and theory and prior experimental work examining the effects of variations in bimodal and continuous distributions have shown marked effects on sintering. Most importantly, even with constant mean particle size, variations in distribution width, or standard deviation, have been shown to produce marked variations in microstructure and sintering rate. In the latter work, in which spherical copper powders were blended to produce lognormal distributions of constant geometric mean particle size by weight frequency, blends with larger values of geometric standard deviation, ln σ, sintered more rapidly. The goals of the present study were to examine in more detail the effects of variations in the width of lognormal particle size distributions of tungsten powder and determine the effects of ln σ on the microstructural evolution during sintering

  3. Size-biased distributions in the generalized beta distribution family, with applications to forestry

    Science.gov (United States)

    Mark J. Ducey; Jeffrey H. Gove

    2015-01-01

    Size-biased distributions arise in many forestry applications, as well as other environmental, econometric, and biomedical sampling problems. We examine the size-biased versions of the generalized beta of the first kind, generalized beta of the second kind and generalized gamma distributions. These distributions include, as special cases, the Dagum (Burr Type III),...

  4. Aerosol Size Distributions In Auckland.

    Czech Academy of Sciences Publication Activity Database

    Coulson, G.; Olivares, G.; Talbot, Nicholas

    2016-01-01

    Vol. 50, No. 1 (2016), pp. 23-28, E-ISSN 1836-5876. Institutional support: RVO:67985858. Keywords: aerosol size distribution * particle number concentration * roadside. Subject RIV: CF - Physical; Theoretical Chemistry

  5. Ionospheric Anomaly before the Kyushu, Japan Earthquake

    Directory of Open Access Journals (Sweden)

    YANG Li

    2017-05-01

    GIM data released by IGS are used in this article, and a new method combining the sliding time window method with an ionospheric TEC correlation analysis of adjacent grid points is proposed to study the relationship between pre-earthquake ionospheric anomalies and earthquakes. By analyzing the anomalous change of TEC in the 5 grid points around the seismic region, an abnormal change of ionospheric TEC is found before the earthquake, and the correlation between the TEC sequences of grid points is significantly affected by the earthquake. Based on the analysis of the spatial distribution of the TEC anomaly, anomalies of 6 h, 12 h and 6 h were found near the epicenter three days before the earthquake. Finally, ionospheric tomography is used to invert the electron density, and the distribution of the electron density in the ionospheric anomaly is further analyzed.

  6. Influence of movement regime of stick-slip process on the size distribution of accompanying acoustic emission characteristics

    Science.gov (United States)

    Matcharashvili, Teimuraz; Chelidze, Tamaz; Zhukova, Natalia; Mepharidze, Ekaterine; Sborshchikov, Alexander

    2010-05-01

    Many scientific works on the dynamics of earthquake generation are devoted to qualitative and quantitative reproduction of the behavior of seismic faults. A number of theoretical, numerical and physical models have been designed for this purpose. The main assumption of these works is that a correct model must be capable of reproducing a power-law relation for event sizes with magnitudes greater than or equal to some threshold value, similar to the Gutenberg-Richter (GR) law for the size distribution of earthquakes. To model the behavior of seismic faults under laboratory conditions, spring-block experimental systems are often used. They make it possible to generate stick-slip movement, the intermittent behavior occurring when two solids in contact slide relative to each other while driven at a constant velocity. The wide interest in such spring-block models stems from the fact that stick-slip is recognized as a basic process underlying earthquake generation along pre-existing faults. It is worth mentioning that stick-slip experiments can reproduce a power law in the slip-event size distribution, with b values close or equal to those found for natural seismicity. The stick-slip process observed in these experimental models is accompanied by the propagation of transient elastic waves generated during the rapid release of stress energy in the spring-block system. Oscillations of stress energy can be detected as characteristic acoustic emission (AE). The AE accompanying stick slip is the subject of intense investigation, but many aspects of this process are still unclear. In the present research we aimed to investigate the dynamics of stick-slip AE in order to find whether its distributional properties obey a power law. Experiments were carried out on a spring-block system consisting of fixed and sliding plates of roughly finished basalt samples. The sliding block was driven at a constant velocity. Experiments were carried out for five different stiffnesses of the pulling spring. Thus five different regimes
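
    The Gutenberg-Richter comparison described above comes down to estimating a b value from the event-size distribution; a minimal sketch using the standard Aki maximum-likelihood estimator on synthetic magnitudes (the catalog and completeness magnitude here are invented, not the experiment's AE data):

```python
import numpy as np

rng = np.random.default_rng(0)
b_true, m_c = 1.0, 2.0  # assumed b-value and completeness magnitude

# Gutenberg-Richter magnitudes above m_c are exponentially distributed
# with rate b * ln(10); generate a synthetic catalog accordingly.
m = m_c + rng.exponential(1.0 / (b_true * np.log(10)), size=50_000)

# Aki maximum-likelihood estimate: b = log10(e) / (mean(M) - Mc)
b_hat = np.log10(np.e) / (m.mean() - m_c)
print(round(b_hat, 2))
```

The same estimator applies equally to slip-event or AE amplitude catalogs once they are expressed on a logarithmic (magnitude-like) scale.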

  7. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    Science.gov (United States)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  8. Elemental mass size distribution of the Debrecen urban aerosol

    International Nuclear Information System (INIS)

    Kertesz, Zs.; Szoboszlai, Z.; Dobos, E.; Borbely-Kiss, I.

    2007-01-01

    Complete text of publication follows. Size distribution is one of the basic properties of atmospheric aerosol. It is closely related to the origin, chemical composition and age of aerosol particles, and it influences the optical properties, environmental effects and health impact of aerosol. As part of the ongoing aerosol research in the Group of Ion Beam Applications of Atomki, elemental mass size distributions of urban aerosol were determined using the particle induced X-ray emission (PIXE) analytical technique. Aerosol sampling campaigns were carried out with 9-stage PIXE International cascade impactors, which separate the aerosol into 10 size fractions in the 0.05-30 μm range. Five 48-hour samplings were done in the garden of Atomki, in April and in October 2007. Both campaigns included weekend and working-day samplings. Basically, two different kinds of particles could be identified according to the size distribution. In the size distributions of Al, Si, Ca, Fe, Ba, Ti, Mn and Co one dominant peak can be found around the 3 μm aerodynamic diameter size range, as shown in Figure 1. These are the elements of predominantly natural origin. Elements like S, Cl, K, Zn, Pb and Br appear with high frequency in the 0.25-0.5 μm size range, as presented in Figure 2. These elements originate mainly from anthropogenic sources. However, sometimes in the size distributions of these elements a 2nd, smaller peak appears in the 2-4 μm size range, indicating different sources. Differences were found between the size distributions of the spring and autumn samples. In the case of elements of soil origin the size distribution was shifted towards smaller diameters during October, and a 2nd peak appeared around 0.5 μm. A possible explanation for this phenomenon can be the different meteorological conditions. No differences were found between weekends and working days in the size distribution; however, the concentration values were smaller during the weekend

  9. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    Science.gov (United States)

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  11. The evaluation of the earthquake hazard using the exponential distribution method for different seismic source regions in and around Ağrı

    Energy Technology Data Exchange (ETDEWEB)

    Bayrak, Yusuf, E-mail: ybayrak@agri.edu.tr [Ağrı İbrahim Çeçen University, Ağrı/Turkey (Turkey); Türker, Tuğba, E-mail: tturker@ktu.edu.tr [Karadeniz Technical University, Department of Geophysics, Trabzon/Turkey (Turkey)

    2016-04-18

    The aim of this study is to determine the earthquake hazard using the exponential distribution method for different seismic source regions in Ağrı and its vicinity. A homogeneous earthquake catalog covering 1900-2015 (the instrumental period) and containing 456 earthquakes has been examined for Ağrı and vicinity. The catalog was compiled from several sources: Bogazici University Kandilli Observatory and Earthquake Research Institute (KOERI), the National Earthquake Monitoring Center (NEMC), TUBITAK, TURKNET, the International Seismological Centre (ISC), and the Incorporated Research Institutions for Seismology (IRIS). Ağrı and vicinity are divided into 7 different seismic source regions based on the epicenter distribution of earthquakes in the instrumental period, focal mechanism solutions, and existing tectonic structures. In the study, average magnitude values are calculated according to the specified magnitude ranges for the 7 seismic source regions. For each region, the largest difference between observed and expected cumulative probabilities across the determined magnitude classes is identified. The recurrence periods and annual numbers of earthquake occurrences are estimated for earthquakes in Ağrı and vicinity. As a result, occurrence probabilities of earthquakes are determined for the 7 seismic source regions: greater than magnitude 6.7 for Region 1, greater than 4.7 for Region 2, greater than 5.2 for Region 3, greater than 6.2 for Region 4, greater than 5.7 for Region 5, greater than 7.2 for Region 6, and greater than 6.2 for Region 7. The highest observed magnitude among the 7 seismic source regions of Ağrı and vicinity is magnitude 7, in Region 6. For Region 6, the occurrence times of future earthquakes are estimated for the determined magnitudes; a magnitude 7.2 event is expected in 158

  12. Influence of particle size distributions on magnetorheological fluid performances

    International Nuclear Information System (INIS)

    Chiriac, H; Stoian, G

    2010-01-01

    In this paper we investigate the influence that the size distribution of the magnetic particles might have on magnetorheological fluid performance. In our study, several size distributions were tailored, first by sieving a micrometric Fe powder to obtain narrow-distribution powders and then by recomposing new size distributions (different from Gaussian). We used commercially available spherical Fe particles (mesh -325). The powder was sieved by means of a sieve shaker using a series of sieves with the following mesh sizes: 20, 32, 40, 50, 63, and 80 micrometers. All magnetic powders were characterized through vibrating sample magnetometer (VSM) measurements and particle size analysis, and scanning electron microscope (SEM) images were also taken. Magnetorheological (MR) fluids based on the resulting magnetic powders were prepared and studied by means of a rheometer with a magnetorheological module. The MR fluids were measured both in a magnetic field and in zero magnetic field. As we noticed in our previous experiments, particle size distribution can also influence MR fluid performance.

  13. Twitter earthquake detection: Earthquake monitoring in a social world

    Science.gov (United States)

    Earle, Paul S.; Bowden, Daniel C.; Guy, Michelle R.

    2011-01-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word "earthquake" clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
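
    The short-term-average/long-term-average detector can be sketched on a synthetic per-minute tweet-count series; the window lengths, trigger threshold and toy data below are illustrative assumptions, not the USGS tuning:

```python
import numpy as np

def sta_lta(counts, n_sta=3, n_lta=60, eps=1e-9):
    """Ratio of short-term to long-term moving averages of a binned count series."""
    sta = np.convolve(counts, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(counts, np.ones(n_lta) / n_lta, mode="same")
    return sta / (lta + eps)

# Synthetic per-minute counts of "earthquake" tweets:
# quiet background of ~2/min with a burst starting at minute 100.
counts = np.full(200, 2.0)
counts[100:103] = 80.0

ratio = sta_lta(counts)
trigger = int(np.argmax(ratio))  # minute at which the STA/LTA ratio peaks
print(trigger, ratio[trigger] > 5.0)
```

A detection is declared when the ratio exceeds a threshold; the short window reacts to the burst while the long window tracks the background rate.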

  14. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    Science.gov (United States)

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
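
    The contrast above ultimately rests on a few elementary quantities; a minimal sketch of two of them, the Gutenberg-Richter conditional magnitude probability and the Poisson conversion of a long-term rate into a three-day probability (the b-value and annual rate are assumed for illustration, and this is not Michael's full model):

```python
import math

b = 1.0                      # assumed Gutenberg-Richter b-value
m_trigger, m_target = 4.8, 7.0

# Under Gutenberg-Richter, the probability that an event at or above the
# trigger magnitude is in fact at or above the target magnitude:
# P(M >= m_target | M >= m_trigger) = 10 ** (-b * (m_target - m_trigger))
p_cond = 10 ** (-b * (m_target - m_trigger))

# Long-term (Poisson) probability of an M >= 7 event in a 3-day window,
# given a hypothetical annual rate lam (illustrative, not the paper's value).
lam = 0.012                  # events per year
p_3day = 1.0 - math.exp(-lam * 3.0 / 365.0)

print(p_cond, p_3day)
```

Characteristic-earthquake models change the first quantity by concentrating probability at large magnitudes, which is why the clustering probabilities in the abstract differ by more than an order of magnitude.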

  15. Earthquake correlations and networks: A comparative study

    International Nuclear Information System (INIS)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-01-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog and with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distribution of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. In-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
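
    The Baiesi-Paczuski metric that this study builds on can be sketched on a toy catalog: for each event, find the preceding event minimizing the space-time-magnitude weight n_ij = t_ij · r_ij^df · 10^(-b·m_i), where a small n_ij means strong correlation. The distances, b and fractal dimension df below are illustrative assumptions, not the paper's exact variation:

```python
import numpy as np

def baiesi_paczuski_links(t, x, y, m, b=1.0, df=1.6):
    """For each event j, return the index of the preceding event i that
    minimizes n_ij = t_ij * r_ij**df * 10**(-b * m_i); -1 for the first event."""
    parent = np.full(len(t), -1)
    for j in range(1, len(t)):
        dt = t[j] - t[:j]
        r = np.hypot(x[j] - x[:j], y[j] - y[:j])
        nij = dt * np.maximum(r, 1e-3) ** df * 10.0 ** (-b * m[:j])
        parent[j] = int(np.argmin(nij))
    return parent

# Tiny synthetic catalog: a large event at t=0 and two later events;
# event 2 lies very close to event 0 in space, so it links back to it.
t = np.array([0.0, 1.0, 1.1])
x = np.array([0.0, 10.0, 0.01])
y = np.zeros(3)
m = np.array([6.0, 3.0, 2.0])
links = baiesi_paczuski_links(t, x, y, m)
print(links)
```

Thresholding n_ij (rather than keeping only the minimum) yields the correlation network whose degree and recurrence statistics the paper analyzes.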

  16. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    Science.gov (United States)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and the respective model parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L suggests a better model and a lower value a worse one. The time of the future earthquake is forecast by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), 13 April 2016 Myanmar earthquake (M 6.9) and 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area; this shows that the proposed techniques and models yield good forecasting accuracy.
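
    The model-selection step described above, fitting candidate distributions by maximum likelihood and comparing ln L, can be sketched with two candidates on hypothetical inter-event data (the paper fits Gamma, Lognormal, Weibull and Log-logistic to its own catalog; this NumPy-only sketch compares exponential and lognormal fits):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical inter-event times (a stand-in for real catalog intervals)
data = rng.lognormal(mean=2.0, sigma=0.5, size=400)

# Exponential MLE: rate = 1 / sample mean
lam = 1.0 / data.mean()
ll_exp = np.sum(np.log(lam) - lam * data)

# Lognormal MLE: mean and std of log(data)
mu, sig = np.log(data).mean(), np.log(data).std()
ll_lognorm = np.sum(-np.log(data * sig * np.sqrt(2 * np.pi))
                    - (np.log(data) - mu) ** 2 / (2 * sig ** 2))

# Higher log-likelihood = better-fitting model, as in the paper's ln L comparison
print(ll_lognorm > ll_exp)
```

With more than two candidates the comparison works the same way: fit each model by maximum likelihood and rank the resulting ln L values.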

  17. Molecular size distribution of Np(V)-humate

    International Nuclear Information System (INIS)

    Sakamoto, Yoshiaki; Nagao, Seiya; Tanaka, Tadao

    1996-10-01

    The molecular size distributions of humic acid and Np(V)-humate were studied as a function of pH and ionic strength by an ultrafiltration method. The fraction of small particles (10,000-30,000 daltons) of humic acid increased slightly with increasing solution pH. An ionic strength dependence of the molecular size distribution was clearly observed for humic acid: the abundance ratio of humic acid in the range from 10,000 to 30,000 daltons increased as the ionic strength rose from 0.015 M to 0.105 M, while that in the range from 30,000 to 100,000 daltons decreased. Most of the neptunium(V) in the 200 mg/l humic acid solution was fractionated into the 10,000-30,000 dalton range. The abundance ratio of neptunium(V) in the 10,000-30,000 dalton range showed no clear dependence on pH or ionic strength, despite the change in the molecular size distribution of humic acid with ionic strength. These results imply that the molecular size distribution of Np(V)-humate does not simply follow that of the humic acid. The stability constant of Np(V)-humate was measured as a function of the molecular size of the humic acid. The stability constant of Np(V)-humate in the range from 10,000 to 30,000 daltons was the highest compared with the constants in the molecular size ranges 100,000 daltons-0.45 μm, 30,000-100,000 daltons, 5,000-10,000 daltons, and under 5,000 daltons. These results may indicate that the Np(V) complexation with humic acid is dominated by the interaction of the neptunyl ion with humic acid in a specific molecular size range. (author)

  18. XRD characterisation of nanoparticle size and shape distributions

    International Nuclear Information System (INIS)

    Armstrong, N.; Kalceff, W.; Cline, J.P.; Bonevich, J.

    2004-01-01

    Full text: The form of XRD lines and the extent of their broadening provide useful structural information about the shape, size distribution, and modal characteristics of the nanoparticles comprising the specimen. The defect content of the nanoparticles can also be determined, including the type, dislocation density, and stacking faults/twinning. This information is convoluted together and can be grouped into 'size' and 'defect' broadening contributions. Modern X-ray diffraction analysis techniques have concentrated on quantifying the broadening arising from the size and defect contributions, while accounting for overlapping of profiles, instrumental broadening, background scattering and noise components. We report on a combined Bayesian/Maximum Entropy (MaxEnt) technique developed for use in the certification of a NIST Standard Reference Material (SRM) for size-broadened line profiles. The approach was chosen because of its generality in removing instrumental broadening from the observed line profiles, and its ability to determine not only the average crystallite size but also the distribution of sizes and the average shape of crystallites. Moreover, this Bayesian/MaxEnt technique is fully quantitative, in that it also determines uncertainties in the crystallite-size distribution and other parameters. Both experimental and numerically simulated size-broadened line profiles, modelled on a range of specimens with spherical and non-spherical morphologies, are presented to demonstrate how this information can be retrieved from line profile data. The sensitivity of the Bayesian/MaxEnt method in determining the size distribution under varying a priori information is emphasised and discussed

  19. Mass size distribution of particle-bound water

    Science.gov (United States)

    Canepari, S.; Simonetti, G.; Perrino, C.

    2017-09-01

    The thermal-ramp Karl Fischer method (tr-KF) for the determination of PM-bound water has been applied to size-segregated PM samples collected in areas subject to different environmental conditions (protracted atmospheric stability, desert dust intrusion, urban atmosphere). This method, based on the use of a thermal ramp for the desorption of water from PM samples and subsequent analysis by the coulometric KF technique, had previously been shown to differentiate water contributions retained with different strengths and associated with different chemical components of the atmospheric aerosol. The application of the method to size-segregated samples has revealed that water shows a typical mass size distribution in each of the three environmental situations taken into consideration. A very similar size distribution was shown by the chemical PM components that prevailed during each event: ammonium nitrate in the case of atmospheric stability, crustal species in the case of desert dust, and road-dust components in the case of urban sites. The shape of the tr-KF curve varied according to the size of the collected particles. Considering the size ranges that best characterize each event (fine fraction for atmospheric stability, coarse fraction for dust intrusion, bi-modal distribution for urban dust), this shape is consistent with the typical tr-KF shape shown by water bound to the chemical species that predominate in the same PM size range (ammonium nitrate, crustal species, secondary/combustion and road-dust components).

  20. Rapid post-earthquake modelling of coseismic landslide intensity and distribution for emergency response decision support

    Directory of Open Access Journals (Sweden)

    T. R. Robinson

    2017-09-01

    Full Text Available Current methods to identify coseismic landslides immediately after an earthquake using optical imagery are too slow to effectively inform emergency response activities. Issues with cloud cover, data collection and processing, and manual landslide identification mean that even the most rapid mapping exercises are often incomplete when the emergency response ends. In this study, we demonstrate how traditional empirical methods for modelling the total distribution and relative intensity (in terms of point density) of coseismic landsliding can be successfully undertaken in the hours and days immediately after an earthquake, allowing the results to effectively inform stakeholders during the response. The method uses fuzzy logic in a GIS (Geographic Information Systems) to quickly assess and identify the location-specific relationships between predisposing factors and landslide occurrence during the earthquake, based on small initial samples of identified landslides. We show that this approach can accurately model both the spatial pattern and the number density of landsliding from the event based on just several hundred mapped landslides, provided they have sufficiently wide spatial coverage, improving upon previous methods. This suggests that systematic high-fidelity mapping of landslides following an earthquake is not necessary for informing rapid modelling attempts. Instead, mapping should focus on rapid sampling from the entire affected area to generate results that can inform the modelling. This method is therefore suited to conditions in which imagery is affected by partial cloud cover or in which the total number of landslides is so large that mapping requires significant time to complete. The method therefore has the potential to provide a quick assessment of landslide hazard after an earthquake and may inform emergency operations more effectively compared to current practice.
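
    The fuzzy-logic combination of predisposing factors can be sketched with the common fuzzy gamma operator; the factor choices, membership breakpoints and γ below are invented for illustration, whereas the paper derives its location-specific relationships from mapped landslide samples:

```python
import numpy as np

def linear_membership(x, lo, hi):
    """Fuzzy membership rising linearly from 0 at lo to 1 at hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma operator: a compromise between the fuzzy algebraic
    product and the fuzzy algebraic sum of the input memberships."""
    m = np.stack(memberships)
    prod = np.prod(m, axis=0)
    alg_sum = 1.0 - np.prod(1.0 - m, axis=0)
    return prod ** (1.0 - gamma) * alg_sum ** gamma

# Three hypothetical grid cells with increasing slope and shaking
slope = np.array([5.0, 25.0, 40.0])    # degrees
shaking = np.array([0.1, 0.4, 0.7])    # PGA in g

susceptibility = fuzzy_gamma([
    linear_membership(slope, 10, 45),
    linear_membership(shaking, 0.05, 0.6),
])
print(susceptibility)
```

In a real application the membership functions would be fitted cell-by-cell from the initial landslide sample, then applied across the whole affected area.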

  1. Comparison of four moderate-size earthquakes in southern California using seismology and InSAR

    Science.gov (United States)

    Mellors, R.J.; Magistrale, H.; Earle, P.; Cogbill, A.H.

    2004-01-01

    Source parameters determined from interferometric synthetic aperture radar (InSAR) measurements and from seismic data are compared from four moderate-size (less than M 6) earthquakes in southern California. The goal is to verify approximate detection capabilities of InSAR, assess differences in the results, and test how the two results can be reconciled. First, we calculated the expected surface deformation from all earthquakes greater than magnitude 4 in areas with available InSAR data (347 events). A search for deformation from the events in the interferograms yielded four possible events with magnitudes less than 6. The search for deformation was based on a visual inspection as well as cross-correlation in two dimensions between the measured signal and the expected signal. A grid-search algorithm was then used to estimate focal mechanism and depth from the InSAR data. The results were compared with locations and focal mechanisms from published catalogs. An independent relocation using seismic data was also performed. The seismic locations fell within the area of the expected rupture zone for the three events that show clear surface deformation. Therefore, the technique shows the capability to resolve locations with high accuracy and is applicable worldwide. The depths determined by InSAR agree with well-constrained seismic locations determined in a 3D velocity model. Depth control for well-imaged shallow events using InSAR data is good, and better than the seismic constraints in some cases. A major difficulty for InSAR analysis is the poor temporal coverage of InSAR data, which may make it impossible to distinguish deformation due to different earthquakes at the same location.
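
    The cross-correlation step between the measured and expected deformation signals can be sketched in miniature with a brute-force shift search over a synthetic deformation pattern (the actual analysis works on full interferograms and includes a grid search over focal mechanism and depth):

```python
import numpy as np

def best_shift(expected, measured):
    """Brute-force 2-D cross-correlation: return the (dy, dx) shift of
    `expected` that best matches `measured` (both small 2-D arrays)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-5, 6):
        for dx in range(-5, 6):
            shifted = np.roll(np.roll(expected, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * measured)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Hypothetical "expected surface deformation": a Gaussian bull's-eye
y, x = np.mgrid[0:40, 0:40]
expected = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 30.0)
measured = np.roll(np.roll(expected, 3, axis=0), -2, axis=1)  # true offset (3, -2)

shift = best_shift(expected, measured)
print(shift)
```

The peak of the correlation surface recovers the offset between the predicted and observed deformation; in practice the same idea is applied while varying source depth and mechanism.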

  2. Firm-size distribution and price-cost margins in Dutch manufacturing

    NARCIS (Netherlands)

    Y.M. Prince (Yvonne); A.R. Thurik (Roy)

    1993-01-01

    Industrial economists surmise a relation between the size distribution of firms and performance. Usually, attention is focused on the high end of the size distribution. The widely used 4-firm seller concentration, C4, ignores what happens at the low end of the size distribution. An
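
    The 4-firm seller concentration C4 mentioned above is simply the combined market share of the four largest firms; a minimal sketch with hypothetical sales figures:

```python
def concentration_ratio(sales, k=4):
    """k-firm concentration: combined market share of the k largest firms."""
    total = sum(sales)
    top = sorted(sales, reverse=True)[:k]
    return sum(top) / total

# Hypothetical industry: two large firms and several small ones
sales = [30, 25, 10, 8, 5, 5, 4, 3]
print(round(concentration_ratio(sales), 3))
```

Because C4 depends only on the four largest values, any reshuffling of sales among the smaller firms leaves it unchanged, which is exactly the low-end blindness the abstract points out.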

  3. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    Science.gov (United States)

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.
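
    The quoted Mw ∼ 6.8 potential can be checked back-of-envelope from the seismic moment M0 = μAD and the Hanks-Kanamori moment magnitude; the patch width, average slip and rigidity below are assumed values for illustration, not the paper's inversion results:

```python
import math

def moment_magnitude(length_m, width_m, slip_m, mu=3.0e10):
    """Mw from seismic moment M0 = mu * A * D (SI units, Hanks-Kanamori scale)."""
    m0 = mu * length_m * width_m * slip_m  # seismic moment in N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# ~50-km-long locked patch; width and slip are assumed for illustration
print(round(moment_magnitude(50e3, 12e3, 1.3), 2))
```

With these assumed values the estimate lands near Mw 6.8, consistent with the size of the 1868 Hayward earthquake cited in the abstract.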

  4. Sensitivity of tsunami wave profiles and inundation simulations to earthquake slip and fault geometry for the 2011 Tohoku earthquake

    KAUST Repository

    Goda, Katsuichiro; Mai, Paul Martin; Yasuda, Tomohiro; Mori, Nobuhito

    2014-01-01

    In this study, we develop stochastic random-field slip models for the 2011 Tohoku earthquake and conduct a rigorous sensitivity analysis of tsunami hazards with respect to the uncertainty of earthquake slip and fault geometry. Synthetic earthquake slip distributions generated from the modified Mai-Beroza method captured key features of inversion-based source representations of the mega-thrust event, which were calibrated against rich geophysical observations of this event. Using original and synthesised earthquake source models (varied for strike, dip, and slip distributions), tsunami simulations were carried out and the resulting variability in tsunami hazard estimates was investigated. The results highlight significant sensitivity of the tsunami wave profiles and inundation heights to the coastal location and the slip characteristics, and indicate that earthquake slip characteristics are a major source of uncertainty in predicting tsunami risks due to future mega-thrust events.

  6. Chilean megathrust earthquake recurrence linked to frictional contrast at depth

    Science.gov (United States)

    Moreno, M.; Li, S.; Melnick, D.; Bedford, J. R.; Baez, J. C.; Motagh, M.; Metzger, S.; Vajedian, S.; Sippl, C.; Gutknecht, B. D.; Contreras-Reyes, E.; Deng, Z.; Tassara, A.; Oncken, O.

    2018-04-01

    Fundamental processes of the seismic cycle in subduction zones, including those controlling the recurrence and size of great earthquakes, are still poorly understood. Here, by studying the 2016 earthquake in southern Chile—the first large event within the rupture zone of the 1960 earthquake (moment magnitude (Mw) = 9.5)—we show that the frictional zonation of the plate interface fault at depth mechanically controls the timing of more frequent, moderate-size deep events (Mw < 8) and less frequent great shallow earthquakes (Mw > 8.5). We model the evolution of stress build-up for a seismogenic zone with heterogeneous friction to examine the link between the 2016 and 1960 earthquakes. Our results suggest that the deeper segments of the seismogenic megathrust are weaker and interseismically loaded by a more strongly coupled, shallower asperity. Deeper segments fail earlier (∼60 yr recurrence), producing moderate-size events that precede the failure of the shallower region, which fails in a great earthquake (recurrence >110 yr). We interpret the contrasting frictional strength and lag time between deeper and shallower earthquakes to be controlled by variations in pore fluid pressure. Our integrated analysis strengthens understanding of the mechanics and timing of great megathrust earthquakes, and therefore could aid in the seismic hazard assessment of other subduction zones.
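
The timing contrast described above (weaker deep segments failing at ∼60 yr versus a stronger shallow asperity at >110 yr) can be illustrated with a minimal constant-stressing-rate sketch. The stressing rate and strength thresholds below are invented for illustration and are not the paper's model parameters:

```python
# Timing contrast between a weaker deep segment and a stronger,
# more strongly coupled shallow asperity under steady loading.
# All numbers are illustrative assumptions.

def time_to_failure(strength_drop_mpa, loading_rate_mpa_per_yr):
    """Years for interseismic stressing to reach the failure threshold."""
    return strength_drop_mpa / loading_rate_mpa_per_yr

rate = 0.05                             # MPa/yr stressing rate (assumed)
t_deep = time_to_failure(3.0, rate)     # weaker deep segment
t_shallow = time_to_failure(6.0, rate)  # stronger shallow asperity
print(round(t_deep), round(t_shallow))  # prints: 60 120
```

The halved failure threshold at depth reproduces the roughly 2:1 recurrence contrast reported in the abstract.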

  7. Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions

    Science.gov (United States)

    Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.

    2014-12-01

    The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution: the neutral model selected size differentials centered on zero, allowing an equal chance of larger or smaller ostracods at each speciation, whereas the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that ostracod evolution has followed a model that directionally pushes mean size down, rather than a neutral model. Our model matched the overall magnitude of the size decrease, although it produced a constant linear decline while the observed data show a much more rapid initial decrease followed by a roughly constant size. This nuance in the observed trend ultimately suggests a more complex mode of size evolution. In conclusion, probabilistic methods can provide valuable insight into possible evolutionary mechanisms determining size evolution in ostracods.
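
A minimal sketch of the neutral-versus-directional comparison described above, assuming Gaussian size steps. It drops the speciation/extinction bookkeeping of the full branching model and simply tracks independent lineages; all rates and step sizes are illustrative assumptions:

```python
# Neutral vs directional random-walk model of (log) body size.
# Step counts, step sizes, and lineage counts are illustrative.
import random

def simulate(mean_step, n_steps=200, n_lineages=200, sd=0.1, seed=1):
    """Average log-size across lineages after n_steps size changes."""
    rng = random.Random(seed)
    sizes = [0.0] * n_lineages  # log10 biovolume, relative to the ancestor
    for _ in range(n_steps):
        sizes = [s + rng.gauss(mean_step, sd) for s in sizes]
    return sum(sizes) / len(sizes)

neutral = simulate(mean_step=0.0)        # drift only: stays near 0 on average
directional = simulate(mean_step=-0.01)  # biased toward smaller sizes
print(neutral, directional)
```

With the same seed, the directional run is exactly the neutral run shifted down by n_steps times the bias, which is the signature the abstract's model comparison exploits.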

  8. RAPID EXTRACTION OF LANDSLIDE AND SPATIAL DISTRIBUTION ANALYSIS AFTER JIUZHAIGOU Ms7.0 EARTHQUAKE BASED ON UAV IMAGES

    Directory of Open Access Journals (Sweden)

    Q. S. Jiao

    2018-04-01

    The Jiuzhaigou earthquake caused mountain collapses and triggered numerous landslides in the Jiuzhaigou scenic area and along surrounding roads, causing road blockage and serious ecological damage. Because of the urgency of the rescue, the authors deployed an unmanned aerial vehicle (UAV) and entered the disaster area as early as August 9 to obtain aerial images near the epicenter. After summarizing the characteristics of earthquake-triggered landslides in aerial images, landslide image objects were obtained by multi-scale segmentation using an object-oriented analysis method, and the feature rule set of each level was built automatically with the SEaTH (Separability and Thresholds) algorithm to enable rapid landslide extraction. Compared with visual interpretation, the object-oriented automatic extraction method achieved an accuracy of 94.3 %. The spatial distribution of the earthquake landslides showed a significant positive correlation with slope and relief, a negative correlation with roughness, and no obvious correlation with aspect; the lack of an aspect relationship probably reflects the large distance between the study area and the seismogenic fault. This work provides technical support for earthquake field emergency response, earthquake landslide prediction and disaster loss assessment.

  9. Rapid Extraction of Landslide and Spatial Distribution Analysis after Jiuzhaigou Ms7.0 Earthquake Based on Uav Images

    Science.gov (United States)

    Jiao, Q. S.; Luo, Y.; Shen, W. H.; Li, Q.; Wang, X.

    2018-04-01

    The Jiuzhaigou earthquake caused mountain collapses and triggered numerous landslides in the Jiuzhaigou scenic area and along surrounding roads, causing road blockage and serious ecological damage. Because of the urgency of the rescue, the authors deployed an unmanned aerial vehicle (UAV) and entered the disaster area as early as August 9 to obtain aerial images near the epicenter. After summarizing the characteristics of earthquake-triggered landslides in aerial images, landslide image objects were obtained by multi-scale segmentation using an object-oriented analysis method, and the feature rule set of each level was built automatically with the SEaTH (Separability and Thresholds) algorithm to enable rapid landslide extraction. Compared with visual interpretation, the object-oriented automatic extraction method achieved an accuracy of 94.3 %. The spatial distribution of the earthquake landslides showed a significant positive correlation with slope and relief, a negative correlation with roughness, and no obvious correlation with aspect; the lack of an aspect relationship probably reflects the large distance between the study area and the seismogenic fault. This work provides technical support for earthquake field emergency response, earthquake landslide prediction and disaster loss assessment.

  10. The 2010 Chile Earthquake: Rapid Assessments of Tsunami

    OpenAIRE

    Michelini, A.; Lauciani, V.; Selvaggi, G.; Lomax, A.

    2010-01-01

    After an underwater earthquake, rapid real-time assessment of earthquake parameters is important for emergency response related to infrastructure damage and, perhaps more exigently, for issuing warnings of the possibility of an impending tsunami. Since 2005, the Istituto Nazionale di Geofisica e Vulcanologia (INGV) has worked on the rapid quantification of earthquake magnitude and tsunami potential, especially for the Mediterranean area. This work includes quantification of earthquake size fr...

  11. Earthquake correlations and networks: A comparative study

    Science.gov (United States)

    Krishna Mohan, T. R.; Revathi, P. G.

    2011-04-01

    We quantify the correlation between earthquakes and use the same to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog, with links between the more correlated ones. A list of recurrences to each of the earthquakes is identified employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distributions of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of the recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. The in-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained with the recurrence time distribution. The first regime confirms the Omori law for aftershocks while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to be signaling the end of the aftershock regime in an objective fashion.
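
The Baiesi-Paczuski metric referenced above scores each earthquake pair by the expected number of events occurring as close in space-time to the parent by chance, n_ij = t_ij · r_ij^df · 10^(−b·m_i); small values mark strongly correlated (likely causally connected) pairs. A toy sketch, in which the b-value, fractal dimension, and catalog entries are all assumed:

```python
# Toy Baiesi-Paczuski correlation metric: small score = strong correlation.
# Constants and the three candidate pairs are illustrative assumptions.

B, DF = 1.0, 1.6  # Gutenberg-Richter b-value and fractal dimension (assumed)

def metric(t_days, r_km, mag_parent):
    """Expected count of chance events this close to the parent;
    the smaller it is, the more likely j is an aftershock of i."""
    return t_days * (r_km ** DF) * 10.0 ** (-B * mag_parent)

# (time after parent in days, distance in km, parent magnitude)
pairs = [(0.5, 2.0, 6.0), (30.0, 150.0, 6.0), (0.5, 2.0, 3.0)]
scores = [metric(*p) for p in pairs]
best = min(range(len(scores)), key=scores.__getitem__)
print(scores, best)  # the near-in-time, near-in-space pair of the M6 wins
```

Note how a large parent magnitude lowers the score: the same space-time proximity is far less likely to be coincidental near an M6 than near an M3.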

  12. A minimalist model of characteristic earthquakes

    DEFF Research Database (Denmark)

    Vázquez-Prada, M.; González, Á.; Gómez, J.B.

    2002-01-01

    In a spirit akin to the sandpile model of self-organized criticality, we present a simple statistical model of the cellular-automaton type which simulates the role of an asperity in the dynamics of a one-dimensional fault. This model produces an earthquake spectrum similar to the characteristic-earthquake behaviour of some seismic faults. The model, which has no free parameters, is amenable to an algebraic description as a Markov chain. This possibility illuminates some important results obtained by Monte Carlo simulations, such as the earthquake size-frequency relation and the recurrence time of the characteristic earthquake.

  13. A new Bayesian Inference-based Phase Associator for Earthquake Early Warning

    Science.gov (United States)

    Meier, Men-Andrin; Heaton, Thomas; Clinton, John; Wiemer, Stefan

    2013-04-01

    State-of-the-art network-based Earthquake Early Warning (EEW) systems can provide warnings for large magnitude 7+ earthquakes. Although regions in the direct vicinity of the epicenter will not receive warnings prior to damaging shaking, real-time event characterization is available before the destructive S-wave arrival across much of the strongly affected region. In contrast, in the case of the more frequent medium-size events, such as the devastating 1994 Mw6.7 Northridge, California, earthquake, providing timely warning to the smaller damage zone is more difficult. For such events the "blind zone" of current systems (e.g. the CISN ShakeAlert system in California) is similar in size to the area over which severe damage occurs. We propose a faster and more robust Bayesian inference-based event associator that, in contrast to the current standard associators (e.g. Earthworm Binder), is tailored to EEW and exploits information other than phase arrival times alone. In particular, the associator potentially allows for reliable automated event association with as little as two observations, which, compared to the ShakeAlert system, would speed up the real-time characterizations by about ten seconds and thus reduce the blind zone area by up to 80%. We compile an extensive data set of regional and teleseismic earthquake and noise waveforms spanning a wide range of earthquake magnitudes and tectonic regimes. We pass these waveforms through a causal real-time filterbank with passband filters between 0.1 and 50 Hz and, updating every second from the event detection, extract the maximum amplitudes in each frequency band. Using this dataset, we define distributions of amplitude maxima in each passband as a function of epicentral distance and magnitude. For the real-time data, we pass incoming broadband and strong motion waveforms through the same filterbank and extract an evolving set of maximum amplitudes in each passband. We use the maximum amplitude distributions to check

  14. Geodetically resolved slip distribution of the 27 August 2012 Mw=7.3 El Salvador earthquake

    Science.gov (United States)

    Geirsson, H.; La Femina, P. C.; DeMets, C.; Hernandez, D. A.; Mattioli, G. S.; Rogers, R.; Rodriguez, M.

    2013-12-01

    On 27 August 2012 a Mw=7.3 earthquake occurred offshore of Central America, causing a small tsunami in El Salvador and Nicaragua but little damage otherwise. This is the largest magnitude earthquake in this area since 2001. We use co-seismic displacements estimated from episodic and continuous GPS station time series to model the magnitude and spatial variability of slip for this event. The estimated surface displacements for this earthquake are small. We use TDEFNODE to model the displacements using two different modeling approaches. In the first model, we solve for homogeneous slip on free rectangular fault(s), and in the second model we solve for distributed slip on the main thrust, realized using different slab models. The results indicate that we can match the seismic moment release with models indicating rupture of a large area with a low magnitude of slip. The slip is at shallow-to-intermediate depths on the main thrust off the coast of El Salvador. Additionally, we observe a deeper region of slip to the east that reaches toward the Gulf of Fonseca between El Salvador and Nicaragua. The observed tsunami additionally indicates near-trench rupture off the coast of El Salvador. The duration of the rupture is estimated from seismic data to be 70 s, which indicates a slow rupture process. Since the geodetic moment we obtain agrees with the seismic moment, the earthquake does not appear to have been associated with aseismic slip.
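
The "large area, low magnitude of slip" result is consistent with simple moment bookkeeping via M0 = μAD. In this sketch only the Mw 7.3 magnitude comes from the abstract; the rupture dimensions and rigidity are assumed illustrative values:

```python
# Mean slip implied by Mw 7.3 spread over a broad rupture (sketch).
# Rupture dimensions and rigidity are assumptions, not the study's values.

MU = 3.0e10                    # shear modulus, Pa (typical, assumed)
M0 = 10 ** (1.5 * 7.3 + 9.05)  # seismic moment for Mw 7.3, N*m
area = 100e3 * 50e3            # assumed 100 km x 50 km rupture, m^2

mean_slip = M0 / (MU * area)   # average slip, m
print(f"mean slip ~ {mean_slip:.2f} m")
```

Spreading a Mw 7.3 moment over so large an area requires well under a meter of average slip, matching the geodetic inference of broad, low-slip rupture.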

  15. A multivariate rank test for comparing mass size distributions

    KAUST Repository

    Lombard, F.

    2012-04-01

    Particle size analyses of a raw material are commonplace in the mineral processing industry. Knowledge of particle size distributions is crucial in planning milling operations to enable an optimum degree of liberation of valuable mineral phases, to minimize plant losses due to an excess of oversize or undersize material or to attain a size distribution that fits a contractual specification. The problem addressed in the present paper is how to test the equality of two or more underlying size distributions. A distinguishing feature of these size distributions is that they are not based on counts of individual particles. Rather, they are mass size distributions giving the fractions of the total mass of a sampled material lying in each of a number of size intervals. As such, the data are compositional in nature, using the terminology of Aitchison [1]; that is, multivariate vectors the components of which add to 100%. In the literature, various versions of Hotelling's T² have been used to compare matched pairs of such compositional data. In this paper, we propose a robust test procedure based on ranks as a competitor to Hotelling's T². In contrast to the latter statistic, the power of the rank test is not unduly affected by the presence of outliers or of zeros among the data.

  16. Laboratory generated M -6 earthquakes

    Science.gov (United States)

    McLaskey, Gregory C.; Kilgore, Brian D.; Lockner, David A.; Beeler, Nicholas M.

    2014-01-01

    We consider whether mm-scale earthquake-like seismic events generated in laboratory experiments are consistent with our understanding of the physics of larger earthquakes. This work focuses on a population of 48 very small shocks that are foreshocks and aftershocks of stick–slip events occurring on a 2.0 m by 0.4 m simulated strike-slip fault cut through a large granite sample. Unlike the larger stick–slip events that rupture the entirety of the simulated fault, the small foreshocks and aftershocks are contained events whose properties are controlled by the rigidity of the surrounding granite blocks rather than characteristics of the experimental apparatus. The large size of the experimental apparatus, high fidelity sensors, rigorous treatment of wave propagation effects, and in situ system calibration separates this study from traditional acoustic emission analyses and allows these sources to be studied with as much rigor as larger natural earthquakes. The tiny events have short (3–6 μs) rise times and are well modeled by simple double couple focal mechanisms that are consistent with left-lateral slip occurring on a mm-scale patch of the precut fault surface. The repeatability of the experiments indicates that they are the result of frictional processes on the simulated fault surface rather than grain crushing or fracture of fresh rock. Our waveform analysis shows no significant differences (other than size) between the M -7 to M -5.5 earthquakes reported here and larger natural earthquakes. Their source characteristics such as stress drop (1–10 MPa) appear to be entirely consistent with earthquake scaling laws derived for larger earthquakes.
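
The claim that 1-10 MPa stress drops and mm-scale sources are mutually consistent for M −6 events can be checked with the standard circular-crack relation Δσ = (7/16)·M0/r³. This is a consistency sketch using textbook formulas, not the authors' computation:

```python
# Source radius implied by magnitude and stress drop for a circular crack.
# Uses Hanks & Kanamori (1979) for M0 and the Eshelby crack relation.

def source_radius_m(mw, stress_drop_pa):
    m0 = 10 ** (1.5 * mw + 9.05)                     # seismic moment, N*m
    return (7.0 * m0 / (16.0 * stress_drop_pa)) ** (1.0 / 3.0)

for dsig in (1e6, 10e6):                             # 1 and 10 MPa
    r = source_radius_m(-6.0, dsig)
    print(f"stress drop {dsig / 1e6:.0f} MPa -> radius {r * 1000:.1f} mm")
```

For M −6 the implied radii fall in the few-millimeter range, consistent with the mm-scale fault patches described in the abstract.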

  17. Twitter earthquake detection: earthquake monitoring in a social world

    Directory of Open Access Journals (Sweden)

    Daniel C. Bowden

    2011-06-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public text messages, can augment USGS earthquake response products and the delivery of hazard information. Rapid detection and qualitative assessment of shaking events are possible because people begin sending public Twitter messages (tweets) within tens of seconds after feeling shaking. Here we present and evaluate an earthquake detection procedure that relies solely on Twitter data. A tweet-frequency time series constructed from tweets containing the word “earthquake” clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a short-term-average, long-term-average algorithm. When tuned to a moderate sensitivity, the detector finds 48 globally-distributed earthquakes with only two false triggers in five months of data. The number of detections is small compared to the 5,175 earthquakes in the USGS global earthquake catalog for the same five-month time period, and no accurate location or magnitude can be assigned based on tweet data alone. However, Twitter earthquake detections are not without merit. The detections are generally caused by widely felt events that are of more immediate interest than those with no human impact. The detections are also fast; about 75% occur within two minutes of the origin time. This is considerably faster than seismographic detections in poorly instrumented regions of the world. The tweets triggering the detections also provided very short first-impression narratives from people who experienced the shaking.
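
The short-term-average/long-term-average (STA/LTA) trigger named above can be sketched on a synthetic tweet-frequency series. The window lengths, threshold, and series below are illustrative assumptions, not the USGS configuration:

```python
# Toy STA/LTA trigger on a per-minute tweet-count series.
# Window lengths and threshold are illustrative assumptions.

def sta_lta_triggers(counts, n_sta=3, n_lta=30, threshold=5.0):
    """Indices where the short/long average ratio exceeds the threshold."""
    triggers = []
    for i in range(n_lta, len(counts)):
        sta = sum(counts[i - n_sta:i]) / n_sta   # short-term average
        lta = sum(counts[i - n_lta:i]) / n_lta   # long-term average
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers

# quiet background of ~2 tweets/min, then a burst after a felt event
series = [2] * 60 + [40, 80, 60] + [20] * 10
print(sta_lta_triggers(series))
```

The short window reacts to the burst within a couple of samples while the long window still reflects the quiet background, which is what makes the ratio a fast, self-normalizing detector.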

  18. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of physics and dynamics of the extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will be also discussed along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
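
The frequency-magnitude relationship such models must reproduce is Gutenberg-Richter, log10 N(≥M) = a − bM, from which a synthetic catalog can be drawn by inverse-transform sampling. The b-value and minimum magnitude below are assumed illustrative values:

```python
# Inverse-transform sampling of the Gutenberg-Richter relation:
# P(M >= m) = 10**(-b*(m - m_min)), so m = m_min - log10(U)/b for
# U ~ Uniform(0, 1). The b-value and m_min are assumptions.
import math
import random

def gr_sample(n, b=1.0, m_min=4.0, seed=42):
    rng = random.Random(seed)
    return [m_min - math.log10(rng.random()) / b for _ in range(n)]

catalog = gr_sample(100_000)
frac_ge_5 = sum(m >= 5.0 for m in catalog) / len(catalog)
print(round(frac_ge_5, 3))  # close to 10**-1 = 0.1 for b = 1
```

Extreme events arise naturally here as the rare, heavy-tailed draws; checking whether a simulated fault system reproduces this tail is a basic validation step for the models described above.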

  19. Coulomb Failure Stress Accumulation in Nepal After the 2015 Mw 7.8 Gorkha Earthquake: Testing Earthquake Triggering Hypothesis and Evaluating Seismic Hazards

    Science.gov (United States)

    Xiong, N.; Niu, F.

    2017-12-01

    A Mw 7.8 earthquake struck Gorkha, Nepal, on April 25, 2015, resulting in more than 8000 deaths and 3.5 million homeless. The earthquake initiated 70 km west of Kathmandu and propagated eastward, rupturing an area of approximately 150 km by 60 km in size. However, the earthquake failed to fully rupture the locked fault beneath the Himalaya, suggesting that the regions south of Kathmandu and west of the current rupture are still locked and a much more powerful earthquake might occur in the future. Therefore, the seismic hazard of the unruptured region is of great concern. In this study, we investigated the Coulomb failure stress (CFS) accumulation on the unruptured fault transferred by the Gorkha earthquake and some nearby historical great earthquakes. First, we calculated the co-seismic CFS changes of the Gorkha earthquake on the nodal planes of 16 large aftershocks to quantitatively examine whether they were brought closer to failure by the mainshock. It is shown that at least 12 of the 16 aftershocks were encouraged by an increase of CFS of 0.1-3 MPa. The correspondence between the distribution of off-fault aftershocks and the increased CFS pattern also validates the applicability of the earthquake triggering hypothesis in the thrust regime of Nepal. With this validation as confidence, we calculated the co-seismic CFS change on the locked region imparted by the Gorkha earthquake and historical great earthquakes. A newly proposed ramp-flat-ramp-flat fault geometry model was employed, and the source parameters of historical earthquakes were computed with an empirical scaling relationship. A broad region south of Kathmandu and west of the current rupture was shown to be positively stressed, with CFS changes roughly ranging between 0.01 and 0.5 MPa. The maximum CFS increase (>1 MPa) was found in the updip segment south of the current rupture, implying a high seismic hazard.
Since the locked region may be additionally stressed by the post-seismic relaxation of the lower
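
The Coulomb failure stress change used throughout this study is conventionally written ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the slip direction, Δσn is the effective normal stress change (positive for unclamping), and μ′ is the effective friction coefficient. A minimal sketch with illustrative numbers, not the study's resolved values:

```python
# Coulomb failure stress change on a receiver fault (sketch).
# Sign convention: positive normal change = unclamping. Inputs are
# illustrative, not resolved stresses from the Gorkha study.

def delta_cfs(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Positive values bring the receiver fault closer to failure."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# a plane loaded in shear and slightly unclamped: promoted toward failure
print(delta_cfs(0.3, 0.5))   # 0.5 MPa, in the range cited for aftershocks
```

A positive result of a few tenths of an MPa is the kind of perturbation the abstract associates with encouraged aftershocks.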

  20. Distributions of households by size: differences and trends.

    Science.gov (United States)

    Kuznets, S

    1982-01-01

    "This article deals with the distributions of households by size, that is, by number of persons, as they are observed in international comparisons, and for fewer countries, over time." The contribution of differentials in household size to inequality in income distribution among persons and households is discussed. Data are for both developed and developing countries. excerpt

  1. Fissure formation in coke. 3: Coke size distribution and statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    D.R. Jenkins; D.E. Shaw; M.R. Mahoney [CSIRO, North Ryde, NSW (Australia). Mathematical and Information Sciences

    2010-07-15

    A model of coke stabilization, based on a fundamental model of fissuring during carbonisation is used to demonstrate the applicability of the fissuring model to actual coke size distributions. The results indicate that the degree of stabilization is important in determining the size distribution. A modified form of the Weibull distribution is shown to provide a better representation of the whole coke size distribution compared to the Rosin-Rammler distribution, which is generally only fitted to the lump coke. A statistical analysis of a large number of experiments in a pilot scale coke oven shows reasonably good prediction of the coke mean size, based on parameters related to blend rank, amount of low rank coal, fluidity and ash. However, the prediction of measures of the spread of the size distribution is more problematic. The fissuring model, the size distribution representation and the statistical analysis together provide a comprehensive capability for understanding and predicting the mean size and distribution of coke lumps produced during carbonisation. 12 refs., 16 figs., 4 tabs.
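
The Rosin-Rammler law mentioned above is the Weibull survival function applied to mass: R(d) = exp(−(d/d63)^n) gives the mass fraction retained above size d, with d63 the size at which 1/e of the mass is retained. A sketch with assumed parameters, not values fitted to the paper's data:

```python
# Rosin-Rammler (Weibull survival) size distribution (sketch).
# d63 and the uniformity exponent n are illustrative assumptions.
import math

def retained(d_mm, d63=55.0, n=3.5):
    """Mass fraction coarser than size d."""
    return math.exp(-((d_mm / d63) ** n))

def mass_in_band(lo_mm, hi_mm, **params):
    """Mass fraction between two sieve sizes."""
    return retained(lo_mm, **params) - retained(hi_mm, **params)

print(retained(55.0))          # 1/e ~ 0.368: d63 by definition
print(mass_in_band(40.0, 60.0))
```

The band function is how sieve data map onto the fitted curve; the paper's point is that a modified Weibull of this type tracks the whole distribution better than fitting Rosin-Rammler to the lump coke alone.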

  2. Earthquake recurrence models fail when earthquakes fail to reset the stress field

    Science.gov (United States)

    Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.

    2012-01-01

    Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The forecasted rate of M6 earthquakes based on Parkfield's microseismicity b-values corresponds well to observed rates. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample, which did release a significant portion of the stress along its fault segment and yielded a substantial change in b-values.
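
The b-values underpinning such forecasts are conventionally estimated with the Aki-Utsu maximum-likelihood formula, b = log10(e) / (⟨M⟩ − (Mc − ΔM/2)), where ⟨M⟩ is the mean magnitude above the completeness magnitude Mc and ΔM is the catalog's magnitude bin width. A sketch on a tiny synthetic magnitude list (not the Parkfield catalog):

```python
# Aki-Utsu maximum-likelihood b-value with the binning correction dm/2.
# The magnitude list is synthetic, for illustration only.
import math

def b_value(mags, m_c, dm=0.1):
    """b-value from magnitudes at or above completeness m_c."""
    mags = [m for m in mags if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))

mags = [1.2, 1.5, 1.1, 2.0, 1.3, 1.8, 1.0, 1.4, 2.6, 1.1]
print(round(b_value(mags, m_c=1.0), 2))
```

Mapping this estimate in space and time over the microseismicity is the kind of analysis the abstract uses to argue that the stress state at Parkfield stays effectively constant through the cycle.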

  3. A study of particle size distribution in zirconia-alumina powders

    International Nuclear Information System (INIS)

    Ramakrishnan, K.N.; Venkadesan, S.; Nagarajan, R.

    1996-01-01

    Powder particles are generally characterized in terms of particle size, size distribution and composition for reasons associated with manufacturing problems, product quality, manufacturing convenience, cost and product handling convenience. Particle size analysis, or the measurement of particle size distribution, is a common task in physical, chemical and mechanical processing. This information and the processing methods are intricate factors that relate to material behavior and/or physical properties of the fabricated product. The requirements for forming a product from particulate solids, and the product's strength, vary as the particle size and the size distribution change. The transport properties and the chemical activity are also related to the particle size and the size distribution. The choice of a distribution to represent a physical system is generally motivated by an understanding of the nature of the underlying phenomenon and is verified against the available data. After a model has been chosen, its parameters must be determined. The reasonableness of a selected model on the basis of given data is especially important when the model is to be used for prediction.

  4. Distribution Of Natural Radioactivity On Soil Size Particles

    International Nuclear Information System (INIS)

    Tran Van Luyen; Trinh Hoai Vinh; Thai Khac Dinh

    2008-01-01

    This report presents the distribution of natural radioactivity among different soil particle sizes taken from one soil profile. The results show that 52% to 66% of natural radioisotopes such as 238U, 232Th, 226Ra and 40K are concentrated in soil particles below 40 micrometers in diameter; the remainder is distributed among particles of larger diameter. The study is applicable to soil samples collected for natural radioactivity analysis by gamma and alpha spectrometry. (author)

  5. The effect of complex fault rupture on the distribution of landslides triggered by the 12 January 2010, Haiti earthquake

    Science.gov (United States)

    Harp, Edwin L.; Jibson, Randall W.; Dart, Richard L.; Margottini, Claudio; Canuti, Paolo; Sassa, Kyoji

    2013-01-01

    The MW 7.0, 12 January 2010, Haiti earthquake triggered more than 7,000 landslides in the mountainous terrain south of Port-au-Prince over an area that extends approximately 50 km to the east and west from the epicenter and to the southern coast. Most of the triggered landslides were rock and soil slides from 25°–65° slopes within heavily fractured limestone and deeply weathered basalt and basaltic breccia. Landslide volumes ranged from tens of cubic meters to several thousand cubic meters. Rock slides in limestone typically were 2–5 m thick; slides within soils and weathered basalt typically were less than 1 m thick. Twenty to thirty larger landslides having volumes greater than 10,000 m3 were triggered by the earthquake; these included block slides and rotational slumps in limestone bedrock. Only a few landslides larger than 5,000 m3 occurred in the weathered basalt. The distribution of landslides is asymmetric with respect to the fault source and epicenter. Relatively few landslides were triggered north of the fault source on the hanging wall. The densest landslide concentrations lie south of the fault source and the Enriquillo-Plantain-Garden fault zone on the footwall. Numerous landslides also occurred along the south coast west of Jacmél. This asymmetric distribution of landsliding with respect to the fault source is unusual given the modeled displacement of the fault source as mainly thrust motion to the south on a plane dipping to the north at approximately 55°; landslide concentrations in other documented thrust earthquakes generally have been greatest on the hanging wall. This apparent inconsistency of the landslide distribution with respect to the fault model remains poorly understood given the lack of any strong-motion instruments within Haiti during the earthquake.

  6. Modeling earthquake magnitudes from injection-induced seismicity on rough faults

    Science.gov (United States)

    Maurer, J.; Dunham, E. M.; Segall, P.

    2017-12-01

    It is an open question whether perturbations to the in-situ stress field due to fluid injection affect the magnitudes of induced earthquakes. It has been suggested that characteristics such as the total injected fluid volume control the size of induced events (e.g., Baisch et al., 2010; Shapiro et al., 2011). On the other hand, Van der Elst et al. (2016) argue that the size distribution of induced earthquakes follows Gutenberg-Richter, the same as tectonic events. Numerical simulations support the idea that ruptures nucleating inside regions with a high ratio of shear to effective normal stress may not propagate into regions with lower stress (Dieterich et al., 2015; Schmitt et al., 2015); however, these calculations are done on geometrically smooth faults. Fang & Dunham (2013) show that rupture length on geometrically rough faults is variable, but strongly dependent on the background ratio of shear to effective normal stress. In this study, we use a 2-D elastodynamic rupture simulator that includes rough fault geometry and off-fault plasticity (Dunham et al., 2011) to simulate earthquake ruptures under realistic conditions. We consider aggregate results for faults with and without stress perturbations due to fluid injection. We model a uniform far-field background stress (with local perturbations around the fault due to geometry), superimpose a poroelastic stress field in the medium due to injection, and compute the effective stress on the fault as input to the rupture simulator. Preliminary results indicate that even minor stress perturbations on the fault due to injection can have a significant impact on the resulting distribution of rupture lengths, but individual results are highly dependent on the details of the local stress perturbations on the fault due to geometric roughness.

  7. Determination of size distribution function

    International Nuclear Information System (INIS)

    Teshome, A.; Spartakove, A.

    1987-05-01

    The theory of a method is outlined which gives the size distribution function (SDF) of a polydispersed system of non-interacting colloidal and microscopic spherical particles, having sizes in the range 0–10⁻⁵ cm, from a gedanken experimental scheme. It is assumed that the SDF is differentiable and the result is obtained for rotational frequencies on the order of 10³ s⁻¹. The method may be used independently, but is particularly useful in conjunction with an alternate method described in a preceding paper. (author). 8 refs, 2 figs

  8. Large magnitude earthquakes on the Awatere Fault, Marlborough

    International Nuclear Information System (INIS)

    Mason, D.P.M.; Little, T.A.; Van Dissen, R.J.

    2006-01-01

    The Awatere Fault is a principal active strike-slip fault within the Marlborough fault system, and last ruptured in October 1848, in the Mw ∼7.5 Marlborough earthquake. The coseismic slip distribution and maximum traceable length of this rupture are calculated from the magnitude and distribution of small, metre-scale geomorphic displacements attributable to this earthquake. These data suggest this event ruptured ∼110 km of the fault, with a mean horizontal surface displacement of 5.3 ± 1.6 m. Based on these parameters, the moment magnitude of this earthquake would be Mw ∼7.4–7.7. Paleoseismic trenching investigations along the eastern section reveal evidence for at least eight, and possibly ten, surface-rupturing paleoearthquakes in the last 8600 years, including the 1848 rupture. The coseismic slip distribution and rupture length of the 1848 earthquake, in combination with the paleoearthquake age data, suggest the eastern section of the Awatere Fault ruptures in Mw ∼7.5 earthquakes, with over 5 m of surface displacement, every 860-1080 years. (author). 21 refs., 10 figs., 7 tabs

  9. Production, depreciation and the size distribution of firms

    Science.gov (United States)

    Ma, Qi; Chen, Yongwang; Tong, Hui; Di, Zengru

    2008-05-01

    Many empirical studies indicate that firm size distributions in different industries or countries exhibit similar characteristics. Among them, the fact that many firm size distributions obey a power law, especially in the upper tail, has been most discussed. Here we present an agent-based model to describe the evolution of manufacturing firms. Some basic economic behaviors are taken into account: production with decreasing marginal returns, preferential allocation of investments, and stochastic depreciation. The model gives a steady-state size distribution of firms that obeys a power law. The effect of the parameters on the power exponent is analyzed. Theoretical results are derived from both the Fokker-Planck equation and the Kesten process, and are well consistent with the numerical results.
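
The power-law upper tail described in this record can be reproduced with a minimal Kesten-process sketch (multiplicative shocks plus an additive term); the parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def kesten_sizes(n_firms=5000, n_steps=500, a_lo=0.89, a_hi=1.05, b=1.0, seed=1):
    """Evolve firm sizes by the Kesten process S(t+1) = a*S(t) + b, with the
    multiplicative shock a drawn uniformly from [a_lo, a_hi]. With the mean
    shock slightly below 1, the stationary distribution has a power-law tail."""
    rng = random.Random(seed)
    sizes = [1.0] * n_firms
    for _ in range(n_steps):
        sizes = [rng.uniform(a_lo, a_hi) * s + b for s in sizes]
    return sizes

sizes = kesten_sizes()
# Hill estimate of the tail (pdf) exponent from the largest 5% of firms
tail = sorted(sizes)[-len(sizes) // 20:]
alpha = 1.0 + len(tail) / sum(math.log(s / tail[0]) for s in tail)
print(f"tail exponent estimate: {alpha:.2f}")
```

The Hill estimator at the end is a standard way to read off a tail exponent from simulated sizes; the stationary exponent itself depends on the shock distribution, so the printed value is only a diagnostic of heavy-tailedness.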

  10. Size distributions of micro-bubbles generated by a pressurized dissolution method

    Science.gov (United States)

    Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.

    2012-03-01

    Sizes of micro-bubbles are widely distributed in the range of one to several hundred micrometers and depend on generation methods, flow conditions and elapsed time after bubble generation. Although a size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them into the simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets. The applicability of these models to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure size distributions of micro-bubbles generated by a pressurized dissolution method by using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the size distributions of micro-bubbles. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high pressure, a decompression nozzle in which micro-bubbles are generated due to pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle. Measurements are carried out at the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation well represents the size distribution of micro-bubbles generated by the pressurized dissolution method, whereas the Rosin-Rammler equation fails in the representation, (2) the bubble size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble

  11. Experimental investigation of particle size distribution influence on diffusion controlled coarsening

    International Nuclear Information System (INIS)

    Fang, Zhigang; Patterson, B.R.

    1993-01-01

    The influence of initial particle size distribution on coarsening during liquid phase sintering has been experimentally investigated using W-14Ni-6Fe alloy as a model system. It was found that particles with an initially wider size distribution coarsened more rapidly than those with an initially narrow distribution. The well-known linear relationship between the cube of the average particle radius, r̄³, and time was observed for most of the coarsening process, although the early-stage coarsening rate constant changed with time, as expected with concomitant early changes in the tungsten particle size distribution. The instantaneous transient rate constant was shown to be related to the geometric standard deviation, ln σ, of the instantaneous size distributions, with higher rate constants corresponding to larger ln σ values. The form of the particle size distributions changed rapidly during early coarsening and reached a quasi-stable state, different from the theoretical asymptotic distribution, after some time. A linear relationship was found between the experimentally observed instantaneous rate constant and that computed from an earlier model incorporating the effect of particle size distribution. The above results compare favorably with those from prior theoretical modeling and computer simulation studies of the effect of particle size distribution on coarsening, based on the DeHoff communicating neighbor model

  12. Concentration and size distribution of particles in abstracted groundwater

    NARCIS (Netherlands)

    Van Beek, C.G.E.M.; de Zwart, A.H.; Balemans, M.; Kooiman, J.W.; van Rosmalen, C.; Timmer, H.; Vandersluys, J.; Stuijfzand, P.J.

    2010-01-01

    Particle number concentrations have been counted and particle size distributions calculated in groundwater derived by abstraction wells. Both concentration and size distribution are governed by the discharge rate: the higher this rate the higher the concentration and the higher the proportion of

  13. Investigation of influence of falling rock size and shape on traveling distance due to earthquake

    International Nuclear Information System (INIS)

    Tochigi, Hitoshi

    2010-01-01

    In evaluating the seismic stability of slopes surrounding a nuclear power plant, as part of residual risk evaluation, it is essential to confirm the effects of slope failure on important structures when the slope failure probability is not sufficiently small for an extremely large earthquake. Evaluation of slope failure potential based on falling-rock analyses using discontinuous models such as the distinct element method (DEM) is therefore expected to be employed in the near future. However, such slope collapse analyses require input data on falling rock size and shape, and problems remain concerning how these size and shape conditions are determined and the resulting analysis accuracy. In this study, a slope collapse experiment on a shaking table and a numerical simulation of this experiment by DEM were conducted to clarify the influence of falling rock size and shape on traveling distance. The results indicate that a more massive and larger rock model gives a conservative (safety-side) evaluation of traveling distance. (author)

  14. Earthquake occurrence as stochastic event: (1) theoretical models

    Energy Technology Data Exchange (ETDEWEB)

    Basili, A.; Basili, M.; Cagnetti, V.; Colombino, A.; Jorio, V.M.; Mosiello, R.; Norelli, F.; Pacilio, N.; Polinari, D.

    1977-01-01

    The present article aims to link the stochastic approach to describing earthquake processes suggested by Lomnitz with the experimental evidence obtained by Schenkova that the time distribution of some earthquake occurrences is better described by a negative binomial distribution than by a Poisson distribution. The final purpose of the stochastic approach might be a new way of labeling a given area in terms of seismic risk.
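
The Poisson-versus-negative-binomial distinction drawn here is about overdispersion: negative binomial counts have variance exceeding the mean, which a Poisson model cannot capture. A self-contained sketch with synthetic counts (illustrative parameters, not real catalogue data):

```python
import math
import random

def negbin_counts(r, p, n, seed=0):
    """Draw negative binomial counts via the Poisson-gamma mixture:
    lambda ~ Gamma(shape=r, scale=(1-p)/p), then N ~ Poisson(lambda)."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        lam = rng.gammavariate(r, (1.0 - p) / p)
        # Poisson draw by inversion of the CDF (fine for small lambda)
        k, term = 0, math.exp(-lam)
        cdf, u = term, rng.random()
        while u > cdf:
            k += 1
            term *= lam / k
            cdf += term
        counts.append(k)
    return counts

counts = negbin_counts(r=2.0, p=0.4, n=20000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# For a Poisson model variance ~ mean; here the variance clearly exceeds it
print(round(mean, 2), round(var, 2))
```

For these parameters the theoretical mean is r(1-p)/p = 3 and the variance r(1-p)/p² = 7.5, so the overdispersion is visible directly in the sample moments.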

  15. Body size distributions signal a regime shift in a lake ...

    Science.gov (United States)

    Communities of organisms, from mammals to microorganisms, have discontinuous distributions of body size. This pattern of size structuring is a conservative trait of community organization and is a product of processes that occur at multiple spatial and temporal scales. In this study, we assessed whether body size patterns serve as an indicator of a threshold between alternative regimes. Over the past 7000 years, the biological communities of Foy Lake (Montana, USA) have undergone a major regime shift owing to climate change. We used a palaeoecological record of diatom communities to estimate diatom sizes, and then analysed the discontinuous distribution of organism sizes over time. We used Bayesian classification and regression tree models to determine that all time intervals exhibited aggregations of sizes separated by gaps in the distribution, and found a significant change in diatom body size distributions approximately 150 years before the identified ecosystem regime shift. We suggest that discontinuity analysis is a useful addition to the suite of tools for the detection of early warning signals of regime shifts.

  16. Self-Organized Criticality in an Anisotropic Earthquake Model

    Science.gov (United States)

    Li, Bin-Quan; Wang, Sheng-Jun

    2018-03-01

    We have made an extensive numerical study of a modified model proposed by Olami, Feder, and Christensen to describe earthquake behavior. Two situations are considered in this paper. In the first, the energy of an unstable site is redistributed to its nearest neighbors randomly, rather than equally, and the site itself is reset to zero. In the second, the energy of an unstable site is redistributed to its nearest neighbors randomly while the site keeps some energy for itself instead of being reset to zero. Different boundary conditions are considered as well. By analyzing the distribution of earthquake sizes, we found that self-organized criticality is excited only in the conservative or approximately conservative cases of the above situations. Some evidence indicates that the critical exponents of both situations and of the original OFC model tend to the same value in the conservative case; the only difference is that the avalanche sizes in the original model are bigger. This result may be closer to the real world; after all, every crustal plate has a different size. Supported by National Natural Science Foundation of China under Grant Nos. 11675096 and 11305098, the Fundamental Research Funds for the Central Universities under Grant No. GK201702001, FPALAB-SNNU under Grant No. 16QNGG007, and Interdisciplinary Incubation Project of SNU under Grant No. 5
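
The model modified here is the Olami-Feder-Christensen (OFC) cellular automaton. A minimal sketch of the original (evenly redistributing) model on an open lattice, for readers who want to reproduce an avalanche-size distribution; lattice size and event count are illustrative:

```python
import random

def ofc_avalanches(L=20, alpha=0.25, n_events=2000, seed=0):
    """Minimal Olami-Feder-Christensen model on an L x L lattice with open
    boundaries. Each event: drive all sites until the most loaded one reaches
    the threshold 1.0, then topple unstable sites, passing a fraction alpha of
    their load to each of the four neighbours (alpha = 0.25 is conservative;
    load shed across the boundary is the only dissipation). Returns the
    avalanche size (number of topplings) of every event."""
    rng = random.Random(seed)
    F = [[rng.random() for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(n_events):
        # global drive: raise every site by the gap to the threshold
        fi, fj = max(((i, j) for i in range(L) for j in range(L)),
                     key=lambda ij: F[ij[0]][ij[1]])
        dF = 1.0 - F[fi][fj]
        for row in F:
            for j in range(L):
                row[j] += dF
        F[fi][fj] = 1.0  # pin the driven site exactly at threshold
        stack, size = [(fi, fj)], 0
        while stack:
            i, j = stack.pop()
            if F[i][j] < 1.0:
                continue  # already relaxed by an earlier pop
            load, F[i][j] = F[i][j], 0.0  # topple: reset and shed load
            size += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < L and 0 <= nj < L:
                    F[ni][nj] += alpha * load
                    if F[ni][nj] >= 1.0:
                        stack.append((ni, nj))
        sizes.append(size)
    return sizes

sizes = ofc_avalanches()
print(max(sizes), sum(sizes) / len(sizes))
```

In the conservative case the avalanche sizes approach a power-law distribution after a transient, which is the self-organized criticality referred to in the abstract.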

  17. Optimal placement and sizing of multiple distributed generating units in distribution

    Directory of Open Access Journals (Sweden)

    D. Rama Prabha

    2016-06-01

    Full Text Available Distributed generation (DG) is becoming more important due to the increase in demand for electrical energy. DG plays a vital role in reducing real power losses and operating cost and in enhancing voltage stability, which together form the objective function of this problem. This paper proposes a multi-objective technique for optimally determining the location and sizing of multiple distributed generation (DG) units in the distribution network with different load models. The loss sensitivity factor (LSF) determines the optimal placement of DGs. Invasive weed optimization (IWO), a population-based meta-heuristic algorithm based on the behavior of weeds, is used to find the optimal sizing of the DGs. The proposed method has been tested for different load models on IEEE 33-bus and 69-bus radial distribution systems and compared with other nature-inspired optimization methods. The simulated results illustrate the good applicability and performance of the proposed method.

  18. Characteristics of Gyeongju earthquake, moment magnitude 5.5 and relative relocations of aftershocks

    Science.gov (United States)

    Cho, ChangSoo; Son, Minkyung

    2017-04-01

    There is low seismicity on the Korean Peninsula. According to records in historical books, several strong earthquakes have struck the Korean Peninsula. In particular, in Gyeongju, the capital of the Silla dynasty, a few strong earthquakes about 1,300 years ago caused several hundred fatalities, damaged houses and collapsed castle walls. A moderately strong earthquake of moment magnitude 5.5 hit the city on September 12, 2016. Over 1,000 aftershocks were detected. The number of aftershock occurrences over time follows Omori's law well. The relative locations of 561 events, obtained by clustering aftershocks via cross-correlation between the P and S waveforms of the events, showed that the strike (NNE 25–30°) and dip (68–74°) of the causative fault plane match the fault plane solution of the moment tensor inversion well. The depth range of the events is from 11 km to 16 km, and the distribution of event locations is about 5 km long. The direction of maximum horizontal stress obtained by stress inversion of the moment solutions of the main event and large aftershocks is similar to the known maximum horizontal stress direction of the Korean Peninsula. The relation between the moment magnitude and local magnitude of aftershocks shows that the moment magnitude increases slightly more for events of size less than 2.0
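
The Omori-law decay mentioned in this record is usually written as the modified Omori relation n(t) = K/(t + c)^p. A small sketch with illustrative parameters (not the Gyeongju estimates):

```python
def omori_rate(t, K=120.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate n(t) = K / (t + c)**p,
    with t in days after the mainshock. K, c, p are illustrative."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=120.0, c=0.05, p=1.1):
    """Expected number of aftershocks between t1 and t2 days (p != 1),
    i.e. the integral of the Omori rate over [t1, t2]."""
    primitive = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)
    return K * (primitive(t2) - primitive(t1))

rates = [round(omori_rate(t), 1) for t in (1, 2, 5, 10, 30)]
print(rates)                              # monotonically decaying daily rate
print(round(expected_count(0.0, 30.0)))   # expected events in the first 30 days
```

Fitting K, c and p to binned aftershock counts (e.g. by maximum likelihood) is the standard way to check how well a sequence follows Omori's law.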

  19. A new stochastic algorithm for inversion of dust aerosol size distribution

    Science.gov (United States)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. Then, the parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
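
A full Mie-theory forward model and the actual ABC optimizer are beyond a short sketch, but the structure of such an inversion (forward model, misfit, stochastic search over L-N parameters) can be illustrated with a toy kernel and a plain random search. Everything below is a hypothetical stand-in, not the authors' implementation:

```python
import math
import random

def lognormal_pdf(r, mu, sigma):
    """Log-normal number distribution n(r) — the L-N form named above."""
    return (math.exp(-(math.log(r) - mu) ** 2 / (2.0 * sigma ** 2))
            / (r * sigma * math.sqrt(2.0 * math.pi)))

def toy_extinction(mu, sigma, wavelengths, dr=0.1):
    """Toy forward model: spectral extinction as a weighted integral of the
    size distribution. The kernel is a crude stand-in, NOT the Mie solution."""
    radii = [dr * k for k in range(1, 100)]
    spectrum = []
    for lam in wavelengths:
        q = [(r / lam) ** 2 / (1.0 + (r / lam) ** 2) for r in radii]
        spectrum.append(sum(qi * lognormal_pdf(r, mu, sigma) * dr
                            for qi, r in zip(q, radii)))
    return spectrum

# Synthetic "measurements", then a random search standing in for the
# bee-colony optimizer: keep the (mu, sigma) pair with the lowest misfit.
wls = [0.4, 0.55, 0.7, 0.9, 1.2]
truth = toy_extinction(0.0, 0.4, wls)
rng = random.Random(3)
best_err, best_par = float("inf"), None
for _ in range(1500):
    mu, sigma = rng.uniform(-1.0, 1.0), rng.uniform(0.1, 1.0)
    err = sum((m - t) ** 2 for m, t in zip(toy_extinction(mu, sigma, wls), truth))
    if err < best_err:
        best_err, best_par = err, (mu, sigma)
print(best_par)
```

The ABC algorithm replaces the blind random search with guided "employed/onlooker/scout bee" moves, but the forward-model-plus-misfit skeleton is the same.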

  20. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for ¹³⁷Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  1. Seismicity of Romania: fractal properties of earthquake space, time and energy distributions and their correlation with segmentation of subducted lithosphere and Vrancea seismic source

    International Nuclear Information System (INIS)

    Popescu, E.; Ardeleanu, L.; Bazacliu, O.; Popa, M.; Radulian, M.; Rizescu, M.

    2002-01-01

    For any strategy of seismic hazard assessment, it is important to set a realistic seismic input, such as delimitation of seismogenic zones, geometry of seismic sources, seismicity regime, focal mechanism and stress field. The aim of the present project is a systematic investigation of the Vrancea seismic regime at different time, space and energy scales, which can offer crucial information on the seismogenic process of this peculiar seismic area. Departures from linearity of the time, space and energy distributions are associated with inhomogeneities in the subducting slab, rheology, tectonic stress distribution and focal mechanism. The significant variations are correlated with the existence of active and inactive segments along the seismogenic zone, the deviation from linearity of the frequency-magnitude distribution is associated with the existence of different earthquake generation models, and the nonlinearities shown in the time series are related to the occurrence of the major earthquakes. Another important purpose of the project is to analyze the main crustal seismic sequences generated on the Romanian territory in the following regions: Ramnicu Sarat, Fagaras-Campulung, Banat. Time, space and energy distributions together with the source parameters and scaling relations are investigated. The analysis of the seismicity and clustering properties of the earthquakes generated in both the Vrancea intermediate-depth region and the Romanian crustal seismogenic zones, achieved within this project, constitutes the starting point for the study of seismic zoning, seismic hazard and earthquake prediction. The data set consists of the Vrancea subcrustal earthquake catalogue (since 1974 and continuously updated) and catalogues of events located in the other crustal seismogenic zones of Romania. To build up these data sets, high-quality information made available through multiple international cooperation programs is considered.
The results obtained up to

  2. Earthquake statistics, spatiotemporal distribution of foci and source mechanisms - a key to understanding of the West Bohemia/Vogtland earthquake swarms

    Science.gov (United States)

    Horálek, Josef; Čermáková, Hana; Fischer, Tomáš

    2016-04-01

    Earthquake swarms are sequences of numerous events closely clustered in space and time that do not have a single dominant mainshock. A few of the largest events in a swarm reach similar magnitudes and usually occur throughout the course of the earthquake sequence. These attributes differentiate earthquake swarms from ordinary mainshock-aftershock sequences. Earthquake swarms occur worldwide, in diverse geological units; they typically accompany volcanic activity at tectonic-plate margins but also occur in intracontinental areas where strain from tectonic-plate movement is small. West Bohemia-Vogtland represents one of the most active intraplate earthquake-swarm areas in Europe, characterised by the frequent recurrence of earthquake swarms. The ML 2.8 swarm events are located in a few dense clusters, which implies step-by-step rupturing of one or a few asperities during the individual swarms. The source mechanism patterns (moment-tensor description, MT) of the individual swarms indicate several families of mechanisms, which fit the geometry of the respective fault segments well. The MTs of most events signify pure shear, except for the 1997-swarm events, whose MTs indicate combined sources including both shear and tensile components. The origin of earthquake swarms is still unclear. Nevertheless, we infer that the individual earthquake swarms in West Bohemia-Vogtland are mixtures of mainshock-aftershock sequences corresponding to step-by-step rupturing of one or a few asperities. The swarms occur on short fault segments with heterogeneous stress and strength, which may be affected by pressurized crustal fluids reducing the normal component of the tectonic stress and lowering friction. In this way, critically loaded faults are brought to failure and the swarm activity is driven by the differential local stress.

  3. Velocity Distributions in Inelastic Granular Gases with Continuous Size Distributions

    International Nuclear Information System (INIS)

    Li Rui; Li Zhi-Hao; Zhang Duan-Ming

    2011-01-01

    We study by numerical simulation the velocity distributions of granular gases with a power-law size distribution, driven by uniform heating and boundary heating. It is found that the form of the velocity distribution is primarily controlled by the restitution coefficient η and by q, the ratio between the average number of heatings and the average number of collisions in the system. Furthermore, we show that uniform and boundary heating can be understood as different limits of q, with q ≫ 1 and q ≤ 1, respectively. (general)

  4. Linear Model for Optimal Distributed Generation Size Predication

    Directory of Open Access Journals (Sweden)

    Ahmed Al Ameri

    2017-01-01

    Full Text Available This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that minimizes power loss. The method is based fundamentally on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. This paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. Linearizing the complex model in this way gave good results and reduces the required processing time. The acceptable accuracy with less time and memory required can help the grid operator to assess power systems integrating large-scale distributed generation.
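
The loss-minimization objective can be illustrated on a toy radial feeder: total I²R loss is quadratic in the DG injection, so a simple sweep (or the vertex of the quadratic) locates the optimal size. The feeder data below are invented for illustration and are not from the paper:

```python
def feeder_losses(p_dg, dg_bus, loads, r=0.01, v=1.0):
    """Total I^2*R loss (per unit) on a radial feeder, toy flow model.

    Buses are numbered 1..len(loads) outward from the substation. Branch i
    (feeding bus i) carries the demand of bus i and everything downstream;
    a DG unit of size p_dg at dg_bus offsets the flow on every branch
    between the substation and that bus."""
    loss = 0.0
    for branch in range(1, len(loads) + 1):
        flow = sum(loads[branch - 1:])        # downstream demand on this branch
        if branch <= dg_bus:
            flow -= p_dg                      # DG supplies part of it locally
        loss += r * (flow / v) ** 2
    return loss

loads = [0.2, 0.3, 0.1, 0.4, 0.25]            # per-unit demand at buses 1..5
# sweep candidate DG sizes at bus 4; the loss curve is quadratic in p_dg
best_loss, best_size = min(
    (feeder_losses(p / 100.0, 4, loads), p / 100.0) for p in range(0, 201, 5))
print(best_size, round(best_loss, 5))
```

Because the loss is a sum of squared branch flows, the optimum balances the loss reduction on upstream branches against reverse flow when the DG is oversized, which is the trade-off the linear model approximates.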

  5. Effects of fuel particle size distributions on neutron transport in stochastic media

    International Nuclear Information System (INIS)

    Liang, Chao; Pavlou, Andrew T.; Ji, Wei

    2014-01-01

    Highlights: • Effects of fuel particle size distributions on neutron transport are evaluated. • Neutron channeling is identified as the fundamental reason for the effects. • The effects are noticeable at low packing and low optical thickness systems. • Unit cells of realistic reactor designs are studied for different size particles. • Fuel particle size distribution effects are not negligible in realistic designs. - Abstract: This paper presents a study of the fuel particle size distribution effects on neutron transport in three-dimensional stochastic media. Particle fuel is used in gas-cooled nuclear reactor designs and innovative light water reactor designs loaded with accident tolerant fuel. Due to the design requirements and fuel fabrication limits, the size of fuel particles may not be perfectly constant but instead follows a certain distribution. This brings a fundamental question to the radiation transport computation community: how does the fuel particle size distribution affect the neutron transport in particle fuel systems? To answer this question, size distribution effects and their physical interpretations are investigated by performing a series of neutron transport simulations at different fuel particle size distributions. An eigenvalue problem is simulated in a cylindrical container consisting of fissile fuel particles with five different size distributions: constant, uniform, power, exponential and Gaussian. A total of 15 parametric cases are constructed by altering the fissile particle volume packing fraction and its optical thickness, but keeping the mean chord length of the spherical fuel particle the same at different size distributions. The tallied effective multiplication factor (keff) and the spatial distribution of fission power density along axial and radial directions are compared between different size distributions. At low packing fraction and low optical thickness, the size distribution shows a noticeable effect on neutron

  6. Size distributions of member asteroids in seven Hirayama families

    International Nuclear Information System (INIS)

    Mikami, Takao; Ishida, Keiichi.

    1990-01-01

    The size distributions of asteroids in the seven Hirayama families are studied for newly assigned member asteroids in the diameter range of about 10 to 100 km. The size distributions for the different families are expressed by power-law functions with distinctly different power-law indices. The power-law indices for families with small mean orbital inclinations are about 2.5 to 3.0. On the other hand, the power-law indices for families with large mean orbital inclinations are significantly smaller than 2.5. This indicates that the smaller asteroids were removed preferentially from these families after their formation. It is thought that the smaller asteroids that left the families were dispersed into the main belt. This is consistent with the fact that the power-law index for the size distribution of asteroids with diameters smaller than 25 km in the main belt is larger than the power-law indices for the size distributions of asteroids in the families. This segregation by asteroid size can be caused by a drag force from the ambient matter deposited on the invariable plane of the solar system during the early evolutionary stage. (author)

  7. Reconstructing the size distribution of the primordial Main Belt

    Science.gov (United States)

    Tsirvoulis, G.; Morbidelli, A.; Delbo, M.; Tsiganis, K.

    2018-04-01

    In this work we aim to constrain the slope of the size distribution of main-belt asteroids at their primordial state. To do so we turn our attention to the part of the main asteroid belt between 2.82 and 2.96 AU, the so-called "pristine zone", which has a low number density of asteroids and few, well separated asteroid families. Exploiting these unique characteristics, and using a modified version of the hierarchical clustering method, we are able to remove the majority of asteroid family members from the region. The remaining, background asteroids should be of primordial origin, as the strong 5/2 and 7/3 mean-motion resonances with Jupiter inhibit transfer of asteroids to and from the neighboring regions. The size-frequency distribution of asteroids in the size range larger than 17 km yields a size distribution slope q = -1.43. In addition, applying the same 'family extraction' method to the neighboring regions, i.e. the middle and outer belts, and comparing the size distributions of the respective background populations, we find statistical evidence that no large asteroid families of primordial origin had formed in the middle or pristine zones.
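
Slopes such as the q reported here are usually estimated from the cumulative size-frequency distribution N(>D) ∝ D^q. A maximum-likelihood (Hill-type) estimator, checked on a synthetic power-law sample (the 17 km cutoff and q value below are only illustrative):

```python
import math
import random

def cumulative_slope(diameters, d_min):
    """Maximum-likelihood (Hill) estimate of the slope q of the cumulative
    size-frequency distribution N(>D) ~ D**q, using bodies with D >= d_min.
    The differential (pdf) exponent is alpha = 1 - q."""
    tail = [d for d in diameters if d >= d_min]
    alpha = 1.0 + len(tail) / sum(math.log(d / d_min) for d in tail)
    return 1.0 - alpha  # q is negative for a decaying distribution

# synthetic check: draw diameters with N(>D) ~ D**-1.4 by inverse transform
rng = random.Random(7)
sample = [17.0 * rng.random() ** (-1.0 / 1.4) for _ in range(5000)]
print(round(cumulative_slope(sample, 17.0), 2))  # close to -1.4
```

The maximum-likelihood estimator avoids the well-known bias of fitting a straight line to a log-log cumulative histogram, which matters when comparing slopes between populations as done in this work.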

  8. Methods of assessing grain-size distribution during grain growth

    DEFF Research Database (Denmark)

    Tweed, Cherry J.; Hansen, Niels; Ralph, Brian

    1985-01-01

    This paper considers methods of obtaining grain-size distributions and ways of describing them. In order to collect statistically useful amounts of data, an automatic image analyzer is used, and the resulting data are subjected to a series of tests that evaluate the differences between two related distributions (before and after grain growth). The distributions are measured from two-dimensional sections, and both the data and the corresponding true three-dimensional grain-size distributions (obtained by stereological analysis) are collected. The techniques described here are illustrated by reference...

  9. Particle Size Distributions in Chondritic Meteorites: Evidence for Pre-Planetesimal Histories

    Science.gov (United States)

    Simon, J. I.; Cuzzi, J. N.; McCain, K. A.; Cato, M. J.; Christoffersen, P. A.; Fisher, K. R.; Srinivasan, P.; Tait, A. W.; Olson, D. M.; Scargle, J. D.

    2018-01-01

    Magnesium-rich silicate chondrules and calcium-, aluminum-rich refractory inclusions (CAIs) are fundamental components of primitive chondritic meteorites. It has been suggested that concentration of these early-formed particles by nebular sorting processes may lead to accretion of planetesimals, the planetary bodies that represent the building blocks of the terrestrial planets. In this case, the size distributions of the particles may constrain the accretion process. Here we present new particle size distribution data for Northwest Africa 5717, a primitive ordinary chondrite (ungrouped 3.05) and the well-known carbonaceous chondrite Allende (CV3). Instead of the relatively narrow size distributions obtained in previous studies (Ebel et al., 2016; Friedrich et al., 2015; Paque and Cuzzi, 1997, and references therein), we observed broad size distributions for all particle types in both meteorites. Detailed microscopic image analysis of Allende shows differences in the size distributions of chondrule subtypes, but collectively these subpopulations comprise a composite "chondrule" size distribution that is similar to the broad size distribution found for CAIs. Also, we find accretionary 'dust' rims on only a subset (approximately 15-20 percent) of the chondrules contained in Allende, which indicates that subpopulations of chondrules experienced distinct histories prior to planetary accretion. For the rimmed subset, we find positive correlation between rim thickness and chondrule size. The remarkable similarity between the size distributions of various subgroups of particles, both with and without fine grained rims, implies a common size sorting process. Chondrite classification schemes, astrophysical disk models that predict a narrow chondrule size population and/or a common localized formation event, and conventional particle analysis methods must all be critically reevaluated. We support the idea that distinct "lithologies" in NWA 5717 are nebular aggregates of

  10. Vibro-spring particle size distribution analyser

    International Nuclear Information System (INIS)

    Patel, Ketan Shantilal

    2002-01-01

This thesis describes the design and development of an automated pre-production particle size distribution analyser for particles in the 20-2000 μm size range. This work is a follow-up to the vibro-spring particle sizer reported by Shaeri. In its most basic form, the instrument comprises a horizontally held closed-coil helical spring that is partly filled with the test powder and sinusoidally vibrated in the transverse direction. Particle size distribution data are obtained by stretching the spring to known lengths and measuring the mass of the powder discharged from the spring's coils. The size of the particles, on the other hand, is determined from the spring's 'intercoil' distance. The instrument developed by Shaeri had limited use due to its inability to measure sample mass directly. For the device reported here, modifications are made to the original configuration to establish means of direct sample mass measurement. The feasibility of techniques for measuring the mass of powder retained within the spring is investigated in detail. Initially, the measurement of mass is executed in situ from the vibration characteristics, based on the spring's first harmonic resonant frequency. This method is often erratic and unreliable due to particle-particle and particle-spring-wall interactions and the bending of the spring. A much more successful alternative is found in a more complicated arrangement in which the spring forms part of a stiff cantilever system pivoted along its main axis. Here, the sample mass is determined in the 'static mode' by monitoring the cantilever beam's deflection following the termination of vibration. The system performance has been optimised through variations of the mechanical design of the key components and the operating procedure, as well as by taking into account the effect of changes in the ambient temperature on the system's response. The thesis also describes the design and development of the ancillary mechanisms. These include the pneumatic

  11. Low cost earthquake resistant ferrocement small house

    International Nuclear Information System (INIS)

    Saleem, M.A.; Ashraf, M.; Ashraf, M.

    2008-01-01

The greatest humanitarian challenge faced even today, one year after the Kashmir-Hazara earthquake, is that of providing shelter. Currently, one in seven people on the globe lives in a slum or refugee camp. The earthquake of October 2005 resulted in a great loss of life and property. This research work is mainly focused on developing a design for a small-size, low-cost and earthquake-resistant house. Ferrocement panels are recommended as the main structural elements, with a lightweight truss roofing system. Earthquake resistance is ensured by analyzing the structure in ETABS for seismic zone 4 loading. The behavior of the structure is found satisfactory under earthquake loading. An estimate of cost is also presented, which shows that it is an economical solution. (author)

  12. Radioactive Aerosol Size Distribution Measured in Nuclear Workplaces

    International Nuclear Information System (INIS)

    Kravchik, T.; Oved, S.; German, U.

    2002-01-01

Inhalation is the main route for internal exposure of workers to radioactive aerosols in the nuclear industry. An aerosol's size distribution, and in particular its activity median aerodynamic diameter (AMAD), is important for determining the fractional deposition of inhaled particles in the respiratory tract and the resulting doses. Respiratory tract models have been published by the International Commission on Radiological Protection (ICRP). The former model recommended a default AMAD of 1 micron for the calculation of dose coefficients for workers in the nuclear industry [1]. The recent model recommends a 5 micron default diameter for occupational exposure, which is considered to be more representative of workplace aerosols [2]. Several studies of radioactive aerosol size distributions in nuclear workplaces have supported this recommendation [3,4]. This paper presents the results of radioactive aerosol size distribution measurements taken at several workplaces of the uranium production process
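The AMAD discussed above is simply the aerodynamic diameter that splits the collected activity in half. A minimal sketch, using hypothetical cascade-impactor stage data and a simple stairstep estimate rather than the lognormal fit a real analysis would use:

```python
def amad(stages):
    """Activity Median Aerodynamic Diameter from impactor-stage data:
    the diameter at which cumulative activity first reaches 50% of the
    total (stairstep estimate; real analyses usually fit a lognormal)."""
    stages = sorted(stages)                 # (diameter_um, activity) pairs
    total = sum(a for _, a in stages)
    cum = 0.0
    for d, a in stages:
        cum += a
        if cum >= total / 2.0:
            return d
    return stages[-1][0]

# Hypothetical stage activities, with most activity around 5 um.
data = [(0.5, 2.0), (1.0, 5.0), (3.0, 20.0), (5.0, 40.0), (10.0, 33.0)]
print(amad(data))  # → 5.0
```

For this toy dataset the estimate coincides with the 5 micron default AMAD of the recent ICRP model.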

  13. [Medium- and long-term health effects of the L'Aquila earthquake (Central Italy, 2009) and of other earthquakes in high-income Countries: a systematic review].

    Science.gov (United States)

    Ripoll Gallardo, Alba; Alesina, Marta; Pacelli, Barbara; Serrone, Dario; Iacutone, Giovanni; Faggiano, Fabrizio; Della Corte, Francesco; Allara, Elias

    2016-01-01

To compare the methodological characteristics of the studies investigating the medium- and long-term health effects of the L'Aquila earthquake with the features of studies conducted after other earthquakes in high-income Countries. A systematic comparison between the studies which evaluated the health effects of the L'Aquila earthquake (Central Italy, 6th April 2009) and those conducted after other earthquakes in comparable settings. Medline, Scopus, and 6 sources of grey literature were systematically searched. Inclusion criteria comprised measurement of health outcomes at least one month after the earthquake, investigation of earthquakes occurring in high-income Countries, and presence of at least one temporal or geographical control group. Out of 2,976 titles, 13 studies regarding the L'Aquila earthquake and 51 studies concerning other earthquakes were included. The L'Aquila and the Kobe/Hanshin-Awaji (Japan, 17th January 1995) earthquakes were the most investigated. Studies on the L'Aquila earthquake had a median sample size of 1,240 subjects, a median duration of 24 months, and most frequently used a cross-sectional design (7/13). Studies on other earthquakes had a median sample size of 320 subjects, a median duration of 15 months, and most frequently used a time series design (19/51). The L'Aquila studies often focussed on mental health, while the earthquake effects on mortality, cardiovascular outcomes, and health systems were less frequently evaluated. A more intensive use of routine data could benefit future epidemiological surveillance in the aftermath of earthquakes.

  14. Earthquake geology of the Bulnay Fault (Mongolia)

    Science.gov (United States)

    Rizza, Magali; Ritz, Jean-Franciois; Prentice, Carol S.; Vassallo, Ricardo; Braucher, Regis; Larroque, Christophe; Arzhannikova, A.; Arzhanikov, S.; Mahan, Shannon; Massault, M.; Michelot, J-L.; Todbileg, M.

    2015-01-01

The Bulnay earthquake of July 23, 1905 (Mw 8.3-8.5), in north-central Mongolia, is one of the world's largest recorded intracontinental earthquakes and one of four great earthquakes that occurred in the region during the 20th century. The 375-km-long surface rupture of the left-lateral, strike-slip, N095°E trending Bulnay Fault associated with this earthquake is remarkable for its pronounced expression across the landscape and for the size of features produced by previous earthquakes. Our field observations suggest that in many areas the width and geometry of the rupture zone are the result of repeated earthquakes; however, in those areas where it is possible to determine that the geomorphic features are the result of the 1905 surface rupture alone, the size of the features produced by this single earthquake is singular in comparison to most other historical strike-slip surface ruptures worldwide. Along the 80 km stretch between 97.18°E and 98.33°E, the fault zone is several meters wide, and the mean left-lateral 1905 offset is 8.9 ± 0.6 m, with two measured cumulative offsets that are twice the 1905 slip. These observations suggest that the displacement produced during the penultimate event was similar to the 1905 slip. Morphotectonic analyses carried out at three sites along the eastern part of the Bulnay Fault allow us to estimate a mean horizontal slip rate of 3.1 ± 1.7 mm/yr over the Late Pleistocene-Holocene period. In parallel, paleoseismological investigations show evidence for two earthquakes prior to the 1905 event, with recurrence intervals of ~2700-4000 years.

  15. Wenchuan Ms8.0 earthquake coseismic slip distribution inversion

    Directory of Open Access Journals (Sweden)

    Hongbo Tan

    2015-05-01

By using GPS and gravity data before and after the Wenchuan Ms8.0 earthquake, and combining data from geological surveys and geophysical inversion studies, an initial coseismic fault model is constructed. The slip distribution on the fault plane, with dip angles varying with depth, is then inverted, and the inversion results show that the shape of the fault resembles a double shovel. The Yingxiu–Beichuan Fault is approximately 330 km long; its surface dip angle is 65.1°, which gradually decreases with increasing depth to 0° at the detachment layer at a depth of 19.62 km. The Guanxian–Jiangyou Fault is approximately 90 km long, and its dip angle at the surface is 55.3°, which gradually decreases with increasing depth; the fault joins the Yingxiu–Beichuan Fault at 13.75 km. Coseismic slip mainly occurs above a depth of 19 km. There are five concentrated rupture areas, Yingxiu, Wenchuan, Hanwang, Beichuan, and Pingwu, which are consistent with geological survey results and analyses of the aftershock distribution. The rupture mainly has a thrust component with a small dextral strike-slip component. The maximum slip was more than 10 m, which occurred near Beichuan and Hanwang. The seismic moment is 7.84 × 10^20 N·m (Mw 7.9), which is consistent with the seismological results.
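The conversion from seismic moment to moment magnitude quoted at the end of this abstract follows the standard Hanks-Kanamori relation; a quick check in Python confirms the stated values are mutually consistent:

```python
import math

def moment_magnitude(m0_newton_meters: float) -> float:
    """Moment magnitude from seismic moment (Hanks & Kanamori, 1979),
    with M0 in N·m: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# The inverted moment of 7.84e20 N·m indeed corresponds to Mw 7.9:
print(round(moment_magnitude(7.84e20), 1))  # → 7.9
```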

  16. Landslides triggered by the 3 August 2014 Ludian earthquake in China: geological properties, geomorphologic characteristics and spatial distribution analysis

    Directory of Open Access Journals (Sweden)

    Jia-Wen Zhou

    2016-07-01

On 3 August 2014, an earthquake of Mw 6.5 struck Ludian County, Yunnan Province, China. This earthquake triggered hundreds of landslides of various types, dominated by shallow slides, deep-seated slides, rock falls, debris flows and unstable slopes. Using field investigations and remote sensing images, 413 landslides triggered by the Ludian earthquake were statistically analyzed. Statistical analyses show that most of the landslides are shallow slides with a small volume. Most of these landslides are concentrated near the epicentre, at distances ranging from 6-12 km, especially at the upper slope along the river valley. The number of landslides increased with increasing distance from the epicentre (0-9 km) and then decreased with increasing distance from the epicentre (>9 km). The landslides decreased in density with increasing distance from the fault rupture. More than 70% of the landslides occurred on the right side of the Xiyuhe-Zhaotong fault, when viewed from southwest (SW) to northeast (NE). Slope aspect and gradient had a substantial influence on the landslide distribution, and landslide density increased with increasing slope gradient. Approximately 65% of the landslides happened on the back slope with respect to the earthquake epicentre.

  17. Temporal and spatial distributions of precursory seismicity rate changes in the Thailand-Laos-Myanmar border region: implication for upcoming hazardous earthquakes

    Science.gov (United States)

    Puangjaktha, Prayot; Pailoplee, Santi

    2018-01-01

To study the prospective areas of upcoming strong-to-major earthquakes, i.e., Mw ≥ 6.0, a catalog of seismicity in the vicinity of the Thailand-Laos-Myanmar border region was generated and then investigated statistically. Based on the successful investigations of previous works, the seismicity rate change (Z-value) technique was applied in this study. According to the completeness earthquake dataset, eight available case studies of strong-to-major earthquakes were investigated retrospectively. After iterative tests of the characteristic parameters concerning the number of earthquakes (N) and time window (Tw), the values of 50 and 1.2 years, respectively, were found to reveal an anomalously high Z-value peak (seismic quiescence) prior to the occurrence of six of the eight major earthquake events studied. In addition, the locations of the Z-value anomalies conformed fairly well to the epicenters of those earthquakes. Based on the investigation of the correlation coefficient and the stochastic test of the Z values, the parameters used here (N = 50 events and Tw = 1.2 years) were suitable to determine the precursory Z value and not random phenomena. The Z values of this study and the frequency-magnitude distribution b values of a previous work both highlighted the same prospective areas that might generate an upcoming major earthquake: (i) some areas in the northern part of Laos and (ii) the eastern part of Myanmar.
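The Z value used above is a standard deviate comparing mean seismicity rates in two time windows; a large positive Z flags a rate drop (quiescence). A minimal sketch with toy rate counts (not the study's catalog; the population-variance form is used here for simplicity):

```python
import math

def z_value(rates_before, rates_after):
    """Standard deviate Z = (R1 - R2) / sqrt(S1^2/n1 + S2^2/n2)
    comparing mean seismicity rates in two windows; a large positive
    value indicates seismic quiescence in the second window."""
    n1, n2 = len(rates_before), len(rates_after)
    m1 = sum(rates_before) / n1
    m2 = sum(rates_after) / n2
    v1 = sum((r - m1) ** 2 for r in rates_before) / n1   # population variance
    v2 = sum((r - m2) ** 2 for r in rates_after) / n2
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Toy example: the rate drops from ~10 to ~4 events per bin.
before = [10, 11, 9, 10, 10, 12, 9, 10]
after = [4, 5, 3, 4, 4, 5, 3, 4]
print(round(z_value(before, after), 1))
```

In practice the statistic is evaluated on a sliding spatial grid, producing the Z-value maps whose peaks the study compares with subsequent epicenters.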

  18. Global earthquake fatalities and population

    Science.gov (United States)

    Holzer, Thomas L.; Savage, James C.

    2013-01-01

    Modern global earthquake fatalities can be separated into two components: (1) fatalities from an approximately constant annual background rate that is independent of world population growth and (2) fatalities caused by earthquakes with large human death tolls, the frequency of which is dependent on world population. Earthquakes with death tolls greater than 100,000 (and 50,000) have increased with world population and obey a nonstationary Poisson distribution with rate proportional to population. We predict that the number of earthquakes with death tolls greater than 100,000 (50,000) will increase in the 21st century to 8.7±3.3 (20.5±4.3) from 4 (7) observed in the 20th century if world population reaches 10.1 billion in 2100. Combining fatalities caused by the background rate with fatalities caused by catastrophic earthquakes (>100,000 fatalities) indicates global fatalities in the 21st century will be 2.57±0.64 million if the average post-1900 death toll for catastrophic earthquakes (193,000) is assumed.
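The nonstationary Poisson model described above has an expected event count equal to the time integral of a rate proportional to world population. A back-of-the-envelope sketch, using illustrative linear population paths rather than the paper's data, shows how calibrating on the 20th century yields a 21st-century expectation of the right order:

```python
# Illustrative sketch of a nonstationary Poisson rate proportional to
# population; the population paths below are crude linear interpolations,
# not the demographic data used in the paper.

def expected_events(pop_by_year, c):
    """Expected event count: sum of a per-year rate c * population."""
    return sum(c * p for p in pop_by_year)

# Toy 20th-century path: 1.6 -> 6.1 billion.
pop_20th = [1.6 + (6.1 - 1.6) * y / 100 for y in range(100)]
# Calibrate c so the 20th-century expectation matches the 4 observed
# earthquakes with death tolls greater than 100,000.
c = 4 / sum(pop_20th)
# Toy 21st-century path: 6.1 -> 10.1 billion.
pop_21st = [6.1 + (10.1 - 6.1) * y / 100 for y in range(100)]
print(round(expected_events(pop_21st, c), 1))
```

Even this crude interpolation lands near the paper's predicted 8.7 ± 3.3 events; the difference comes from the simplified population curves.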

  19. Earthquake-induced liquefaction in Ferland, Quebec

    International Nuclear Information System (INIS)

    Tuttle, M.; Seeber, L.

    1991-02-01

Detailed geological investigations are under way at a number of liquefaction sites in the Ferland-Boilleau valley, Quebec, where sand boils, ground cracks and liquefaction-related damage to homes were documented immediately following the Ms=6.0, Mblg=6.5 Saguenay earthquake of November 25, 1988. To date, results obtained from these subsurface investigations of sand boils at two sites in Ferland, located about 26 km from the epicentre, indicate that: the Saguenay earthquake induced liquefaction in late Pleistocene and Holocene sediments, which was recorded as sand dikes, sills and vents in near-surface sediments and soils; earthquake-induced liquefaction and ground failure have occurred in this area at least three times in the past 10,000 years; and the size and morphology of liquefaction features and the liquefaction susceptibility of their source layers may be indicative of the intensity of ground shaking. These preliminary results are very promising and suggest that with continued research liquefaction features will become a useful tool in glaciated terrains, such as northeastern North America, for determining not only the timing and location but also the size of past earthquakes

  20. Analysis of pre-earthquake ionospheric anomalies before the global M = 7.0+ earthquakes in 2010

    Directory of Open Access Journals (Sweden)

    W. F. Peng

    2012-03-01

The pre-earthquake ionospheric anomalies that occurred before the global M = 7.0+ earthquakes in 2010 are investigated using the total electron content (TEC) from the global ionosphere map (GIM). We analyze the possible causes of the ionospheric anomalies based on the space environment and magnetic field status. Results show that some anomalies are related to the earthquakes. By analyzing the time of occurrence, duration, and spatial distribution of these ionospheric anomalies, a number of new conclusions are drawn, as follows: earthquake-related ionospheric anomalies are not bound to appear; both positive and negative anomalies are likely to occur; and the earthquake-related ionospheric anomalies discussed in the current study occurred 0-2 days before the associated earthquakes and in the afternoon to sunset (i.e. between 12:00 and 20:00 local time). Pre-earthquake ionospheric anomalies occur mainly in areas near the epicenter. However, the maximum affected area in the ionosphere does not coincide with the vertical projection of the epicenter of the subsequent earthquake. The directions deviating from the epicenters do not follow a fixed rule. The corresponding ionospheric effects can also be observed in the magnetically conjugated region. However, the probability of anomaly appearance and the extent of the anomalies in the magnetically conjugated region are smaller than for the anomalies near the epicenter. Deep-focus earthquakes may also exhibit very significant pre-earthquake ionospheric anomalies.
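TEC anomalies in studies of this kind are commonly flagged with a sliding interquartile bound on the preceding days' values. The sketch below is a generic illustration of that idea; the window length, bound factor and the toy TEC values are arbitrary choices, not the paper's procedure:

```python
import statistics

def tec_anomaly(history, value, k=1.5):
    """Flag a TEC observation as anomalous when it falls outside
    median +/- k * IQR of the preceding days' values (a common
    sliding-window bound; k and window length are illustrative)."""
    q1, _, q3 = statistics.quantiles(history, n=4)  # default 'exclusive' method
    iqr = q3 - q1
    med = statistics.median(history)
    lower, upper = med - k * iqr, med + k * iqr
    return value > upper or value < lower

# Toy 15-day TEC history (TECU): quiet background around 20.
history = [20, 21, 19, 22, 20, 21, 20, 19, 21, 20, 22, 19, 20, 21, 20]
print(tec_anomaly(history, 30), tec_anomaly(history, 20))  # → True False
```

Positive and negative anomalies, both mentioned in the abstract, correspond to exceeding the upper and lower bounds respectively.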

  1. Single-size thermometric measurements on a size distribution of neutral fullerenes.

    Science.gov (United States)

    Cauchy, C; Bakker, J M; Huismans, Y; Rouzée, A; Redlich, B; van der Meer, A F G; Bordas, C; Vrakking, M J J; Lépine, F

    2013-05-10

    We present measurements of the velocity distribution of electrons emitted from mass-selected neutral fullerenes, performed at the intracavity free electron laser FELICE. We make use of mass-specific vibrational resonances in the infrared domain to selectively heat up one out of a distribution of several fullerene species. Efficient energy redistribution leads to decay via thermionic emission. Time-resolved electron kinetic energy distributions measured give information on the decay rate of the selected fullerene. This method is generally applicable to all neutral species that exhibit thermionic emission and provides a unique tool to study the stability of mass-selected neutral clusters and molecules that are only available as part of a size distribution.

  2. Development of optimization-based probabilistic earthquake scenarios for the city of Tehran

    Science.gov (United States)

    Zolfaghari, M. R.; Peyghaleh, E.

    2016-01-01

This paper presents the methodology and a practical example for the application of an optimization process to select earthquake scenarios which best represent probabilistic earthquake hazard in a given region. The method is based on simulation of a large dataset of potential earthquakes, representing the long-term seismotectonic characteristics of a given region. The simulation process uses Monte-Carlo simulation and regional seismogenic source parameters to generate a synthetic earthquake catalogue consisting of a large number of earthquakes, each characterized by magnitude, location, focal depth and fault characteristics. Such a catalogue provides full distributions of events in time, space and size; however, it demands large computation power when used for risk assessment, particularly when other sources of uncertainty are involved in the process. To reduce the number of selected earthquake scenarios, a mixed-integer linear program formulation is developed in this study. This approach results in a reduced set of optimization-based probabilistic earthquake scenarios, while maintaining the shape of the hazard curves and the full probabilistic picture by minimizing the error between hazard curves driven by the full and reduced sets of synthetic earthquake scenarios. To test the model, the regional seismotectonic and seismogenic characteristics of northern Iran are used to simulate a set of 10,000 years' worth of events consisting of some 84,000 earthquakes. The optimization model is then performed multiple times with various input data, taking into account probabilistic seismic hazard for Tehran city as the main constraint. The sensitivity of the selected scenarios to the user-specified site/return-period error-weight is also assessed. The methodology could enhance run time for full probabilistic earthquake studies like seismic hazard and risk assessment. The reduced set is representative of the contributions of all possible earthquakes; however, it requires far less
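The catalogue-simulation step described above typically draws magnitudes from a truncated Gutenberg-Richter distribution and places epicenters within each source zone. The sketch below is a generic illustration of that sampling step only (all parameter values, including the source-zone box, are hypothetical, not the study's source model):

```python
import numpy as np

# Generic sketch: sample a synthetic catalogue from a doubly truncated
# Gutenberg-Richter magnitude distribution with uniform epicenters.
rng = np.random.default_rng(42)

b = 1.0                 # illustrative G-R b-value
m_min, m_max = 4.0, 8.0
n_events = 84_000       # comparable in size to the study's catalogue

# Inverse-transform sampling of the truncated exponential magnitude law:
# F(m) = (1 - exp(-beta*(m - m_min))) / (1 - exp(-beta*(m_max - m_min)))
beta = b * np.log(10)
u = rng.random(n_events)
mags = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta

# Hypothetical rectangular source zone (degrees).
lons = rng.uniform(50.0, 54.0, n_events)
lats = rng.uniform(34.0, 38.0, n_events)

print(mags.min() >= m_min, mags.max() <= m_max)
```

The optimization stage then picks the small subset of these events whose hazard curves best reproduce those of the full catalogue.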

  3. Simulation of the measure of the microparticle size distribution in two dimensions

    International Nuclear Information System (INIS)

    Lameiras, F.S.; Silva Neto, P.P. da

    1987-01-01

For the nuclear ceramic industry, the determination of the pore size distribution is very important to predict the dimensional thermal stability of uranium dioxide sintered pellets. The determination of the grain size distribution is also very important to predict the operational behavior of these pellets, as well as to control the fabrication process. The Saltykov method is commonly used to determine microparticle size distributions. A simulation in two dimensions, using this method and the chord size distribution to calculate the area distribution, is presented.

  4. Implication of conjugate faulting in the earthquake brewing and originating process

    Energy Technology Data Exchange (ETDEWEB)

    Jones, L.M. (Massachusetts Inst. of Tech., Cambridge); Deng, Q.; Jiang, P.

    1980-03-01

The earthquake sequences, precursors, and geologic-structural background of the Haicheng, Tangshan, and Songpan-Pingwu earthquakes are discussed in this article. All of these earthquakes occurred in seismic zones controlled by the main boundary faults of an intraplate fault block. However, the fault plane of a main earthquake does not consist of the same faults, but is rather a related secondary fault. Together they formed a conjugate shearing rupture zone under the action of a regional tectonic stress field. As to the earthquake sequence, the foreshocks and aftershocks might occur on the conjugate fault planes within an epicentral region rather than being limited to the fault plane of the main earthquake, as in the distribution of foreshocks and aftershocks of the Haicheng earthquake. The characteristics of the long-, medium-, and imminent-term earthquake precursory anomalies of the three mentioned earthquakes, especially the character of well-studied anomalous phenomena in electrical resistivity, radon emission, groundwater and animal behavior, have been investigated. The studies of these earthquake precursors show that they were distributed over an area rather more extensive than the epicentral region. Some fault zones in the conjugate fault network usually appeared as distributed belts or concentrated zones of earthquake precursory anomalies, and can be traced in the medium-long term precursory field, but seem more distinct in the short-imminent term precursory anomalous field. These characteristics can be explained by the rupture and sliding originating along the conjugate shear network and the concentration of stress in the regional stress field.

  5. Investigating landslides caused by earthquakes - A historical review

    Science.gov (United States)

    Keefer, D.K.

    2002-01-01

Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on these accounts are of limited usefulness. For example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of the landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing "retrospective" analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and the relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still

  6. Effects of grain size distribution on the interstellar dust mass growth

    OpenAIRE

    Hirashita, Hiroyuki; Kuo, Tzu-Ming

    2011-01-01

The accretion of metals onto dust grains in interstellar clouds (called 'grain growth') could be one of the dominant processes that determine the dust content in galaxies. The importance of the grain size distribution for grain growth is demonstrated in this paper. First, we derive an analytical formula that gives the grain size distribution after grain growth in individual clouds for any initial grain size distribution. The time-scale of the grain growth is very sensitive to the grain size distribution...

  7. Cell-size distribution in epithelial tissue formation and homeostasis.

    Science.gov (United States)

    Puliafito, Alberto; Primo, Luca; Celani, Antonio

    2017-03-01

How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts a self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that the cell size distribution is self-similar. Our experimental data confirm these predictions and reveal that, as assumed in the theory, cell division times scale like a power law of cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistent with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size. © 2017 The Author(s).

  8. Tectonic Divisions Based on Gravity Data and Earthquake Distribution Characteristics in the North South Seismic Belt, China

    Science.gov (United States)

    Tian, T.; Zhang, J.; Jiang, W.

    2017-12-01

The North South Seismic Belt is located in the middle of China, and this seismic belt can be divided into 12 tectonic zones: the South West Yunnan (I), the Sichuan Yunnan (II), the Qiang Tang (III), the Bayan Har (IV), the East Kunlun Qaidam (V), the Qi Lian Mountain (VI), the Tarim (VII), the East Alashan (VIII), the East Sichuan (IX), the Ordos (X), the Middle Yangtze River (XI) and the Edge of Qinghai Tibet Block (XII) zones. Based on the Bouguer gravity data calculated from the EGM2008 model, Euler deconvolution was used to obtain the edges of the tectonic zones to amend the traditional tectonic divisions. In every tectonic zone and in the whole research area, the logarithm of the total seismic energy was calculated. Time series analysis (TSA) for all tectonic zones and the whole area was performed in R, and 12 equal divisions by latitude and longitude (A1-3, B1-3, C1-3, D1-3) were made as a control group. A simple linear trend fit in time was used, and QQ plots were used to show the residual distribution features. Among the zones defined by gravity anomalies, I, II and XII show similar statistical characteristics, with no earthquake-free year (a year in which no earthquake occurred in the zone); this suggests that the more seismically active a zone is, the more its statistics resemble those of the whole area, regardless of the zone's size or the number of earthquakes it contains. Zones IV, V, IX, III, VII and VIII show one or several earthquake-free years during the 1970s (IV, V and IX) and 1980s (III, VII and VIII), which may indicate that earthquake activity was low decades ago, or that the earthquake catalogue was incomplete in these zones, or both. Zones VI, X and XI show many earthquake-free years even in this decade, which means that in these zones earthquake activity was very low even if the catalogue was incomplete. In the control group, the earthquake-free years appeared random and independent of the seismic density, and in all equal

  9. Determination of Size Distributions in Nanocrystalline Powders by TEM, XRD and SAXS

    DEFF Research Database (Denmark)

    Jensen, Henrik; Pedersen, Jørgen Houe; Jørgensen, Jens Erik

    2006-01-01

    Crystallite size distributions and particle size distributions were determined by TEM, XRD, and SAXS for three commercially available TiO2 samples and one homemade. The theoretical Guinier Model was fitted to the experimental data and compared to analytical expressions. Modeling of the XRD spectra...... the size distribution obtained from the XRD experiments; however, a good agreement was obtained between the two techniques. Electron microscopy, SEM and TEM, confirmed the primary particle sizes, the size distributions, and the shapes obtained by XRD and SAXS. The SSEC78 powder and the commercially...

  10. It's Our Fault: better defining earthquake risk in Wellington, New Zealand

    Science.gov (United States)

    Van Dissen, R.; Brackley, H. L.; Francois-Holden, C.

    2012-12-01

    The Wellington region, home of New Zealand's capital city, is cut by a number of major right-lateral strike-slip faults, and is underlain by the currently locked west-dipping subduction interface between the downgoing Pacific Plate and the overriding Australian Plate. In its short historic period (ca. 160 years), the region has been impacted by large earthquakes on the strike-slip faults, but has yet to bear the brunt of a subduction interface rupture directly beneath the capital city. It's Our Fault is a comprehensive study of Wellington's earthquake risk. Its objective is to position the capital city of New Zealand to become more resilient through an encompassing study of the likelihood of large earthquakes, and of the effects and impacts of these earthquakes on humans and the built environment. It's Our Fault is jointly funded by New Zealand's Earthquake Commission, Accident Compensation Corporation, Wellington City Council, Wellington Region Emergency Management Group, Greater Wellington Regional Council, and Natural Hazards Research Platform. The programme has been running for six years, and key results to date include better definition of and constraints on: 1) the location, size, timing, and likelihood of large earthquakes on the active faults closest to Wellington; 2) earthquake size and ground shaking characterization of a representative suite of subduction interface rupture scenarios under Wellington; 3) stress interactions between these faults; 4) geological, geotechnical, and geophysical parameterisation of the near-surface sediments and basin geometry in Wellington City and the Hutt Valley; and 5) characterisation of earthquake ground shaking behaviour in these two urban areas in terms of the subsoil classes specified in the NZ Structural Design Standard. The above investigations are already supporting measures aimed at risk reduction, and collectively they will facilitate identification of additional actions that will have the greatest benefit towards further

  11. Earthquake evaluation of a substation network

    International Nuclear Information System (INIS)

    Matsuda, E.N.; Savage, W.U.; Williams, K.K.; Laguens, G.C.

    1991-01-01

    The impact of the occurrence of a large, damaging earthquake on a regional electric power system is a function of the geographical distribution of strong shaking, the vulnerability of various types of electric equipment located within the affected region, and the operational resources available to maintain or restore electric system functionality. Experience from numerous worldwide earthquake occurrences has shown that seismic damage to high-voltage substation equipment is typically the reason for post-earthquake loss of electric service. In this paper, the authors develop and apply a methodology to analyze earthquake impacts on Pacific Gas and Electric Company's (PG&E's) high-voltage electric substation network in central and northern California. The authors' objectives are to identify and prioritize ways to reduce the potential impact of future earthquakes on their electric system, refine PG&E's earthquake preparedness and response plans to be more realistic, and optimize seismic criteria for future equipment purchases for the electric system

  12. 1/f and the Earthquake Problem: Scaling constraints that facilitate operational earthquake forecasting

    Science.gov (United States)

    yoder, M. R.; Rundle, J. B.; Turcotte, D. L.

    2012-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or "1/f", nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this "1/f problem," it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitude of earthquake the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents. [Figure: record-breaking hazard map of southern California, 2012-08-06; "warm" colors indicate local acceleration (elevated hazard)]
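
    The Gutenberg-Richter and Omori scaling relations that the framework above combines can be sketched numerically. The a, b, K, c and p values below are illustrative placeholders, not parameters fitted by the study:

```python
def gr_rate(m, a=4.0, b=1.0):
    """Gutenberg-Richter scaling: cumulative annual rate of earthquakes
    with magnitude >= m. The a- and b-values here are illustrative."""
    return 10.0 ** (a - b * m)

def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Modified Omori law: aftershock rate (events/day) t_days after a
    mainshock, with illustrative K, c and p."""
    return K / (t_days + c) ** p

# With b = 1, each unit increase in magnitude cuts the rate tenfold.
print(gr_rate(5.0), gr_rate(6.0))   # 0.1 and 0.01 events per year
# Aftershock rates decay roughly as a power law in time.
print(omori_rate(1.0), omori_rate(100.0))
```

    With b close to 1, a one-unit magnitude step trades a tenfold rate change, which is the basic constraint the abstract exploits when relating bin size to local magnitude potential.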

  13. An application of earthquake prediction algorithm M8 in eastern ...

    Indian Academy of Sciences (India)

    2Institute of Earthquake Prediction Theory and Mathematical Geophysics, ... located about 70 km from a preceding M7.3 earthquake that occurred in ... local extremes of the seismic density distribution, and in the third approach, CI centers were distributed ...... Bird P 2003 An updated digital model of plate boundaries;.

  14. Comparison of aftershock sequences between 1975 Haicheng earthquake and 1976 Tangshan earthquake

    Science.gov (United States)

    Liu, B.

    2017-12-01

    The 1975 ML 7.3 Haicheng earthquake and the 1976 ML 7.8 Tangshan earthquake occurred in the same tectonic unit. There are significant differences in the spatial-temporal distribution, number of aftershocks, and time duration of the aftershock sequences that followed these two main shocks. Aftershocks can be triggered by the change in regional seismicity derived from the main shock, which is caused by the Coulomb stress perturbation. Based on the rate- and state-dependent friction law, we quantitatively estimated the possible aftershock time duration using seismicity data, and compared the results from different approaches. The results indicate that the aftershock time duration of the Tangshan main shock is several times that of the Haicheng main shock. This can be explained by the significant relationship between aftershock time duration and earthquake nucleation history, normal stress, and shear stress loading rate on the fault. The most obvious difference in nucleation history between these two main shocks is the foreshocks: the 1975 Haicheng earthquake had clear and long-lasting foreshocks, while the 1976 Tangshan earthquake did not. Abundant foreshocks may therefore indicate a long and active nucleation process that changed (weakened) the rocks in the source region, leading to a shorter aftershock sequence because stress in weak rocks decays faster.
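
    The dependence of aftershock duration on stressing rate invoked above can be illustrated with the rate-and-state estimate t_a = A·sigma / tau_dot (Dieterich, 1994). The numbers below are illustrative, not values for Haicheng or Tangshan:

```python
def aftershock_duration(A_sigma_MPa, stressing_rate_MPa_per_yr):
    """Rate-and-state aftershock duration t_a = A*sigma / tau_dot: the
    constitutive parameter A times normal stress, divided by the shear
    stressing rate. Illustrative inputs in MPa and MPa/yr."""
    return A_sigma_MPa / stressing_rate_MPa_per_yr

# A fault loaded ten times more slowly sustains aftershocks ten times longer.
slow = aftershock_duration(0.1, 0.001)   # ~100 years
fast = aftershock_duration(0.1, 0.01)    # ~10 years
print(slow, fast)
```

    The inverse dependence on stressing rate is one reason the abstract links the Tangshan/Haicheng duration contrast to loading conditions on the two faults.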

  15. On the agreement between small-world-like OFC model and real earthquakes

    International Nuclear Information System (INIS)

    Ferreira, Douglas S.R.; Papa, Andrés R.R.; Menezes, Ronaldo

    2015-01-01

    In this article we implemented simulations of the OFC model for earthquakes on two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. For both topologies, we studied the distribution of time intervals between consecutive earthquakes and the border effects present in each. In addition, we characterized the influence of the probability p on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each. In our results, distributions arise belonging to a family of non-traditional distribution functions, in agreement with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth's crust is in a self-organized critical state and furthermore point towards temporal and spatial correlations between earthquakes in distant places. - Highlights: • OFC model simulations for regular and small-world topologies. • For small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a self-organized critical state for the Earth's crust. • Point towards temporal and spatial correlations between earthquakes in distant places

  16. On the agreement between small-world-like OFC model and real earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Douglas S.R., E-mail: douglas.ferreira@ifrj.edu.br [Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro, Paracambi, RJ (Brazil); Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Papa, Andrés R.R., E-mail: papa@on.br [Geophysics Department, Observatório Nacional, Rio de Janeiro, RJ (Brazil); Instituto de Física, Universidade do Estado do Rio de Janeiro, Rio de Janeiro, RJ (Brazil); Menezes, Ronaldo, E-mail: rmenezes@cs.fit.edu [BioComplex Laboratory, Computer Sciences, Florida Institute of Technology, Melbourne (United States)

    2015-03-20

    In this article we implemented simulations of the OFC model for earthquakes on two different topologies: regular and small-world, where in the latter the links are randomly rewired with probability p. For both topologies, we studied the distribution of time intervals between consecutive earthquakes and the border effects present in each. In addition, we characterized the influence of the probability p on certain characteristics of the lattice and on the intensity of border effects. From the two topologies, networks of consecutive epicenters were constructed, which allowed us to analyze the distribution of connectivities of each. In our results, distributions arise belonging to a family of non-traditional distribution functions, in agreement with previous studies using data from actual earthquakes. Our results reinforce the idea that the Earth's crust is in a self-organized critical state and furthermore point towards temporal and spatial correlations between earthquakes in distant places. - Highlights: • OFC model simulations for regular and small-world topologies. • For small-world topology, distributions agree remarkably well with actual earthquakes. • Reinforce the idea of a self-organized critical state for the Earth's crust. • Point towards temporal and spatial correlations between earthquakes in distant places.
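
    A minimal version of the OFC toppling rule described in these two records can be sketched on a ring of sites. The full model uses a 2D lattice or its small-world rewiring; this 1D toy, with assumed parameters alpha = 0.2 and threshold fc = 1, is only meant to show the drive-and-relax dynamics:

```python
import random

def ofc_avalanche(stress, alpha=0.2, fc=1.0):
    """One drive-and-relax cycle of the Olami-Feder-Christensen model on a
    ring of sites. All sites are driven uniformly until the most loaded one
    reaches the threshold fc; each failing site then sheds a fraction alpha
    of its stress to each of its two neighbours (alpha < 0.25 makes the
    model non-conservative). Returns the relaxed stress field and the
    avalanche size (number of topplings)."""
    n = len(stress)
    bump = fc - max(stress)                # drive to the first failure
    stress = [s + bump for s in stress]
    size = 0
    queue = [i for i, s in enumerate(stress) if s >= fc]
    while queue:
        i = queue.pop()
        if stress[i] < fc:                 # stale entry, already relaxed
            continue
        size += 1
        s, stress[i] = stress[i], 0.0      # topple and shed to neighbours
        for j in ((i - 1) % n, (i + 1) % n):
            stress[j] += alpha * s
            if stress[j] >= fc:
                queue.append(j)
    return stress, size

random.seed(1)
field = [random.uniform(0.0, 0.5) for _ in range(100)]
field, size = ofc_avalanche(field)
print(size)
```

    Iterating this cycle and histogramming the avalanche sizes is how power-law event statistics emerge in such simulations.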

  17. Ground motion modeling of the 1906 San Francisco earthquake II: Ground motion estimates for the 1906 earthquake and scenario events

    Energy Technology Data Exchange (ETDEWEB)

    Aagaard, B; Brocher, T; Dreger, D; Frankel, A; Graves, R; Harmsen, S; Hartzell, S; Larsen, S; McCandless, K; Nilsson, S; Petersson, N A; Rodgers, A; Sjogreen, B; Tkalcic, H; Zoback, M L

    2007-02-09

    We estimate the ground motions produced by the 1906 San Francisco earthquake making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  18. Juvenile Penaeid Shrimp Density, Spatial Distribution and Size ...

    African Journals Online (AJOL)

    The effects of habitat characteristics (mangrove creek, sandflat, mudflat and seagrass meadow), water salinity, temperature, and depth on the density, spatial distribution and size distribution of juveniles of five commercially important penaeid shrimp species (Metapenaeus monoceros, M. stebbingi, Fenneropenaeus indicus, ...

  19. Aftershock distribution as a constraint on the geodetic model of coseismic slip for the 2004 Parkfield earthquake

    Science.gov (United States)

    Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt; ,

    2011-01-01

    Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.
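
    The curvature constraint described above (coseismic-slip curvature encouraged to equal the negative of the seismicity-density curvature) can be checked with discrete second differences. The two 1D profiles are hypothetical toy data, not the Parkfield model:

```python
def curvature(profile):
    """Discrete curvature: second difference of a 1D profile, one value
    per interior cell."""
    return [profile[i - 1] - 2 * profile[i] + profile[i + 1]
            for i in range(1, len(profile) - 1)]

# Hypothetical profiles along strike: a slip peak sits where the aftershock
# density has a trough, so the two curvatures are equal and opposite in every
# interior cell, which is what the constraint encourages.
slip = [0.0, 1.0, 2.0, 1.0, 0.0]
aftershock_density = [2.0, 1.0, 0.0, 1.0, 2.0]
print(curvature(slip), curvature(aftershock_density))
```

    In the actual inversion this relation enters as a soft penalty per model cell rather than an exact equality.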

  20. Preliminary results on landslides triggered by the Mw 7.8 Kaikoura earthquake of 14 November 2016 in northeast South Island, New Zealand

    Science.gov (United States)

    Gorum, Tolga; Yildirim, Cengiz

    2017-04-01

    This study presents the first results of an analysis of the landslides triggered by the Mw 7.8 Kaikoura earthquake of November 14, 2016, in the region between the Hikurangi subduction system of the North Island and the oblique collisional regime of the South Island (Alpine Fault). The earthquake ruptured several faults spanning two different tectonic domains: the strike-slip Marlborough fault system and the compressional North Canterbury Fault Zone. Here we present preliminary mapping results for the distribution of landslides triggered by the earthquake. An extensive landslide interpretation was carried out using sets of optical high-resolution satellite images (e.g. Sentinel-2 and Göktürk-2) for both the pre- and post-earthquake situations. The landslides were identified and mapped as polygons using multi-temporal visual image interpretation based on satellite imagery and morphological elements of landslide diagnostic indicators. Nearly 8,500 individual landslides of different sizes and types were mapped. The distribution pattern of the mapped coseismic landslides shows that slope failures are highly concentrated along the ruptured faults and the side slopes of structurally controlled major rivers, such as the Hapuku and Clarence Rivers, that drain the northeastern slopes of the region. Our spatial analysis of landslide occurrence against ground acceleration, lithology, slope, topographic relief and surface deformation indicates extensive control of steep slopes and high topographic relief on landsliding, with ground acceleration as the trigger. We show that the spatial distribution of slope failures decreases in frequency away from the earthquake faults up to 25 km towards the east, and that the abundance of landslides spatially coincides with the coseismic fault geometries and aftershock distributions. We conclude that the combined effect of complex rupture dynamics and topography primarily controls the distribution pattern of the landslides

  1. Along-strike variations in fault frictional properties along the San Andreas Fault near Cholame, California from joint earthquake and low-frequency earthquake relocations

    Science.gov (United States)

    Harrington, Rebecca M.; Cochran, Elizabeth S.; Griffiths, Emily M.; Zeng, Xiangfang; Thurber, Clifford H.

    2016-01-01

    Recent observations of low‐frequency earthquakes (LFEs) and tectonic tremor along the Parkfield–Cholame segment of the San Andreas fault suggest slow‐slip earthquakes occur in a transition zone between the shallow fault, which accommodates slip by a combination of aseismic creep and earthquakes, and the deep fault, which accommodates slip by stable sliding (>35 km depth). However, the spatial relationship between shallow earthquakes and LFEs remains unclear. Here, we present precise relocations of 34 earthquakes and 34 LFEs recorded during a temporary deployment of 13 broadband seismic stations from May 2010 to July 2011. We use the temporary array waveform data, along with data from permanent seismic stations and a new high‐resolution 3D velocity model, to illuminate the fine‐scale details of the seismicity distribution near Cholame and its relation to the distribution of LFEs. The depth of the boundary between earthquake and LFE hypocenters changes along strike and roughly follows the 350°C isotherm, suggesting frictional behavior may be, in part, thermally controlled. We observe no overlap in the depths of earthquakes and LFEs, with an ∼5 km separation between the deepest earthquakes and the shallowest LFEs. In addition, clustering in the relocated seismicity near the 2004 Mw 6.0 Parkfield earthquake hypocenter and near the northern boundary of the 1857 Mw 7.8 Fort Tejon rupture may highlight areas of frictional heterogeneity on the fault where earthquakes tend to nucleate.

  2. Investigating Landslides Caused by Earthquakes A Historical Review

    Science.gov (United States)

    Keefer, David K.

    Post-earthquake field investigations of landslide occurrence have provided a basis for understanding, evaluating, and mapping the hazard and risk associated with earthquake-induced landslides. This paper traces the historical development of knowledge derived from these investigations. Before 1783, historical accounts of the occurrence of landslides in earthquakes are typically so incomplete and vague that conclusions based on these accounts are of limited usefulness. For example, the number of landslides triggered by a given event is almost always greatly underestimated. The first formal, scientific post-earthquake investigation that included systematic documentation of the landslides was undertaken in the Calabria region of Italy after the 1783 earthquake swarm. From then until the mid-twentieth century, the best information on earthquake-induced landslides came from a succession of post-earthquake investigations largely carried out by formal commissions that undertook extensive ground-based field studies. Beginning in the mid-twentieth century, when the use of aerial photography became widespread, comprehensive inventories of landslide occurrence have been made for several earthquakes in the United States, Peru, Guatemala, Italy, El Salvador, Japan, and Taiwan. Techniques have also been developed for performing "retrospective" analyses years or decades after an earthquake that attempt to reconstruct the distribution of landslides triggered by the event. The additional use of Geographic Information System (GIS) processing and digital mapping since about 1989 has greatly facilitated the level of analysis that can be applied to mapped distributions of landslides. Beginning in 1984, syntheses of worldwide and national data on earthquake-induced landslides have defined their general characteristics and the relations between their occurrence and various geologic and seismic parameters. However, the number of comprehensive post-earthquake studies of landslides is still

  3. Adaptively smoothed seismicity earthquake forecasts for Italy

    Directory of Open Access Journals (Sweden)

    Yan Y. Kagan

    2010-11-01

    Full Text Available We present a model for estimation of the probabilities of future earthquakes of magnitudes m ≥ 4.95 in Italy. This model is a modified version of that proposed for California, USA, by Helmstetter et al. [2007] and Werner et al. [2010a], and it approximates seismicity using a spatially heterogeneous, temporally homogeneous Poisson point process. The temporal, spatial and magnitude dimensions are entirely decoupled. Magnitudes are independently and identically distributed according to a tapered Gutenberg-Richter magnitude distribution. We have estimated the spatial distribution of future seismicity by smoothing the locations of past earthquakes listed in two Italian catalogs: a short instrumental catalog, and a longer instrumental and historic catalog. The bandwidth of the adaptive spatial kernel is estimated by optimizing the predictive power of the kernel estimate of the spatial earthquake density in retrospective forecasts. When available and reliable, we used small earthquakes of m ≥ 2.95 to reveal active fault structures and 29 probable future epicenters. By calibrating the model with these two catalogs of different durations to create two forecasts, we intend to quantify the loss (or gain) of predictability incurred when only a short, but recent, data record is available. Both forecasts were scaled to five and ten years, and have been submitted to the Italian prospective forecasting experiment of the global Collaboratory for the Study of Earthquake Predictability (CSEP). An earlier forecast from the model was submitted by Helmstetter et al. [2007] to the Regional Earthquake Likelihood Model (RELM) experiment in California, and with more than half of the five-year experimental period over, the forecast has performed better than the others.
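
    The adaptive spatial smoothing described above can be sketched as a sum of Gaussian kernels whose bandwidth per epicenter adapts to the local catalog density. This is a toy illustration, not the Helmstetter et al. implementation; the k-nearest-neighbour bandwidth rule and the catalog below are assumptions of the sketch:

```python
import math

def adaptive_density(x, y, epicenters, k=2):
    """Kernel estimate of spatial earthquake density at (x, y): a 2D Gaussian
    is centred on each past epicenter, with bandwidth set to the distance
    from that epicenter to its k-th nearest neighbour, so kernels are narrow
    inside clusters and wide in sparse regions. Real forecasts optimise k on
    the retrospective predictive power of the estimate."""
    total = 0.0
    for ex, ey in epicenters:
        dists = sorted(math.hypot(ex - ox, ey - oy)
                       for ox, oy in epicenters if (ox, oy) != (ex, ey))
        h = max(dists[k - 1], 1e-6)                 # adaptive bandwidth
        r2 = (x - ex) ** 2 + (y - ey) ** 2
        total += math.exp(-r2 / (2 * h * h)) / (2 * math.pi * h * h)
    return total

# Hypothetical catalog: a tight cluster plus one isolated event.
catalog = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
print(adaptive_density(0.0, 0.0, catalog), adaptive_density(10.0, 10.0, catalog))
```

    Normalising such a density over a grid and multiplying by an overall rate and a magnitude distribution yields the decoupled forecast structure the abstract describes.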

  4. Overestimation of the earthquake hazard along the Himalaya: constraints in bracketing of medieval earthquakes from paleoseismic studies

    Science.gov (United States)

    Arora, Shreya; Malik, Javed N.

    2017-12-01

    The Himalaya is one of the most seismically active regions of the world. The occurrence of several large-magnitude earthquakes, viz. the 1905 Kangra earthquake (Mw 7.8), the 1934 Bihar-Nepal earthquake (Mw 8.2), the 1950 Assam earthquake (Mw 8.4), the 2005 Kashmir earthquake (Mw 7.6), and the 2015 Gorkha earthquake (Mw 7.8), is testimony to the ongoing tectonic activity. In the last few decades, tremendous efforts have been made along the Himalayan arc to understand the patterns of earthquake occurrence, size, extent, and return periods. Some of the large-magnitude earthquakes produced surface rupture, while some remained blind. Furthermore, due to the incompleteness of the earthquake catalogue, very few events can be correlated with medieval earthquakes. Based on the existing paleoseismic data, it is difficult to precisely determine the extent of surface rupture of these earthquakes, as well as of those that occurred during historic times. In this paper, we have compiled the paleoseismological data and recalibrated the radiocarbon ages from the trenches excavated by previous workers along the entire Himalaya, and compared the earthquake scenario with the past. Our studies suggest that there were multiple earthquake events with overlapping surface ruptures in small patches, with an average rupture length of 300 km limiting Mw 7.8-8.0 for the Himalayan arc, rather than two or three giant earthquakes rupturing the whole front. It has been identified that large-magnitude Himalayan earthquakes such as the 1905 Kangra, 1934 Bihar-Nepal, and 1950 Assam events occurred within a time frame of 45 years. If these events were dated, there is a high possibility that, within an uncertainty of ±50 years, they would be considered the remnants of one giant earthquake rupturing the entire Himalayan arc, thereby leading to an overestimation of the seismic hazard scenario in the Himalaya.

  5. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    Science.gov (United States)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

    The spatial distribution of aftershocks is closely linked to the rupture extent of the mainshock that preceded them, and a rapid analysis of aftershock patterns therefore has potential for use in near real-time estimates of earthquake impact. The correlation between aftershocks and slip distribution has frequently been used to estimate the fault dimensions of large historic earthquakes for which no, or insufficient, waveform data are available. With the advent of earthquake inversions that use seismic waveforms and geodetic data to constrain the slip distribution, the study of aftershocks has recently been largely focused on enhancing our understanding of the underlying mechanisms in a broader earthquake mechanics/dynamics framework. However, in a near real-time earthquake monitoring environment, in which aftershocks of large earthquakes are routinely detected and located, these data may also be effective in determining a fast estimate of the mainshock rupture area, which would aid in the rapid assessment of the impact of the earthquake. We have analyzed a considerable number of large recent earthquakes and their aftershock sequences and have developed an effective algorithm that determines the rupture extent of a mainshock from its aftershock distribution, in a fully automatic manner. The algorithm automatically removes outliers by spatial binning, and subsequently determines the best-fitting “strike” of the rupture and its length by projecting the aftershock epicenters onto a set of lines that cross the mainshock epicenter with incremental azimuths. For strike-slip or large dip-slip events, for which the surface projection of the rupture is rectilinear, the calculated strike correlates well with the strike of the fault, and the corresponding length, determined from the distribution of aftershocks projected onto the line, agrees well with the rupture length. In the case of a smaller dip-slip rupture with an aspect ratio closer to 1, the procedure gives a measure
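
    The projection procedure described above can be sketched as follows. The epicenter cloud is hypothetical, and the outlier-removal step by spatial binning is omitted:

```python
import math

def rupture_strike_length(aftershocks, mainshock, n_az=36):
    """Sketch of the strike/length estimate: project aftershock epicenters
    (x, y in km) onto lines through the mainshock epicenter at incremental
    azimuths. The azimuth with the largest projected extent approximates the
    rupture strike, and that extent approximates the rupture length."""
    best_len, best_az = 0.0, 0.0
    for i in range(n_az):
        az = math.pi * i / n_az                       # sweep 0..180 degrees
        ux, uy = math.cos(az), math.sin(az)
        proj = [(x - mainshock[0]) * ux + (y - mainshock[1]) * uy
                for x, y in aftershocks]
        extent = max(proj) - min(proj)
        if extent > best_len:
            best_len, best_az = extent, math.degrees(az)
    return best_len, best_az

# Hypothetical aftershocks strung along an east-west strike-slip rupture.
cloud = [(x, 0.1 * (-1) ** x) for x in range(-10, 11)]
length, strike = rupture_strike_length(cloud, (0, 0))
print(length, strike)   # ~20 km extent along an azimuth of ~0 degrees
```

    For an elongated cloud the projected extent is maximised along the cloud's long axis, which is why this simple sweep recovers the strike for rectilinear surface projections.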

  6. Recent developments in the Dutch firm-size distribution

    NARCIS (Netherlands)

    M.A. Carree (Martin); A.R. Thurik (Roy)

    1991-01-01

    textabstractThis study investigates the development of the firm-size distribution in the Netherlands using various measures. Data are used for the period 1978 through 1989 covering practically the entire Dutch private sector. The results show a general tendency towards smaller firm sizes in

  7. Width of surface rupture zone for thrust earthquakes: implications for earthquake fault zoning

    Directory of Open Access Journals (Sweden)

    P. Boncio

    2018-01-01

    remove outliers (e.g. 90 % probability of the cumulative distribution function) and define the zone where the likelihood of having surface ruptures is the highest. This might help in sizing the zones of SFRH during seismic microzonation (SM) mapping. In order to shape zones of SFRH, a very detailed earthquake geologic study of the fault is necessary (the highest level of SM, i.e. Level 3 SM according to Italian guidelines). In the absence of such a very detailed study (basic SM, i.e. Level 1 SM of Italian guidelines), a width of ∼840 m (90 % probability from the "simple thrust" database of distributed ruptures, excluding B-M, F-S and Sy fault ruptures) is suggested to be sufficiently precautionary. For more detailed SM, where the fault is carefully mapped, one must consider that the highest SFRH is concentrated in a narrow zone, ∼60 m in width, that should be considered a fault avoidance zone (more than one-third of the distributed ruptures are expected to occur within this zone). The fault rupture hazard zones should be asymmetric with respect to the trace of the principal fault. The average footwall to hanging wall ratio (FW : HW) is close to 1 : 2 in all analysed cases. These criteria are applicable to "simple thrust" faults, without considering possible B-M or F-S fault ruptures due to large-scale folding, and without considering sympathetic slip on distant faults. Areas potentially susceptible to B-M or F-S fault ruptures should have their own zones of fault rupture hazard, which can be defined by detailed knowledge of the structural setting of the area (shape, wavelength, tightness and lithology of the thrust-related large-scale folds) and by geomorphic evidence of past secondary faulting. Distant active faults, potentially susceptible to sympathetic triggering, should be zoned as separate principal faults. The entire database of distributed ruptures (including B-M, F-S and Sy fault ruptures) can be useful in poorly known areas
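
    The percentile cut used above to trim outliers and size a hazard zone can be sketched as follows, with hypothetical rupture distances rather than the paper's database:

```python
import math

def zone_width(distances_m, prob=0.90):
    """Width of a surface-rupture hazard zone: the distance from the
    principal fault trace that contains a given fraction of the observed
    distributed ruptures (here the 90 % cut of the cumulative distribution
    used to remove outliers). The sample below is hypothetical."""
    ranked = sorted(distances_m)
    idx = max(0, math.ceil(prob * len(ranked)) - 1)
    return ranked[idx]

# 100 hypothetical rupture distances from the fault trace, 10..1000 m.
distances = list(range(10, 1010, 10))
print(zone_width(distances))        # 90 % of ruptures fall within 900 m
```

    Changing `prob` shows how sensitive the zone width is to the chosen cut, which is the trade-off behind using 90 % rather than the full distribution.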

  8. Evaluation of Earthquake-Induced Effects on Neighbouring Faults and Volcanoes: Application to the 2016 Pedernales Earthquake

    Science.gov (United States)

    Bejar, M.; Alvarez Gomez, J. A.; Staller, A.; Luna, M. P.; Perez Lopez, R.; Monserrat, O.; Chunga, K.; Herrera, G.; Jordá, L.; Lima, A.; Martínez-Díaz, J. J.

    2017-12-01

    It has long been recognized that earthquakes change the stress in the upper crust around the fault rupture and can influence the short-term behaviour of neighbouring faults and volcanoes. Rapid estimates of these stress changes can provide the authorities managing the post-disaster situation with a useful tool to identify and monitor potential threats and to update the estimates of seismic and volcanic hazard in a region. Space geodesy is now routinely used following an earthquake to image the displacement of the ground and estimate the rupture geometry and the distribution of slip. Using the obtained source model, it is possible to evaluate the remaining moment deficit and to infer the stress changes on nearby faults and volcanoes produced by the earthquake, which can be used to identify which faults and volcanoes have been brought closer to failure or activation. Although these procedures are commonly used today, the transfer of these results to the authorities managing the post-disaster situation is not straightforward, and thus their usefulness is reduced in practice. Here we propose a methodology to evaluate the potential influence of an earthquake on nearby faults and volcanoes and to create easy-to-understand maps for decision-making support after an earthquake. We apply this methodology to the Mw 7.8, 2016 Ecuador earthquake. Using Sentinel-1 SAR and continuous GPS data, we measure the coseismic ground deformation and estimate the distribution of slip. We then use this model to evaluate the moment deficit on the subduction interface and the changes of stress on the surrounding faults and volcanoes. The results are compared with the seismic and volcanic events that have occurred after the earthquake. We discuss the potential and limits of the methodology and the lessons learnt from discussions with local authorities.

  9. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
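The empirical model's calibrated casualty rates can be illustrated with the lognormal rate curve used in PAGER-style fatality models, summed over population exposed in each shaking-intensity bin. The parameter values and exposure numbers below are hypothetical placeholders, not PAGER's calibrated country coefficients:

```python
from math import erf, log, sqrt

def fatality_rate(s, theta=13.2, beta=0.25):
    """Lognormal fatality-rate curve nu(S) = Phi(ln(S / theta) / beta).
    theta and beta here are invented placeholders, not calibrated
    country-specific coefficients."""
    return 0.5 * (1.0 + erf(log(s / theta) / (beta * sqrt(2.0))))

def expected_fatalities(exposure):
    """Expected deaths = sum over intensity bins of rate(S) * population."""
    return sum(pop * fatality_rate(s) for s, pop in exposure.items())

# population exposed per shaking-intensity (MMI) bin, invented numbers
exposure = {6.0: 500000, 7.0: 200000, 8.0: 50000, 9.0: 10000}
deaths = expected_fatalities(exposure)
```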

  10. The mechanism of earthquake

    Science.gov (United States)

    Lu, Kunquan; Cao, Zexian; Hou, Meiying; Jiang, Zehui; Shen, Rong; Wang, Qiang; Sun, Gang; Liu, Jixing

    2018-03-01

The physical mechanism of earthquakes remains a challenging issue to be clarified. Seismologists have traditionally attributed shallow earthquakes to the elastic rebound of crustal rocks. The seismic energy calculated following the elastic rebound theory and with experimental data on rocks, however, shows a large discrepancy with measurement, a fact that has been dubbed "the heat flow paradox". For intermediate-focus and deep-focus earthquakes, both occurring in the region of the mantle, there is no reasonable explanation either. This paper discusses the physical mechanism of earthquakes from a new perspective, starting from the fact that both the crust and the mantle are discrete collective systems of matter with slow dynamics, as well as from the basic principles of physics, especially some new concepts of condensed matter physics that have emerged in recent years. (1) Stress distribution in the earth's crust: Without taking the tectonic force into account, according to the rheological principle that "everything flows", the normal stress and transverse stress must be balanced due to the effect of gravitational pressure over a long period of time, so no differential stress is to be expected in the original crustal rocks. The tectonic force is successively transferred and accumulated via stick-slip motions of rock blocks, which squeeze the fault gouge and then exert force upon other rock blocks. The superposition of this additional lateral tectonic force and the original stress gives rise to the real-time stress in crustal rocks. The mechanical characteristics of fault gouge differ from those of rocks because it consists of granular matter. The elastic moduli of fault gouges are much smaller than those of rocks, and they grow with increasing pressure. This peculiarity of the fault gouge leads to a tectonic force that increases with depth in a nonlinear fashion. The distribution and variation of the tectonic stress in the crust are specified. (2) The

  11. NON-COHESIVE SOILS’ COMPRESSIBILITY AND UNEVEN GRAIN-SIZE DISTRIBUTION RELATION

    Directory of Open Access Journals (Sweden)

    Anatoliy Mirnyy

    2016-03-01

Full Text Available This paper presents the results of a laboratory investigation of soil compression phases with consideration of various granulometric compositions. Materials and Methods: An experimental soil box with microscale video recording for studying compression phases is described. Photo and video materials showing the differences in microscale particle movements were obtained for non-cohesive soils with different grain-size distributions. Results: The analysis of the compression test results and the separation of elastic and plastic deformations allow each compression phase to be identified. It is shown that soil density correlates with deformability parameters only for the same grain-size distribution. Based on the test results, the authors suggest that the compaction ratio is not sufficient for estimating deformability without taking grain-size distribution into account. Discussion and Conclusions: Considering grain-size distribution allows refining the technological requirements for artificial soil structures, backfills, and sand beds. Further studies could be used for developing standard documents, SP45.13330.2012 in particular.

  12. Size distribution of BaF2 nanocrystallites in transparent glass ceramics

    International Nuclear Information System (INIS)

    Bocker, Christian; Bhattacharyya, Somnath; Hoeche, Thomas; Ruessel, Christian

    2009-01-01

In glasses with the composition 1.9 Na2O-15 K2O-7.5 Al2O3-69.6 SiO2-6 BaF2 (in mol.%), nanocrystalline BaF2 precipitates form upon heat treatment. Using dark-field and bright-field transmission electron micrographs, crystallite size distributions were obtained for samples crystallized at various temperatures. The size distributions are corrected according to the 'tomato-salad problem' and then compared to various theories of grain growth that take into account coarsening of the crystallites during heat treatment. The experimental crystallite size distributions show, for smaller mean crystallite sizes, a more symmetric shape than the theories of Lifshitz-Slyozov-Wagner (LSW) or Brailsford and Wynblatt (B and W) predict. As the mean crystallite size increases to about 18 nm at higher heat-treatment temperatures, the full width at half maximum of the observed distributions decreases and becomes even narrower than the LSW function. These findings indicate that no coarsening by Ostwald ripening or coalescence occurs in the investigated nano glass ceramics. This is explained by the formation of a diffusion barrier around each nanocrystallite, which limits the size of the crystallites and hence results in such a narrow and uniform crystallite size distribution.

  13. Estimating particle number size distributions from multi-instrument observations with Kalman Filtering

    Energy Technology Data Exchange (ETDEWEB)

    Viskari, T.

    2012-07-01

Atmospheric aerosol particles have several important effects on the environment and human society. The exact impact of aerosol particles is largely determined by their particle size distributions. However, no single instrument is able to measure the whole range of the particle size distribution. Estimating a particle size distribution from multiple simultaneous measurements remains a challenge in aerosol physics research. Current methods to combine different measurements require assumptions concerning the overlapping measurement ranges and have difficulties in accounting for measurement uncertainties. In this thesis, the Extended Kalman Filter (EKF) is presented as a promising method to estimate particle number size distributions from multiple simultaneous measurements. The particle number size distribution estimated by EKF includes information from prior particle number size distributions as propagated by a dynamical model and is based on the reliabilities of the applied information sources. Known physical processes and dynamically evolving error covariances constrain the estimate both over time and over particle size. The method was tested with measurements from a Differential Mobility Particle Sizer (DMPS), an Aerodynamic Particle Sizer (APS) and a nephelometer, with the particle number concentration chosen as the state of interest. The initial EKF implementation presented here includes simplifications, yet the results are positive and the estimate successfully incorporated information from the chosen instruments. For particle sizes smaller than 4 micrometers, the estimate fits the available measurements and smooths the particle number size distribution over both time and particle diameter. The estimate has difficulties with particles larger than 4 micrometers due to issues with both the measurements and the dynamical model in that size range. The EKF implementation appears to reduce the impact of measurement noise on the estimate, but has a delayed reaction to sudden
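The core of the approach, weighting each information source by its reliability, can be illustrated with a scalar Kalman measurement update for a single size bin. This is a toy sketch: the thesis uses a full Extended Kalman Filter with a dynamical model, and all numbers below are invented:

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman measurement update for one particle-size bin.
    x_prior, p_prior : prior estimate of number concentration and its
                       error variance
    z, r             : a measurement and its error variance
    """
    k = p_prior / (p_prior + r)        # Kalman gain: trust in the data
    x_post = x_prior + k * (z - x_prior)
    p_post = (1.0 - k) * p_prior       # variance shrinks after the update
    return x_post, p_post

# Fuse a model forecast with two instruments of different reliability;
# the more precise instrument pulls the estimate harder.
x, p = 1000.0, 400.0                           # forecast: 1000 cm^-3
x, p = kalman_update(x, p, z=1100.0, r=100.0)  # precise instrument
x, p = kalman_update(x, p, z=900.0, r=900.0)   # noisy instrument
```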

  14. Connecting slow earthquakes to huge earthquakes.

    Science.gov (United States)

    Obara, Kazushige; Kato, Aitaro

    2016-07-15

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of their high sensitivity to stress changes in the seismogenic zone. Episodic stress transfer to megathrust source faults leads to an increased probability of triggering huge earthquakes if the adjacent locked region is critically loaded. Careful and precise monitoring of slow earthquakes may provide new information on the likelihood of impending huge earthquakes. Copyright © 2016, American Association for the Advancement of Science.

  15. Characterizing property distributions of polymeric nanogels by size-exclusion chromatography.

    Science.gov (United States)

    Mourey, Thomas H; Leon, Jeffrey W; Bennett, James R; Bryan, Trevor G; Slater, Lisa A; Balke, Stephen T

    2007-03-30

Nanogels are highly branched, swellable polymer structures with average diameters between 1 and 100 nm. Size-exclusion chromatography (SEC) fractionates materials in this size range, and it is commonly used to measure nanogel molar mass distributions. For many nanogel applications, it may be more important to calculate the particle size distribution from the SEC data than it is to calculate the molar mass distribution. Other useful nanogel property distributions include particle shape, area, and volume, as well as polymer volume fraction per particle. All can be obtained from multi-detector SEC data with proper calibration and data analysis methods. This work develops the basic equations for calculating several of these differential and cumulative property distributions and applies them to SEC data from the analysis of polymeric nanogels. The methods are analogous to those used to calculate the more familiar SEC molar mass distributions. Calibration methods and characteristics of the distributions are discussed, and the effects of detector noise and mismatched concentration and molar mass sensitive detector signals are examined.

  16. Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake

    International Nuclear Information System (INIS)

    Herrmann, R.B.

    1991-01-01

The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes that occurred during the winter of 1811-1812. Because of their size, with a seismic moment of 2.0 × 10^27 dyne-cm or moment magnitude Mw = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories from each subelement, based on forward modeling of the wavefield, are combined to yield a three-component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault.
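The combination step, lagging each subelement's contribution by its rupture and travel time and summing at the site, can be sketched as follows. This is a simplified illustration: the traces and delays are assumed to come from a forward-modeled wavefield, which is not reproduced here:

```python
import numpy as np

def site_time_history(subfaults, dt=0.01, nt=4096):
    """Superpose lagged subelement contributions into one site trace.
    Each subfault dict carries 'trace' (its Green's-function-convolved
    contribution, assumed precomputed elsewhere) and 'tlag' (rupture
    time plus travel time to the site, in seconds)."""
    out = np.zeros(nt)
    for sf in subfaults:
        i0 = int(round(sf["tlag"] / dt))       # sample index of onset
        tr = np.asarray(sf["trace"])[: nt - i0]
        out[i0 : i0 + len(tr)] += tr           # linear superposition
    return out

# Two toy subelements: the second arrives two samples after the first.
seis = site_time_history(
    [{"tlag": 0.00, "trace": [1.0]}, {"tlag": 0.02, "trace": [2.0]}],
    dt=0.01, nt=10)
```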

  17. Empirical evidence for multi-scaled controls on wildfire size distributions in California

    Science.gov (United States)

    Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.

    2014-12-01

Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by definition, purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions indicated that most ecoregions' distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggest that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire size distributions are multi-scaled and likely are not purely SOC.
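One-parameter power-law fits of the kind described above are commonly obtained with the continuous maximum-likelihood estimator alpha_hat = 1 + n / sum(ln(x_i / xmin)) (the Clauset-style estimator). A minimal sketch, checked against synthetic data rather than the California fire atlas:

```python
import math
import random

def powerlaw_mle(sizes, xmin):
    """Continuous power-law exponent by maximum likelihood,
    alpha_hat = 1 + n / sum(ln(x / xmin)), fit only to sizes >= xmin."""
    tail = [x for x in sizes if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Sanity check on synthetic event sizes drawn from a known power law
# by inverse-CDF sampling (alpha = 2.5, xmin = 100 ha; both invented).
rng = random.Random(42)
alpha_true, xmin = 2.5, 100.0
sizes = [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20000)]
alpha_hat = powerlaw_mle(sizes, xmin)   # close to 2.5
```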

  18. Tsunami hazard assessments with consideration of uncertain earthquake characteristics

    Science.gov (United States)

    Sepulveda, I.; Liu, P. L. F.; Grigoriu, M. D.; Pritchard, M. E.

    2017-12-01

    The uncertainty quantification of tsunami assessments due to uncertain earthquake characteristics faces important challenges. First, the generated earthquake samples must be consistent with the properties observed in past events. Second, it must adopt an uncertainty propagation method that determines tsunami uncertainties at a feasible computational cost. In this study we propose a new methodology, which improves on existing tsunami uncertainty assessment methods. The methodology considers two uncertain earthquake characteristics: the slip distribution and the location. First, it generates consistent earthquake slip samples by means of a Karhunen-Loeve (K-L) expansion and a translation process (Grigoriu, 2012), applicable to any non-rectangular rupture area and marginal probability distribution. The K-L expansion was recently applied by LeVeque et al. (2016). We have extended the methodology by analyzing accuracy criteria in terms of the tsunami initial conditions. Furthermore, and unlike this reference, we preserve the original probability properties of the slip distribution by avoiding post-sampling treatments such as earthquake slip scaling. Our approach is analyzed and justified in the framework of the present study. Second, the methodology uses a Stochastic Reduced Order Model (SROM) (Grigoriu, 2009) instead of a classic Monte Carlo simulation, which reduces the computational cost of the uncertainty propagation. The methodology is applied to a real case: we study tsunamis generated at the site of the 2014 Chilean earthquake, using earthquake samples with expected magnitude Mw 8. We first demonstrate that the stochastic approach of our study generates earthquake samples consistent with the target probability laws. We also show that the results obtained from SROM are more accurate than classic Monte Carlo simulations.
We finally validate the methodology by comparing the simulated tsunamis and the tsunami records for
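The Karhunen-Loeve step, drawing correlated slip samples from a covariance model, can be sketched as follows. This is an illustrative 1-D example with an assumed exponential covariance, not the study's fault geometry or its translation process:

```python
import numpy as np

def kl_samples(mean, cov, n_modes, n_samples, rng):
    """Draw correlated random-field samples from a truncated
    Karhunen-Loeve expansion: s = mean + sum_k sqrt(lam_k) * z_k * v_k,
    with z_k standard normal and (lam_k, v_k) the leading eigenpairs
    of the covariance matrix."""
    lam, vec = np.linalg.eigh(cov)             # ascending eigenvalues
    keep = np.argsort(lam)[::-1][:n_modes]     # largest n_modes modes
    basis = vec[:, keep] * np.sqrt(lam[keep])  # scaled eigenvectors
    z = rng.standard_normal((n_modes, n_samples))
    return mean[:, None] + basis @ z

# Illustrative 1-D "fault" of 50 patches with an assumed exponential
# covariance (correlation length 0.2 of the fault length).
x = np.linspace(0.0, 1.0, 50)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
slips = kl_samples(np.full(50, 1.0), cov, n_modes=10, n_samples=1000,
                   rng=np.random.default_rng(0))
```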

  19. Simultaneous estimation of earthquake source parameters and ...

    Indian Academy of Sciences (India)

    moderate-size aftershocks (Mw 2.1–5.1) of the Mw 7.7 2001 Bhuj earthquake. The horizontal- ... claimed a death toll of 20,000 people. This earth- .... quake occurred west of Kachchh, with an epicenter at 24. ◦. N, 68 ..... for dominance of body waves for R ≤ 100 km ...... Bhuj earthquake sequence; J. Asian Earth Sci. 40.

  20. Marmara Island earthquakes, of 1265 and 1935; Turkey

    Directory of Open Access Journals (Sweden)

    Y. Altınok

    2006-01-01

Full Text Available The long-term seismicity of the Marmara Sea region in northwestern Turkey is relatively well recorded. Some large events and some of the smaller ones are clearly associated with fault zones known to be seismically active, which have distinct morphological expressions and have generated damaging earthquakes both before and since. Some less common, moderate-size earthquakes have occurred in the vicinity of the Marmara Islands in the western Marmara Sea. This paper presents an extended summary of the most important earthquakes that occurred in 1265 and 1935, since known as the Marmara Island earthquakes. The data and approaches used therefore have the potential to document earthquake ruptures of fault segments and may extend the earthquake record far back before known history, including the rock falls and abnormal sea waves observed during these events, thus improving hazard evaluations and the fundamental understanding of the earthquake process.

  1. Characterizing the Temporal and Spatial Distribution of Earthquake Swarms in the Puerto Rico - Virgin Island Block

    Science.gov (United States)

    Hernandez, F. J.; Lopez, A. M.; Vanacore, E. A.

    2017-12-01

    The presence of earthquake swarms and clusters north and northeast of the island of Puerto Rico in the northeastern Caribbean has been recorded by the Puerto Rico Seismic Network (PRSN) since it started operations in 1974. Although clusters in the Puerto Rico-Virgin Islands (PRVI) block have been observed for over forty years, the nature of their enigmatic occurrence is still poorly understood. In this study, the entire seismic catalog of the PRSN, of approximately 31,000 seismic events, has been limited to a subset of 18,000 events located north of Puerto Rico in an effort to characterize and understand the underlying mechanism of these clusters. This research uses two declustering methods to identify cluster events in the PRVI block. The first method, known as Model Independent Stochastic Declustering (MISD), filters the catalog subset into clustered and background seismic events, while the second method applies a spatio-temporal algorithm to the catalog in order to link separate seismic events into clusters. After applying these two methods, identified clusters were classified as either earthquake swarms or seismic sequences. Once identified, each cluster was analyzed to identify correlations with other clusters in its geographic region. Results from this research seek to: (1) unravel the earthquake clustering behavior through the use of different statistical methods and (2) better understand the mechanism behind this clustering of earthquakes. Preliminary results have allowed us to identify and classify 128 clusters, categorized into 11 distinct regions based on their centers, and their spatio-temporal distribution has been used to examine intra- and interplate dynamics.
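A window-based declustering pass of the kind used to separate clustered from background events can be sketched as follows. The fixed distance and time windows are illustrative only; real Gardner-Knopoff-style windows scale with mainshock magnitude, and this is not the MISD method used in the study:

```python
from datetime import datetime, timedelta

def window_decluster(events, dist_km=30.0, days=10.0):
    """Toy window declustering: any event within dist_km and days of a
    larger earlier event is flagged as clustered.  events is a list of
    (time, x_km, y_km, mag) tuples with positions already projected to
    kilometres; returns the time-sorted events and a parallel flag list."""
    events = sorted(events, key=lambda e: e[0])
    clustered = [False] * len(events)
    for i, (t0, x0, y0, m0) in enumerate(events):
        for j in range(i + 1, len(events)):
            t1, x1, y1, m1 = events[j]
            if t1 - t0 > timedelta(days=days):
                break                          # catalog is time-sorted
            near = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 <= dist_km
            if near and m1 < m0:
                clustered[j] = True
    return events, clustered

evs = [
    (datetime(2020, 1, 1), 0.0, 0.0, 5.0),    # mainshock
    (datetime(2020, 1, 2), 5.0, 5.0, 3.1),    # near in space and time
    (datetime(2020, 1, 3), 200.0, 0.0, 3.0),  # too far away
    (datetime(2020, 3, 1), 1.0, 1.0, 3.2),    # too late
]
ordered, flags = window_decluster(evs)
```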

  2. Connecting slow earthquakes to huge earthquakes

    OpenAIRE

    Obara, Kazushige; Kato, Aitaro

    2016-01-01

    Slow earthquakes are characterized by a wide spectrum of fault slip behaviors and seismic radiation patterns that differ from those of traditional earthquakes. However, slow earthquakes and huge megathrust earthquakes can have common slip mechanisms and are located in neighboring regions of the seismogenic zone. The frequent occurrence of slow earthquakes may help to reveal the physics underlying megathrust events as useful analogs. Slow earthquakes may function as stress meters because of th...

  3. To what extent stresses resulting from the earth's surface trigger earthquakes

    Science.gov (United States)

    Klose, C. D.

    2009-12-01

    The debate on static versus dynamic earthquake triggering mainly concentrates on endogenous crustal forces, including fault-fault interactions or seismic wave transients of remote earthquakes. Incomprehensibly, earthquake triggering due to surface processes still receives little scientific attention. This presentation continues a discussion on the hypothesis of how "tiny" stresses stemming from the earth's surface can trigger major earthquakes, such as China's M7.9 Wenchuan earthquake of May 2008. This seismic event is thought to have been triggered by up to 1.1 billion metric tons of water (~130 m) that accumulated in the Minjiang River Valley at the eastern margin of the Longmen Shan. Specifically, the water level rose by ~80 m (static), with additional seasonal water level changes of ~50 m (dynamic). Two and a half years prior to the mainshock, static and dynamic Coulomb failure stresses were induced on the nearby Beichuan thrust fault system at <17 km depth. Triggering stresses were equivalent to levels of daily tides and perturbed a fault area measuring 416+/-96 km^2. The mainshock ruptured after 2.5 years, when only the static stressing regime was predominant and the transient stressing (seasonal water level) was infinitesimally small. The short triggering delay of about 2 years suggests that the Beichuan fault might have been near the end of its seismic cycle, which may also confirm what previous geological findings have indicated. This presentation shows to what extent the static and 1-year periodic triggering stress perturbations a) accounted for equivalent tectonic loading, given a 4-10 kyr earthquake cycle, and b) altered the background seismicity beneath the valley, i.e., the daily event rate and the earthquake size distribution.
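Changes in the earthquake size distribution of the kind mentioned above are usually tracked through the Gutenberg-Richter b-value. A minimal maximum-likelihood sketch using Aki's estimator, b = log10(e) / (mean(M) - Mmin), with the usual half-bin correction omitted for brevity and checked against synthetic magnitudes:

```python
import math
import random

def b_value_mle(mags, m_min):
    """Aki's maximum-likelihood b-value,
    b = log10(e) / (mean(M) - m_min), over magnitudes M >= m_min.
    (The usual half-bin correction to m_min is omitted for brevity.)"""
    m = [x for x in mags if x >= m_min]
    return math.log10(math.e) / (sum(m) / len(m) - m_min)

# Sanity check on synthetic magnitudes drawn from a Gutenberg-Richter
# distribution with b = 1 (exponential above m_min with rate b * ln 10).
rng = random.Random(1)
m_min, b_true = 2.0, 1.0
mags = [m_min + rng.expovariate(b_true * math.log(10.0))
        for _ in range(50000)]
b_hat = b_value_mle(mags, m_min)   # close to 1.0
```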

  4. Ground-motion modeling of the 1906 San Francisco Earthquake, part II: Ground-motion estimates for the 1906 earthquake and scenario events

    Science.gov (United States)

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; McCandless, K.; Nilsson, S.; Petersson, N.A.; Rodgers, A.; Sjogreen, B.; Zoback, M.L.

    2008-01-01

    We estimate the ground motions produced by the 1906 San Francisco earthquake, making use of the recently developed Song et al. (2008) source model, which combines the available geodetic and seismic observations, and recently constructed 3D geologic and seismic velocity models. Our estimates of the ground motions for the 1906 earthquake are consistent across five ground-motion modeling groups employing different wave propagation codes and simulation domains. The simulations successfully reproduce the main features of the Boatwright and Bundock (2005) ShakeMap, but tend to overpredict the intensity of shaking by 0.1-0.5 modified Mercalli intensity (MMI) units. Velocity waveforms at sites throughout the San Francisco Bay Area exhibit characteristics consistent with rupture directivity, local geologic conditions (e.g., sedimentary basins), and the large size of the event (e.g., durations of strong shaking lasting tens of seconds). We also compute ground motions for seven hypothetical scenarios rupturing the same extent of the northern San Andreas fault, considering three additional hypocenters and an additional, random distribution of slip. Rupture directivity exerts the strongest influence on the variations in shaking, although sedimentary basins do consistently contribute to the response in some locations, such as Santa Rosa, Livermore, and San Jose. These scenarios suggest that future large earthquakes on the northern San Andreas fault may subject the current San Francisco Bay urban area to stronger shaking than a repeat of the 1906 earthquake. Ruptures propagating southward towards San Francisco appear to expose more of the urban area to a given intensity level than do ruptures propagating northward.

  5. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin

    2015-02-03

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  6. Identifying Active Faults by Improving Earthquake Locations with InSAR Data and Bayesian Estimation: The 2004 Tabuk (Saudi Arabia) Earthquake Sequence

    KAUST Repository

    Xu, Wenbin; Dutta, Rishabh; Jonsson, Sigurjon

    2015-01-01

    A sequence of shallow earthquakes of magnitudes ≤5.1 took place in 2004 on the eastern flank of the Red Sea rift, near the city of Tabuk in northwestern Saudi Arabia. The earthquakes could not be well located due to the sparse distribution of seismic stations in the region, making it difficult to associate the activity with one of the many mapped faults in the area and thus to improve the assessment of seismic hazard in the region. We used Interferometric Synthetic Aperture Radar (InSAR) data from the European Space Agency’s Envisat and ERS‐2 satellites to improve the location and source parameters of the largest event of the sequence (Mw 5.1), which occurred on 22 June 2004. The mainshock caused a small but distinct ∼2.7  cm displacement signal in the InSAR data, which reveals where the earthquake took place and shows that seismic reports mislocated it by 3–16 km. With Bayesian estimation, we modeled the InSAR data using a finite‐fault model in a homogeneous elastic half‐space and found the mainshock activated a normal fault, roughly 70 km southeast of the city of Tabuk. The southwest‐dipping fault has a strike that is roughly parallel to the Red Sea rift, and we estimate the centroid depth of the earthquake to be ∼3.2  km. Projection of the fault model uncertainties to the surface indicates that one of the west‐dipping normal faults located in the area and oriented parallel to the Red Sea is a likely source for the mainshock. The results demonstrate how InSAR can be used to improve locations of moderate‐size earthquakes and thus to identify currently active faults.

  7. A rare moderate‐sized (Mw 4.9) earthquake in Kansas: Rupture process of the Milan, Kansas, earthquake of 12 November 2014 and its relationship to fluid injection

    Science.gov (United States)

    Choy, George; Rubinstein, Justin L.; Yeck, William; McNamara, Daniel E.; Mueller, Charles; Boyd, Oliver

    2016-01-01

    The largest recorded earthquake in Kansas occurred northeast of Milan on 12 November 2014 (Mw 4.9) in a region previously devoid of significant seismic activity. Applying multistation processing to data from local stations, we are able to detail the rupture process and rupture geometry of the mainshock, identify the causative fault plane, and delineate the expansion and extent of the subsequent seismic activity. The earthquake followed rapid increases of fluid injection by multiple wastewater injection wells in the vicinity of the fault. The source parameters and behavior of the Milan earthquake and foreshock–aftershock sequence are similar to characteristics of other earthquakes induced by wastewater injection into permeable formations overlying crystalline basement. This earthquake also provides an opportunity to test the empirical relation that uses felt area to estimate moment magnitude for historical earthquakes for Kansas.

  8. Geological structure of Osaka basin and characteristic distributions of structural damage caused by earthquake; Osaka bonchi kozo to shingai tokusei

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, K.; Shiono, K.; Inoue, N.; Senda, S. [Osaka City University, Osaka (Japan), Faculty of Science]; Ryoki, K. [Osaka Polytechnic College, Osaka (Japan)]; Shichi, R. [Nagoya University, Nagoya (Japan), Faculty of Science]

    1996-05-01

    The paper investigates the relations between the damage caused by the Hyogo-ken Nanbu earthquake and deep underground structures. A characteristic of the earthquake damage distribution is that the damage concentrated near faults. Most of the damage was observed on the relatively downthrown side of the faults, slightly inclined toward the sea, rather than directly above the faults. A distribution like this appears to be closely related to underground structures. Therefore, a distribution map of the depth of the basement granite in the Osaka sedimentary basin was drawn, referring to basement rock depths obtained from the gravity anomaly distribution map and from surveys using the air-gun reflection method. Moreover, the three-dimensional underground structure was determined by 3-D gravity analysis. The conclusion was as follows: in the heavily damaged M7 zone of the lowland in particular, the gravity anomaly shows that the basement rock below the zone drops off near the cliff toward the sea, which indicates a high possibility of its being a fault. There is a high possibility that the zone suffered mainly from damage caused by the focusing of seismic rays through refraction and total reflection. 3 refs., 8 figs.

  9. Historical earthquake investigations in Greece

    Directory of Open Access Journals (Sweden)

    K. Makropoulos

    2004-06-01

    Full Text Available The active tectonics of the area of Greece and its seismic activity have always been present in the country's history. Many researchers, tempted to work on Greek historical earthquakes, have realized that this is a task not easily fulfilled. The existing catalogues of strong historical earthquakes are useful tools to perform general SHA studies. However, a variety of supporting datasets, non-uniformly distributed in space and time, need to be further investigated. In the present paper, a review of historical earthquake studies in Greece is attempted. The seismic history of the country is divided into four main periods. In each one of them, characteristic examples, studies and approaches are presented.

  10. Urban MEMS based seismic network for post-earthquakes rapid disaster assessment

    Science.gov (United States)

    D'Alessandro, Antonino; Luzio, Dario; D'Anna, Giuseppe

    2014-05-01

    Life losses following a disastrous earthquake depend mainly on building vulnerability, the intensity of shaking, and the timeliness of rescue operations. In recent decades, increases in population and industrial density have significantly increased the exposure of urban areas to earthquakes. The potential impact of a strong earthquake on a town center can be reduced by timely and correct actions of the emergency management centers. A real-time urban seismic network can drastically reduce casualties immediately following a strong earthquake by promptly providing information about the distribution of ground-shaking levels. Emergency management centers, with functions in the immediate post-earthquake period, could use this information to allocate and prioritize resources to minimize loss of human life. However, due to the high cost of seismological instrumentation, urban seismic networks dense enough to reduce the rate of fatalities have not been realized. Recent technological developments in MEMS (Micro Electro-Mechanical Systems) could today allow the realization of a high-density urban seismic network for rapid post-earthquake disaster assessment, suitable for mitigating earthquake effects. In the 1990s, MEMS accelerometers revolutionized the automotive-airbag industry, and they are today widely used in laptops, game controllers and mobile phones. Owing to their great commercial success, research into and development of MEMS accelerometers are actively pursued around the world. Nowadays, the sensitivity and dynamic range of these sensors allow accurate recording of earthquakes of moderate to strong magnitude. Because of their low cost and small size, MEMS accelerometers may be employed for the realization of high-density seismic networks. The MEMS accelerometers could be installed inside sensitive places (of high vulnerability and exposure), such as schools, hospitals, public buildings and places of

  11. Seismicity map tools for earthquake studies

    Science.gov (United States)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

    We report on the development of a new online set of tools, built on Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, JavaScript, and MySQL) with the new tools and a database of earthquake data. The platform allows us to carry out statistical and deterministic analysis of earthquake data on top of Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw line segments on the map, multiple straight lines horizontally and vertically, and multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and of cluster shifts between segments in space. The platform offers many filters, such as for plotting selected magnitude ranges or time periods. The plotting facility supports statistically based plots such as cumulative earthquake-magnitude plots and earthquake-magnitude histograms, calculation of the b-value, etc. What is novel in the platform is the additional set of deterministic tools. Using the newly developed horizontal and vertical line and circle tools, we have studied the spatial distribution trends of many earthquakes, and we show here for the first time a link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as of deterministic precursors. We plan to present many new results based on our newly developed platform.
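    The b-value calculation mentioned in this record is the standard Gutenberg-Richter computation. As an illustration (not the platform's actual PHP/JavaScript code), a minimal maximum-likelihood estimate using Aki's formula can be sketched in Python; the synthetic catalog and completeness magnitude below are assumptions:

    ```python
    import math
    import random

    def b_value(magnitudes, mc):
        """Maximum-likelihood Gutenberg-Richter b-value (Aki's estimator):
        b = log10(e) / (mean(M) - Mc), using only events with M >= Mc."""
        tail = [m for m in magnitudes if m >= mc]
        if not tail:
            raise ValueError("no events at or above the completeness magnitude Mc")
        return math.log10(math.e) / (sum(tail) / len(tail) - mc)

    # Synthetic catalog: under Gutenberg-Richter with b = 1, the excess
    # magnitude M - Mc is exponentially distributed with rate b * ln(10).
    random.seed(0)
    mc = 3.0
    catalog = [mc + random.expovariate(1.0 * math.log(10)) for _ in range(10000)]
    b = b_value(catalog, mc)  # close to 1.0 for this synthetic catalog
    ```

    A real application would first estimate the completeness magnitude from the catalog itself, since including events below Mc biases the estimate upward.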

  12. Presentation and analysis of a worldwide database of earthquake-induced landslide inventories

    Science.gov (United States)

    Tanyas, Hakan; van Westen, Cees J.; Allstadt, Kate E.; Nowicki Jessee, M. Anna; Gorum, Tolga; Jibson, Randall W.; Godt, Jonathan W.; Sato, Hiroshi P.; Schmitt, Robert G.; Marc, Odin; Hovius, Niels

    2017-01-01

    Earthquake-induced landslide (EQIL) inventories are essential tools to extend our knowledge of the relationship between earthquakes and the landslides they can trigger. Regrettably, such inventories are difficult to generate and therefore scarce, and the available ones differ in terms of their quality and level of completeness. Moreover, access to existing EQIL inventories is currently difficult because there is no centralized database. To address these issues, we compiled EQIL inventories from around the globe based on an extensive literature study. The database contains information on 363 landslide-triggering earthquakes and includes 66 digital landslide inventories. To make these data openly available, we created a repository to host the digital inventories that we have permission to redistribute through the U.S. Geological Survey ScienceBase platform. It can grow over time as more authors contribute their inventories. We analyze the distribution of EQIL events by time period and location, more specifically breaking down the distribution by continent, country, and mountain region. Additionally, we analyze frequency distributions of EQIL characteristics, such as the approximate area affected by landslides, total number of landslides, maximum distance from fault rupture zone, and distance from epicenter when the fault plane location is unknown. For the available digital EQIL inventories, we examine the underlying characteristics of landslide size, topographic slope, roughness, local relief, distance to streams, peak ground acceleration, peak ground velocity, and Modified Mercalli Intensity. Also, we present an evaluation system to help users assess the suitability of the available inventories for different types of EQIL studies and model development.

  13. Estimation of particle size distribution of nanoparticles from electrical ...

    Indian Academy of Sciences (India)

    2018-02-02

    Feb 2, 2018 ... An indirect method of estimation of size distribution of nanoparticles in a nanocomposite is ... The present approach exploits DC electrical current–voltage ... the sizes of nanoparticles (NPs) by electrical characterization.

  14. Phase characteristics of earthquake accelerogram and its application

    International Nuclear Information System (INIS)

    Ohsaki, Y.; Iwasaki, R.; Ohkawa, I.; Masao, T.

    1979-01-01

    As the input earthquake motion for the seismic design of nuclear power plant structures and equipment, an artificial time history compatible with a smoothed design response spectrum is frequently used. This paper deals with a wave-generation technique based on the phase characteristics of earthquake accelerograms, as an alternative to an envelope time function. The concept of a 'phase differences' distribution' is defined to represent the phase characteristics of earthquake motion. The procedure proposed in this paper consists of the following steps: (1) specify a design response spectrum and derive corresponding initial modal amplitudes; (2) determine a phase differences' distribution corresponding to an envelope function, the shape of which depends on the magnitude and epicentral distance of the earthquake; (3) derive the phase angles at all modal frequencies from the phase differences' distribution; (4) generate a time history by inverse Fourier transform from the amplitudes and phase angles thus determined; (5) calculate the response spectrum; (6) compare the specified and calculated response spectra, and correct the amplitude at each frequency so that the response spectrum becomes consistent with the specified one; (7) repeat steps 4 through 6 until the specified and calculated response spectra agree with sufficient accuracy. (orig.)
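    Steps 2 through 4 of the procedure (phase angles accumulated from a phase differences' distribution, followed by inverse Fourier synthesis) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the flat amplitude spectrum and the normal distribution of phase differences (whose mean and width stand in for the magnitude- and distance-dependent shape) are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_modes = 512   # number of Fourier modes (assumed)
    dt = 0.02       # time step in seconds (assumed)

    # Step 1 (simplified): flat initial modal amplitudes; the real procedure
    # derives these from the target design response spectrum.
    amp = np.ones(n_modes)

    # Step 2: draw phase differences from an assumed normal distribution.
    # A nonzero mean phase difference concentrates the energy of the
    # synthesized trace in time, playing the role of the envelope function.
    dphi = rng.normal(loc=-2.0 * np.pi * 150 / n_modes, scale=0.6,
                      size=n_modes - 1)

    # Step 3: accumulate the differences into phase angles.
    phase = np.concatenate(([0.0], np.cumsum(dphi)))

    # Step 4: inverse Fourier transform gives a real-valued time history.
    spectrum = amp * np.exp(1j * phase)
    acc = np.fft.irfft(spectrum, n=2 * n_modes)
    t = np.arange(acc.size) * dt
    ```

    Steps 5 through 7 would wrap this synthesis in a loop that computes the response spectrum of `acc`, compares it with the target, and rescales `amp` frequency by frequency until the two spectra agree.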

  15. Pareto Distribution of Firm Size and Knowledge Spillover Process as a Network

    OpenAIRE

    Tomohiko Konno

    2013-01-01

    The firm size distribution is generally considered to follow a Pareto distribution. In the present paper, we show that a Pareto firm size distribution results from the spillover network model introduced in Konno (2010).

  16. Do I Really Sound Like That? Communicating Earthquake Science Following Significant Earthquakes at the NEIC

    Science.gov (United States)

    Hayes, G. P.; Earle, P. S.; Benz, H.; Wald, D. J.; Yeck, W. L.

    2017-12-01

    The U.S. Geological Survey's National Earthquake Information Center (NEIC) responds to about 160 magnitude 6.0 and larger earthquakes every year and is regularly inundated with information requests following earthquakes that cause significant impact. These requests often start within minutes after the shaking occurs and come from a wide user base including the general public, media, emergency managers, and government officials. Over the past several years, the NEIC's earthquake response has evolved its communications strategy to meet the changing needs of users and the evolving media landscape. The NEIC produces a cascade of products, starting with basic hypocentral parameters and culminating with estimates of fatalities and economic loss. We speed the delivery of content by prepositioning and automatically generating products such as aftershock plots, regional tectonic summaries, maps of historical seismicity, and event summary posters. Our goal is to have information immediately available so we can quickly address the response needs of a particular event or sequence. This information is distributed to hundreds of thousands of users through social media, email alerts, programmatic data feeds, and webpages. Many of our products are included in event summary posters that can be downloaded and printed for local display. After significant earthquakes, keeping up with direct inquiries and interview requests from TV, radio, and print reporters is always challenging. The NEIC works with the USGS Office of Communications and the USGS Science Information Services to organize and respond to these requests. Written executive summary reports are produced and distributed to USGS personnel and collaborators throughout the country. These reports are updated during the response to keep our message consistent and information up to date. This presentation will focus on communications during NEIC's rapid earthquake response but will also touch on the broader USGS traditional and

  17. Oklahoma’s recent earthquakes and saltwater disposal

    Science.gov (United States)

    Walsh, F. Rall; Zoback, Mark D.

    2015-01-01

    Over the past 5 years, parts of Oklahoma have experienced marked increases in the number of small- to moderate-sized earthquakes. In three study areas that encompass the vast majority of the recent seismicity, we show that the increases in seismicity follow 5- to 10-fold increases in the rates of saltwater disposal. Adjacent areas where there has been relatively little saltwater disposal have had comparatively few recent earthquakes. In the areas of seismic activity, the saltwater disposal principally comes from “produced” water, saline pore water that is coproduced with oil and then injected into deeper sedimentary formations. These formations appear to be in hydraulic communication with potentially active faults in crystalline basement, where nearly all the earthquakes are occurring. Although most of the recent earthquakes have posed little danger to the public, the possibility of triggering damaging earthquakes on potentially active basement faults cannot be discounted. PMID:26601200

  18. Measurement of size distribution for 220Rn progeny attached aerosols

    International Nuclear Information System (INIS)

    Zhang Lei; Guo Qiuju; Zhuo Weihai

    2008-01-01

    The size distribution of radioactive aerosols is a very important factor in evaluating the internal exposure dose contributed by radon and thoron progeny in the environment. In order to measure the size distribution of aerosols with attached thoron progeny, a device was developed using wire screens. The count median diameter (CMD) and the geometric standard deviation (GSD) of the attached radioactive aerosols were calculated by collecting ThB and using CR-39 as the detector. Field measurements at Yangjiang City in Guangdong Province show that the CMDs are distributed between 30 and 130 nm, and the GSDs between 1.9 and 3.3. The measurements also show that CMDs are smaller in more humid surroundings and that ventilation has a great influence on the aerosol size distribution. The CMDs in adobe houses are smaller than those in concrete houses. (authors)
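    For illustration, the CMD and GSD of a measured sample are the geometric mean and geometric standard deviation of the particle diameters; under the usual lognormal assumption they can be estimated as sketched below. The synthetic sample is an assumption, chosen only so its CMD sits inside the 30-130 nm range reported above:

    ```python
    import math
    import random

    def cmd_gsd(diameters):
        """Count median diameter and geometric standard deviation of a
        particle-size sample, assuming an approximately lognormal shape."""
        logs = [math.log(d) for d in diameters]
        mean = sum(logs) / len(logs)
        var = sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)
        return math.exp(mean), math.exp(math.sqrt(var))

    # Synthetic diameters (nm): lognormal with CMD = 80 nm and GSD = 2.5.
    random.seed(7)
    sample = [random.lognormvariate(math.log(80.0), math.log(2.5))
              for _ in range(20000)]
    cmd, gsd = cmd_gsd(sample)  # recovers roughly 80 nm and 2.5
    ```

    Real wire-screen measurements recover these parameters indirectly, by fitting penetration fractions through the screens rather than from individually sized particles.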

  19. Particle-size distribution study: PILEDRIVER event

    Energy Technology Data Exchange (ETDEWEB)

    Rabb, David D [Lawrence Radiation Laboratory, University of California, Livermore, CA (United States)

    1970-05-15

    Reentry was made by mining into the chimney of broken rock created by a nuclear detonation in granite at a depth of 1500 feet. The chimney was 160 ft in radius and 890 ft high. An injection of radioactive melt was encountered at 300 ft from the shot point. Radiochemical analyses determined that the yield of the PILEDRIVER nuclear device was 61 ± 10 kt. Two samples of chimney rubble totalling over 5,000 lb were obtained during the postshot exploration. These samples of broken granite underwent screen analysis, a radioactivity-distribution study, and cursory leaching tests. The two samples were separated into 25 different size fractions. An average of the particle-size data from the two samples showed that 17% of the material is between 20 mesh and 1 in.; 42% between 1 and 6 in.; and 34% between 6 in. and 3 ft. The distribution of radioactivity varies markedly with particle size. The minus 100-mesh material comprises less than 1.5% of the weight but contains almost 20% of the radioactivity. Small-scale batch-leaching tests showed that 25% of the radioactivity could be removed in a few hours by a film-percolation leach with distilled water, and 40% with dilute acid. Brief studies were made of the microfractures in the broken rock and of the radioactivity created by the PILEDRIVER explosion. (author)

  20. Particle-size distribution study: PILEDRIVER event

    International Nuclear Information System (INIS)

    Rabb, David D.

    1970-01-01

    Reentry was made by mining into the chimney of broken rock created by a nuclear detonation in granite at a depth of 1500 feet. The chimney was 160 ft in radius and 890 ft high. An injection of radioactive melt was encountered at 300 ft from the shot point. Radiochemical analyses determined that the yield of the PILEDRIVER nuclear device was 61 ± 10 kt. Two samples of chimney rubble totalling over 5,000 lb were obtained during the postshot exploration. These samples of broken granite underwent screen analysis, a radioactivity-distribution study, and cursory leaching tests. The two samples were separated into 25 different size fractions. An average of the particle-size data from the two samples showed that 17% of the material is between 20 mesh and 1 in.; 42% between 1 and 6 in.; and 34% between 6 in. and 3 ft. The distribution of radioactivity varies markedly with particle size. The minus 100-mesh material comprises less than 1.5% of the weight but contains almost 20% of the radioactivity. Small-scale batch-leaching tests showed that 25% of the radioactivity could be removed in a few hours by a film-percolation leach with distilled water, and 40% with dilute acid. Brief studies were made of the microfractures in the broken rock and of the radioactivity created by the PILEDRIVER explosion. (author)

  1. Self-similar drop-size distributions produced by breakup in chaotic flows

    International Nuclear Information System (INIS)

    Muzzio, F.J.; Tjahjadi, M.; Ottino, J.M.; Department of Chemical Engineering, University of Massachusetts, Amherst, Massachusetts 01003; Department of Chemical Engineering, Northwestern University, Evanston, Illinois 60208)

    1991-01-01

    Deformation and breakup of immiscible fluids in deterministic chaotic flows are governed by self-similar distributions of stretching histories and stretching rates, and they produce populations of droplets of widely distributed sizes. Scaling reveals that the distributions of drop sizes collapse onto two self-similar families; each family exhibits a different shape, presumably due to changes in the breakup mechanism

  2. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    Science.gov (United States)

    Gunardi, Setiawan, Ezra Putranda

    2015-12-01

    Indonesia is a country with high earthquake risk because of its position at the boundary of the earth's tectonic plates. An earthquake can cause a very large amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk away from the government or the (re)insurance companies, so that enough money can be collected to implement rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, 'act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and then invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is reduced or stopped, and the money is instead paid to the sponsor company to compensate its loss from the catastrophe event. When we consider earthquakes only, the amount of the reduced cash flow can be determined based on the earthquake's magnitude. A case study with Indonesian earthquake-magnitude data shows that the probability distribution of the maximum magnitude can be modeled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, 'coupon only at risk' bonds, and 'principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model's parameters, the GEV parameters, the coupon percentage, and the discounted-cash-flow rule is then explored via Monte Carlo simulation.
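    A minimal Monte Carlo sketch of the simplest case (a bond whose whole principal is at risk) follows. All parameter values are illustrative assumptions, not those fitted in the paper: the bond pays face value at maturity unless the GEV-distributed annual maximum magnitude exceeds a trigger in any year, in which case the investor receives only a recovery fraction; discounting uses a simulated CIR short-rate path.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # --- assumed, illustrative parameters ---
    face, recovery = 1.0, 0.5                       # payoff without / with trigger
    T, steps = 3.0, 36                              # 3-year bond, monthly steps
    kappa, theta, sigma, r0 = 0.5, 0.05, 0.1, 0.04  # CIR short-rate parameters
    mu, sc, xi = 7.0, 0.5, 0.1                      # GEV location/scale/shape
    m_trigger = 7.9                                 # triggering magnitude
    n_paths = 20000
    dt = T / steps

    def gev_sample(size):
        """Inverse-CDF sampling of a GEV(mu, sc, xi) variate (xi != 0)."""
        u = rng.uniform(size=size)
        return mu + sc * ((-np.log(u)) ** (-xi) - 1.0) / xi

    # CIR paths via full-truncation Euler; accumulate the rate integral
    # that defines the discount factor exp(-integral of r dt).
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(steps):
        integral += r * dt
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(np.maximum(r, 0.0)) * dw

    # Trigger test: did any annual maximum magnitude exceed the threshold?
    annual_max = gev_sample((n_paths, int(T)))
    triggered = annual_max.max(axis=1) > m_trigger

    payoff = np.where(triggered, recovery, face)
    price = float(np.mean(np.exp(-integral) * payoff))  # bond price per unit face
    ```

    The two risk sources are independent here; pricing the 'coupon at risk' variants would add discounted coupon legs that are switched off after a trigger.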

  3. Comparing particle-size distributions in modern and ancient sand-bed rivers

    Science.gov (United States)

    Hajek, E. A.; Lynds, R. M.; Huzurbazar, S. V.

    2011-12-01

    Particle-size distributions yield valuable insight into processes controlling sediment supply, transport, and deposition in sedimentary systems. This is especially true in ancient deposits, where effects of changing boundary conditions and autogenic processes may be detected from deposited sediment. In order to improve interpretations in ancient deposits and constrain uncertainty associated with new methods for paleomorphodynamic reconstructions in ancient fluvial systems, we compare particle-size distributions in three active sand-bed rivers in central Nebraska (USA) to grain-size distributions from ancient sandy fluvial deposits. Within the modern rivers studied, particle-size distributions of active-layer, suspended-load, and slackwater deposits show consistent relationships despite some morphological and sediment-supply differences between the rivers. In particular, there is substantial and consistent overlap between bed-material and suspended-load distributions, and the coarsest material found in slackwater deposits is comparable to the coarse fraction of suspended-sediment samples. Proxy bed-load and slackwater-deposit samples from the Kayenta Formation (Lower Jurassic, Utah/Colorado, USA) show overlap similar to that seen in the modern rivers, suggesting that these deposits may be sampled for paleomorphodynamic reconstructions, including paleoslope estimation. We also compare grain-size distributions of channel, floodplain, and proximal-overbank deposits in the Willwood (Paleocene/Eocene, Bighorn Basin, Wyoming, USA), Wasatch (Paleocene/Eocene, Piceance Creek Basin, Colorado, USA), and Ferris (Cretaceous/Paleocene, Hanna Basin, Wyoming, USA) formations. Grain-size characteristics in these deposits reflect how suspended- and bed-load sediment is distributed across the floodplain during channel avulsion events. In order to constrain uncertainty inherent in such estimates, we evaluate uncertainty associated with sample collection, preparation, analytical

  4. The Sea-Ice Floe Size Distribution

    Science.gov (United States)

    Stern, H. L., III; Schweiger, A. J. B.; Zhang, J.; Steele, M.

    2017-12-01

    The size distribution of ice floes in the polar seas affects the dynamics and thermodynamics of the ice cover and its interaction with the ocean and atmosphere. Ice-ocean models are now beginning to include the floe size distribution (FSD) in their simulations. In order to characterize seasonal changes of the FSD and provide validation data for our ice-ocean model, we calculated the FSD in the Beaufort and Chukchi seas over two spring-summer-fall seasons (2013 and 2014) using more than 250 cloud-free visible-band scenes from the MODIS sensors on NASA's Terra and Aqua satellites, identifying nearly 250,000 ice floes between 2 and 30 km in diameter. We found that the FSD follows a power-law distribution at all locations, with a seasonally varying exponent that reflects floe break-up in spring, loss of smaller floes in summer, and the return of larger floes after fall freeze-up. We extended the results to floe sizes from 10 m to 2 km at selected time/space locations using more than 50 high-resolution radar and visible-band satellite images. Our analysis used more data and applied greater statistical rigor than any previous study of the FSD. The incorporation of the FSD into our ice-ocean model resulted in reduced sea-ice thickness, mainly in the marginal ice zone, which improved the simulation of sea-ice extent and yielded an earlier ice retreat. We also examined results from 17 previous studies of the FSD, most of which report power-law FSDs but with widely varying exponents. It is difficult to reconcile the range of results due to different study areas, seasons, and methods of analysis. We review the power-law representation of the FSD in these studies and discuss some mathematical details that are important to consider in any future analysis.
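    Fitting the power-law exponent of an FSD is commonly done with the maximum-likelihood estimator for a continuous power law (the Hill/Clauset-style formula) rather than by regressing on a log-log histogram. A sketch on a synthetic floe-size sample follows; the exponent and lower cutoff are assumptions, not the study's fitted values:

    ```python
    import math
    import random

    def powerlaw_alpha(sizes, x_min):
        """Maximum-likelihood exponent of a continuous power law
        p(x) ~ x^(-alpha) for x >= x_min:  alpha = 1 + n / sum(ln(x/x_min))."""
        tail = [x for x in sizes if x >= x_min]
        return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

    # Synthetic floe "diameters" (km) drawn from a power law with alpha = 2.5
    # via inverse-CDF sampling: x = x_min * (1 - u)^(-1/(alpha - 1)).
    random.seed(1)
    x_min, alpha = 2.0, 2.5
    floes = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
             for _ in range(50000)]
    est = powerlaw_alpha(floes, x_min)  # close to 2.5
    ```

    Comparing exponents across studies also requires agreeing on whether the fitted quantity is the density or the cumulative distribution, since their power-law slopes differ by one.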

  5. A multivariate rank test for comparing mass size distributions

    KAUST Repository

    Lombard, F.; Potgieter, C. J.

    2012-01-01

    Particle size analyses of a raw material are commonplace in the mineral processing industry. Knowledge of particle size distributions is crucial in planning milling operations to enable an optimum degree of liberation of valuable mineral phases

  6. Cometary dust size distributions from flyby spacecraft

    International Nuclear Information System (INIS)

    Divine, N.

    1988-01-01

    Prior to the Halley flybys in 1986, the distribution of cometary dust grains with particle size was approximated using models that provided reasonable fits to the dynamics of dust tails, anti-tails, and infrared spectra. These distributions have since been improved using fluence data (i.e., particle fluxes integrated over time along the flyby trajectory) from three spacecraft. The fluence-derived distributions are appropriate for comparison with simultaneous infrared photometry (from Earth) because they sample the particles in the same way as the IR data do (along the line of sight) and because they are directly proportional to the concentration distribution in the region of the coma that dominates the IR emission

  7. Estimation of 1-D velocity models beneath strong-motion observation sites in the Kathmandu Valley using strong-motion records from moderate-sized earthquakes

    Science.gov (United States)

    Bijukchhen, Subeg M.; Takai, Nobuo; Shigefuji, Michiko; Ichiyanagi, Masayoshi; Sasatani, Tsutomu; Sugimura, Yokito

    2017-07-01

    The Himalayan collision zone experiences many seismic activities with large earthquakes occurring at certain time intervals. The damming of the proto-Bagmati River as a result of rapid mountain-building processes created a lake in the Kathmandu Valley that eventually dried out, leaving thick unconsolidated lacustrine deposits. Previous studies have shown that the sediments are 600 m thick in the center. A location in a seismically active region, and the possible amplification of seismic waves due to thick sediments, have made Kathmandu Valley seismically vulnerable. It has suffered devastation due to earthquakes several times in the past. The development of the Kathmandu Valley into the largest urban agglomerate in Nepal has exposed a large population to seismic hazards. This vulnerability was apparent during the Gorkha Earthquake (Mw7.8) on April 25, 2015, when the main shock and ensuing aftershocks claimed more than 1700 lives and nearly 13% of buildings inside the valley were completely damaged. Preparing safe and up-to-date building codes to reduce seismic risk requires a thorough study of ground motion amplification. Characterizing subsurface velocity structure is a step toward achieving that goal. We used the records from an array of strong-motion accelerometers installed by Hokkaido University and Tribhuvan University to construct 1-D velocity models of station sites by forward modeling of low-frequency S-waves. Filtered records (0.1-0.5 Hz) from one of the accelerometers installed at a rock site during a moderate-sized (mb4.9) earthquake on August 30, 2013, and three moderate-sized (Mw5.1, Mw5.1, and Mw5.5) aftershocks of the 2015 Gorkha Earthquake were used as input motion for modeling of low-frequency S-waves. We consulted available geological maps, cross-sections, and borehole data as the basis for initial models for the sediment sites. This study shows that the basin has an undulating topography and sediment sites have deposits of varying thicknesses

  8. Fault roughness and strength heterogeneity control earthquake size and stress drop

    KAUST Repository

    Zielke, Olaf; Galis, Martin; Mai, Paul Martin

    2017-01-01

    An earthquake's stress drop is related to the frictional breakdown during sliding and constitutes a fundamental quantity of the rupture process. High-speed laboratory friction experiments that emulate the rupture process imply stress drop values

  9. Measurement of the size distributions of radon progeny in indoor air

    International Nuclear Information System (INIS)

    Hopke, P.K.; Ramamurthi, M.; Li, C.S.

    1990-01-01

    A major problem in evaluating the health risk posed by airborne radon progeny in indoor atmospheres is the lack of available information on the activity-weighted size distributions that occur in the domestic environment. With an automated, semicontinuous, graded screen array system, we made a series of measurements of activity-weighted size distributions in several houses in the northeastern United States. Measurements were made in an unoccupied house, in which human aerosol-generating activities were simulated. The time evolution of the aerosol size distribution was measured in each situation. Results of these measurements are presented

  10. Use of commercial vessels in survey augmentation: the size-frequency distribution

    Directory of Open Access Journals (Sweden)

    Eric N. Powell

    2006-09-01

    Full Text Available The trend towards use of commercial vessels to enhance survey data requires assessment of the advantages and limitations of various options for their use. One application is to augment information on size-frequency distributions obtained in multispecies trawl surveys where stratum boundaries and sampling density are not optimal for all species. Analysis focused on ten recreationally and commercially important species: bluefish, butterfish, Loligo squid, weakfish, summer flounder, winter flounder, silver hake (whiting, black sea bass, striped bass, and scup (porgy. The commercial vessel took 59 tows in the sampled domain south of Long Island, New York and the survey vessel 18. Black sea bass, Loligo squid, and summer flounder demonstrated an onshore-offshore gradient such that smaller fish were caught disproportionately inshore and larger fish offshore. Butterfish, silver hake, and weakfish were characterized by a southwest-northeast gradient such that larger fish were caught disproportionately northeast of the southwestern-most sector. All sizes of scup, striped bass, and bluefish were caught predominately inshore. Winter flounder were caught predominately offshore. The commercial vessel was characterized by an increased frequency of large catches for most species. Consequently, patchiness was assayed to be higher by the commercial vessel in nearly all cases. The size-frequency distribution obtained by the survey vessel for six of the ten species, bluefish, butterfish, Loligo squid, summer flounder, weakfish, and silver hake, could not be obtained by chance from the size-frequency distribution obtained by the commercial vessel. The difference in sample density did not significantly influence the size-frequency distribution. Of the six species characterized by significant differences in size-frequency distribution between boats, all but one was patchy at the population level and all had one or more size classes so characterized. Although the
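    Whether one vessel's size-frequency distribution "could not be obtained by chance" from the other's is the kind of question a two-sample Kolmogorov-Smirnov test addresses. A minimal sketch follows; it is illustrative, not the paper's actual statistical procedure, and the fish-length samples are synthetic:

    ```python
    import random

    def ks_statistic(a, b):
        """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
        difference between the two empirical CDFs."""
        a, b = sorted(a), sorted(b)
        i = j = 0
        d = 0.0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                i += 1
            else:
                j += 1
            d = max(d, abs(i / len(a) - j / len(b)))
        return d

    # Synthetic fish lengths (cm): one sample from the same population as
    # the "survey" catch, one from a population shifted toward larger fish.
    random.seed(3)
    survey = [random.gauss(30.0, 5.0) for _ in range(1000)]
    same = [random.gauss(30.0, 5.0) for _ in range(1000)]
    shifted = [random.gauss(33.0, 5.0) for _ in range(1000)]

    d_same = ks_statistic(survey, same)        # small: same distribution
    d_shifted = ks_statistic(survey, shifted)  # large: distributions differ
    ```

    The statistic would then be compared with its critical value (or a permutation reference) at the chosen significance level to decide whether the two size-frequency distributions differ.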

  11. Simulation of the measure of the microparticle size distribution in two dimensions

    International Nuclear Information System (INIS)

    Lameiras, F.S.; Pinheiro, P.

    1987-01-01

    Different size distributions of plane figures were generated in a computer as a simply connected network. These size distributions were measured by the Saltykov method for two dimensions. The comparison between the generated and measured distributions showed that the Saltykov method tends to measure a larger scatter than the real one and to shift the maximum of the real distribution toward larger diameters. These errors were determined by means of the ratio between the perimeter of the figures per unit area measured directly and the perimeter calculated from the size distribution obtained using the Saltykov method. (Author) [pt

  12. Effect of Particle Size Distribution on Slurry Rheology: Nuclear Waste Simulant Slurries

    International Nuclear Information System (INIS)

    Chun, Jaehun; Oh, Takkeun; Luna, Maria L.; Schweiger, Michael J.

    2011-01-01

    Controlling the rheological properties of slurries has been of great interest in various industries such as cosmetics, ceramic processing, and nuclear waste treatment. Many physicochemical parameters, such as particle size, pH, ionic strength, and mass/volume fraction of particles, can influence the rheological properties of slurry. Among such parameters, the particle size distribution of slurry would be especially important for nuclear waste treatment because most nuclear waste slurries show a broad particle size distribution. We studied the rheological properties of several different low activity waste nuclear simulant slurries having different particle size distributions under high salt and high pH conditions. Using rheological and particle size analysis, it was found that the percentage of colloid-sized particles in slurry appears to be a key factor for rheological characteristics and the efficiency of rheological modifiers. This behavior was shown to be coupled with an existing electrostatic interaction between particles under a low salt concentration. Our study suggests that one may need to implement the particle size distribution as a critical factor to understand and control rheological properties in nuclear waste treatment plants, such as the U.S. Department of Energy's Hanford and Savannah River sites, because the particle size distributions significantly vary over different types of nuclear waste slurries.

  13. Bimodal Nanoparticle Size Distributions Produced by Laser Ablation of Microparticles in Aerosols

    International Nuclear Information System (INIS)

    Nichols, William T.; Malyavanatham, Gokul; Henneke, Dale E.; O'Brien, Daniel T.; Becker, Michael F.; Keto, John W.

    2002-01-01

    Silver nanoparticles were produced by laser ablation of a continuously flowing aerosol of microparticles in nitrogen at varying laser fluences. Transmission electron micrographs were analyzed to determine the effect of laser fluence on the nanoparticle size distribution. These distributions exhibited bimodality, with a large number of particles in a mode at small sizes (3–6 nm) and a second, less populated mode at larger sizes (11–16 nm). Both modes shifted to larger sizes with increasing laser fluence, with the small-size mode shifting by 35% and the larger-size mode by 25% over a fluence range of 0.3–4.2 J/cm². Size histograms for each mode were found to be well represented by log-normal distributions. The distribution of mass displayed a striking shift from the large to the small size mode with increasing laser fluence. These results are discussed in terms of a model of nanoparticle formation from two distinct laser-solid interactions. Initially, laser vaporization of material from the surface leads to condensation of nanoparticles in the ambient gas. Material evaporation occurs until the plasma breakdown threshold of the microparticles is reached, generating a shock wave that propagates through the remaining material. Rapid condensation of the vapor in the low-pressure region occurs behind the traveling shock wave. Measurement of particle size distributions versus gas pressure in the ablation region, as well as versus microparticle feedstock size, confirmed the assignment of the larger size mode to surface vaporization and the smaller size mode to shock-formed nanoparticles.
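A bimodal size distribution of this kind, with each mode log-normal, can be sketched numerically. The weights and mode positions below are illustrative values loosely inspired by the 3–6 nm and 11–16 nm ranges quoted above, not the measured histograms:

```python
import numpy as np

def lognormal_pdf(d, mu, sigma):
    """PDF of a log-normal distribution in linear diameter d (nm)."""
    return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))

def bimodal_pdf(d, w, mu1, s1, mu2, s2):
    """Weighted sum of two log-normal modes (w = fraction in the small-size mode)."""
    return w * lognormal_pdf(d, mu1, s1) + (1 - w) * lognormal_pdf(d, mu2, s2)

# Illustrative parameters: a dominant mode near 4.5 nm and a minor mode near 13 nm
d = np.linspace(1.0, 30.0, 2901)
pdf = bimodal_pdf(d, w=0.85, mu1=np.log(4.5), s1=0.25, mu2=np.log(13.0), s2=0.20)

# Locate the two local maxima numerically (slope changes from + to -)
rising = np.diff(pdf) > 0
peaks = np.where(rising[:-1] & ~rising[1:])[0] + 1
print(d[peaks])  # two modes, slightly below 4.5 and 13 nm (the lognormal mode is exp(mu - sigma^2))
```

The number-weighted PDF above can be converted to a mass distribution by weighting with d³, which is what shifts apparent dominance between the two modes.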

  14. The HayWired Earthquake Scenario—Earthquake Hazards

    Science.gov (United States)

    Detweiler, Shane T.; Wein, Anne M.

    2017-04-24

    The HayWired scenario is a hypothetical earthquake sequence that is being used to better understand hazards for the San Francisco Bay region during and after an earthquake of magnitude 7 on the Hayward Fault. The 2014 Working Group on California Earthquake Probabilities calculated that there is a 33-percent likelihood of a large (magnitude 6.7 or greater) earthquake occurring on the Hayward Fault within three decades. A large Hayward Fault earthquake will produce strong ground shaking, permanent displacement of the Earth’s surface, landslides, liquefaction (soils becoming liquid-like during shaking), and subsequent fault slip, known as afterslip, and earthquakes, known as aftershocks. The most recent large earthquake on the Hayward Fault occurred on October 21, 1868, and it ruptured the southern part of the fault. The 1868 magnitude-6.8 earthquake occurred when the San Francisco Bay region had far fewer people, buildings, and infrastructure (roads, communication lines, and utilities) than it does today, yet the strong ground shaking from the earthquake still caused significant building damage and loss of life. The next large Hayward Fault earthquake is anticipated to affect thousands of structures and disrupt the lives of millions of people. Earthquake risk in the San Francisco Bay region has been greatly reduced as a result of previous concerted efforts; for example, tens of billions of dollars of investment in strengthening infrastructure was motivated in large part by the 1989 magnitude-6.9 Loma Prieta earthquake. To build on efforts to reduce earthquake risk in the San Francisco Bay region, the HayWired earthquake scenario comprehensively examines the earthquake hazards to help provide the crucial scientific information that the San Francisco Bay region can use to prepare for the next large earthquake. The HayWired Earthquake Scenario—Earthquake Hazards volume describes the strong ground shaking modeled in the scenario and the hazardous movements of

  15. Napa Earthquake impact on water systems

    Science.gov (United States)

    Wang, J.

    2014-12-01

    The South Napa earthquake occurred in Napa, California, on August 24 at 3 a.m. local time, with a magnitude of 6.0. It was the largest earthquake in the San Francisco Bay Area since the 1989 Loma Prieta earthquake. Economic losses topped $1 billion. Winemakers cleaned up and estimated the damage to tourism; around 15,000 cases of cabernet poured out at the Hess Collection. Earthquakes can raise water pollution risks and could cause a water crisis. California has suffered water shortages in recent years, so understanding how to prevent groundwater and surface-water pollution caused by earthquakes is valuable. This research gives a clear view of the drinking water system in California and pollution of river systems, as well as an estimate of earthquake impacts on water supply. The Sacramento-San Joaquin River Delta (close to Napa) is the center of the state's water distribution system, delivering fresh water to more than 25 million residents and 3 million acres of farmland. Delta water conveyed through a network of levees is crucial to Southern California. The drought has significantly curtailed water exports, and salt water intrusion has reduced fresh water outflows. Strong shaking from a nearby earthquake can liquefy saturated, loose, sandy soils and could potentially damage major Delta levee systems near Napa. The Napa earthquake is a wake-up call for Southern California: it could potentially damage the freshwater supply system.

  16. Austenite Grain Size Estimation from Chord Lengths of Logarithmic-Normal Distribution

    Directory of Open Access Journals (Sweden)

    Adrian H.

    2017-12-01

    A linear section of grains in a polyhedral material microstructure is a system of chords. The mean length of the chords is the linear grain size of the microstructure. For the prior austenite grains of low-alloy structural steels, the chord length is a random variable with a gamma or logarithmic-normal distribution. Statistical grain size estimation belongs to the problems of quantitative metallography. The so-called point estimation is a well-known procedure. The interval estimation (grain size confidence interval) for the gamma distribution was given elsewhere; for the logarithmic-normal distribution it is the subject of the present contribution. The statistical analysis is analogous to that for the gamma distribution.
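For logarithmic-normally distributed chord lengths, a point estimate and an approximate confidence interval for the mean chord length (the linear grain size) can be sketched with the generic Cox approximation for the mean of a lognormal; this is an illustration, not the estimator derived in the paper, and the chord data are synthetic:

```python
import numpy as np
from scipy import stats

def lognormal_mean_ci(chords, alpha=0.05):
    """Cox-method confidence interval for the mean of log-normally
    distributed chord lengths (the linear grain size)."""
    y = np.log(chords)
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    theta = ybar + s2 / 2                                  # log of the lognormal mean
    se = np.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))         # Cox standard error
    z = stats.norm.ppf(1 - alpha / 2)
    return np.exp(theta), np.exp(theta - z * se), np.exp(theta + z * se)

# Synthetic chord sample: median 20 um, log-scale spread 0.5
rng = np.random.default_rng(0)
chords = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=400)
mean, lo, hi = lognormal_mean_ci(chords)
print(f"mean grain size {mean:.1f} um, 95% CI [{lo:.1f}, {hi:.1f}] um")
```

The true mean of this synthetic population is 20·exp(0.5²/2) ≈ 22.7 µm; the interval narrows with the number of measured chords roughly as 1/√n.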

  17. Probabilistic tsunami hazard assessment based on the long-term evaluation of subduction-zone earthquakes along the Sagami Trough, Japan

    Science.gov (United States)

    Hirata, K.; Fujiwara, H.; Nakamura, H.; Osada, M.; Ohsumi, T.; Morikawa, N.; Kawai, S.; Maeda, T.; Matsuyama, H.; Toyama, N.; Kito, T.; Murata, Y.; Saito, R.; Takayama, J.; Akiyama, S.; Korenaga, M.; Abe, Y.; Hashimoto, N.; Hakamata, T.

    2017-12-01

    For the forthcoming large earthquakes along the Sagami Trough, where the Philippine Sea Plate is subducting beneath the northeast Japan arc, the Earthquake Research Committee (ERC)/Headquarters for Earthquake Research Promotion, Japanese government (2014a) assessed that M7- and M8-class earthquakes will occur there and defined the possible extent of the earthquake source areas. They assessed occurrence probabilities of 70% and 0%–5% within the next 30 years (from Jan. 1, 2014), respectively, for the M7- and M8-class earthquakes. First, we set 10 and 920 possible earthquake source areas (ESAs), respectively, for M8- and M7-class earthquakes. Next, we constructed 125 characterized earthquake fault models (CEFMs) and 938 CEFMs, respectively, for M8- and M7-class earthquakes, based on the "tsunami recipe" of ERC (2017) (Kitoh et al., 2016, JpGU). All the CEFMs are allowed to have a large-slip area to express fault slip heterogeneity. For all the CEFMs, we calculate tsunamis by solving a nonlinear long-wave equation with a finite-difference method, including runup calculation, over a nesting grid system with a minimum grid size of 50 meters. Finally, we re-distributed the occurrence probability over all CEFMs (Abe et al., 2014, JpGU) and gathered the excess probabilities for variable tsunami heights, calculated from all the CEFMs, at every observation point along the Pacific coast to obtain the probabilistic tsunami hazard assessment (PTHA). We incorporated aleatory uncertainties inherent in the tsunami calculation and in earthquake fault slip heterogeneity. We considered two kinds of probabilistic hazard models: a "present-time hazard model," under the assumption that earthquake occurrence basically follows a renewal process based on a BPT distribution when the latest faulting time is known, and a "long-time averaged hazard model," under the assumption that earthquake occurrence follows a stationary Poisson process. We fixed our viewpoint, for example, on the probability that the tsunami height will exceed 3 meters at coastal points in next
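Under the stationary Poisson ("long-time averaged") model, a mean recurrence rate translates directly into a window probability via P = 1 − exp(−λT). The sketch below uses hypothetical rates for one M8-class and several M7-class sources, not the ERC values:

```python
import math

def poisson_prob(rate_per_year, window_years):
    """Probability of at least one occurrence in the window under a
    stationary Poisson process (the 'long-time averaged' hazard model)."""
    return 1.0 - math.exp(-rate_per_year * window_years)

# Hypothetical annual rates: one M8-class source and three M7-class sources,
# treated as independent Poissonian sources.
rates = [1 / 400.0] + [1 / 120.0] * 3
p_each = [poisson_prob(r, 30) for r in rates]

# Probability that at least one source produces an event in 30 years
p_any = 1.0 - math.prod(1.0 - p for p in p_each)
print(p_each, p_any)
```

The same combination rule (one minus the product of non-occurrence probabilities) is how per-CEFM probabilities are aggregated into an exceedance probability at a coastal point.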

  18. Prompt Assessment of Global Earthquakes for Response (PAGER): A System for Rapidly Determining the Impact of Earthquakes Worldwide

    Science.gov (United States)

    Earle, Paul S.; Wald, David J.; Jaiswal, Kishor S.; Allen, Trevor I.; Hearne, Michael G.; Marano, Kristin D.; Hotovec, Alicia J.; Fee, Jeremy

    2009-01-01

    Within minutes of a significant earthquake anywhere on the globe, the U.S. Geological Survey (USGS) Prompt Assessment of Global Earthquakes for Response (PAGER) system assesses its potential societal impact. PAGER automatically estimates the number of people exposed to severe ground shaking and the shaking intensity at affected cities. Accompanying maps of the epicentral region show the population distribution and estimated ground-shaking intensity. A regionally specific comment describes the inferred vulnerability of the regional building inventory and, when available, lists recent nearby earthquakes and their effects. PAGER's results are posted on the USGS Earthquake Program Web site (http://earthquake.usgs.gov/), consolidated in a concise one-page report, and sent in near real-time to emergency responders, government agencies, and the media. Both rapid and accurate results are obtained through manual and automatic updates of PAGER's content in the hours following significant earthquakes. These updates incorporate the most recent estimates of earthquake location, magnitude, faulting geometry, and first-hand accounts of shaking. PAGER relies on a rich set of earthquake analysis and assessment tools operated by the USGS and contributing Advanced National Seismic System (ANSS) regional networks. A focused research effort is underway to extend PAGER's near real-time capabilities beyond population exposure to quantitative estimates of fatalities, injuries, and displaced population.

  19. Size distribution of magnetic iron oxide nanoparticles using Warren-Averbach XRD analysis

    Science.gov (United States)

    Mahadevan, S.; Behera, S. P.; Gnanaprakash, G.; Jayakumar, T.; Philip, J.; Rao, B. P. C.

    2012-07-01

    We use the Fourier transform based Warren-Averbach (WA) analysis to separate the contributions of X-ray diffraction (XRD) profile broadening due to crystallite size and microstrain for magnetic iron oxide nanoparticles. The profile shape of the column length distribution, obtained from WA analysis, is used to analyze the shape of the magnetic iron oxide nanoparticles. From the column length distribution, the crystallite size and its distribution are estimated for these nanoparticles, and these are compared with the size distribution obtained from dynamic light scattering measurements. The crystallite size and size distribution of crystallites obtained from WA analysis are explained based on the experimental parameters employed in the preparation of these magnetic iron oxide nanoparticles. The variation of volume-weighted diameter (Dv, from WA analysis) with saturation magnetization (Ms) fits well to a core-shell model, wherein it is known that Ms = Mbulk(1 - 6g/Dv), with Mbulk the bulk magnetization of iron oxide and g the thickness of the magnetically disordered shell.
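The core-shell relation quoted above is linear in 1/Dv, so Mbulk and g can be recovered by a straight-line fit. The numbers below are assumed values for illustration, not the measured data:

```python
import numpy as np

M_BULK = 480.0   # kA/m, bulk magnetization of iron oxide (assumed)
G_TRUE = 0.9     # nm, disordered-shell thickness (assumed)

# Synthetic (Dv, Ms) pairs following the core-shell model Ms = Mbulk*(1 - 6g/Dv)
Dv = np.array([8.0, 10.0, 12.0, 15.0, 20.0])   # volume-weighted diameter, nm
Ms = M_BULK * (1.0 - 6.0 * G_TRUE / Dv)

# Ms is linear in x = 1/Dv: Ms = Mbulk - (6*g*Mbulk)*x, so a degree-1 fit
# recovers Mbulk from the intercept and g from the slope.
slope, intercept = np.polyfit(1.0 / Dv, Ms, 1)
m_bulk_fit = intercept
g_fit = -slope / (6.0 * m_bulk_fit)
print(m_bulk_fit, g_fit)
```

With noisy measurements the same fit gives uncertainty estimates for Mbulk and g via the covariance of the linear regression.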

  20. Fragment size distribution in viscous bag breakup of a drop

    Science.gov (United States)

    Kulkarni, Varun; Bulusu, Kartik V.; Plesniak, Michael W.; Sojka, Paul E.

    2015-11-01

    In this study we examine the drop size distribution resulting from the fragmentation of a single drop in the presence of a continuous air jet. Specifically, we study the effect of the Weber number, We, and the Ohnesorge number, Oh, on the disintegration process. The breakup regime considered is observed at Weber numbers above 12; fragment sizes were measured using phase Doppler anemometry. Both the number and volume fragment size probability distributions are plotted. The volume probability distribution revealed a bi-modal behavior with two distinct peaks: one corresponding to the rim fragments and the other to the bag fragments. This behavior was suppressed in the number probability distribution. Additionally, we employ an in-house particle detection code to isolate the rim fragment size distribution from the total probability distributions. Our experiments showed that the bag fragments are smaller in diameter and larger in number, while the rim fragments are larger in diameter and smaller in number. Furthermore, with increasing We for a given Oh, we observe a larger number of small-diameter drops and a smaller number of large-diameter drops. On the other hand, with increasing Oh for a fixed We, the opposite is seen.
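The suppression of bimodality in the number distribution but not the volume distribution follows from the d³ weighting: a few large rim fragments carry most of the volume. A sketch with hypothetical fragment populations (all sizes and counts below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fragment populations: many small bag fragments, few large rim fragments
bag = rng.lognormal(np.log(40.0), 0.3, size=9000)    # diameters, um
rim = rng.lognormal(np.log(250.0), 0.2, size=1000)   # diameters, um
d = np.concatenate([bag, rim])

bins = np.logspace(1, 3, 40)
n_pdf, _ = np.histogram(d, bins=bins, density=True)                   # number-weighted
v_pdf, _ = np.histogram(d, bins=bins, weights=d ** 3, density=True)   # volume ~ d^3

# Number fraction vs volume fraction of the rim mode (taken here as d > 100 um)
num_frac = np.mean(d > 100.0)
vol_frac = np.sum(d[d > 100.0] ** 3) / np.sum(d ** 3)
print(num_frac, vol_frac)  # rim fragments are a small number fraction but dominate the volume
```

The rim mode is roughly 10% of fragments by count here, yet carries well over 90% of the volume, so it produces a clear second peak only in the volume-weighted histogram.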

  1. The USGS Earthquake Notification Service (ENS): Customizable notifications of earthquakes around the globe

    Science.gov (United States)

    Wald, Lisa A.; Wald, David J.; Schwarz, Stan; Presgrave, Bruce; Earle, Paul S.; Martinez, Eric; Oppenheimer, David

    2008-01-01

    At the beginning of 2006, the U.S. Geological Survey (USGS) Earthquake Hazards Program (EHP) introduced a new automated Earthquake Notification Service (ENS) to take the place of the National Earthquake Information Center (NEIC) "Bigquake" system and the various other individual EHP e-mail list-servers for separate regions in the United States. These included northern California, southern California, and the central and eastern United States. ENS is a "one-stop shopping" system that allows Internet users to subscribe to flexible and customizable notifications for earthquakes anywhere in the world. The customization capability allows users to define the what (magnitude threshold), the when (day and night thresholds), and the where (specific regions) for their notifications. Customization is achieved by employing a per-user based request profile, allowing the notifications to be tailored for each individual's requirements. Such earthquake-parameter-specific custom delivery was not possible with simple e-mail list-servers. Now that event and user profiles are in a structured query language (SQL) database, additional flexibility is possible. At the time of this writing, ENS had more than 114,000 subscribers, with more than 200,000 separate user profiles. On a typical day, more than 188,000 messages get sent to a variety of widely distributed users for a wide range of earthquake locations and magnitudes. The purpose of this article is to describe how ENS works, highlight the features it offers, and summarize plans for future developments.

  2. Size Distribution Imaging by Non-Uniform Oscillating-Gradient Spin Echo (NOGSE MRI.

    Directory of Open Access Journals (Sweden)

    Noam Shemesh

    Objects making up complex porous systems in Nature usually span a range of sizes. These size distributions play fundamental roles in defining the physicochemical, biophysical and physiological properties of a wide variety of systems - ranging from advanced catalytic materials to Central Nervous System diseases. Accurate and noninvasive measurements of size distributions in opaque, three-dimensional objects have thus remained a long-standing and important challenge. Herein we describe how a recently introduced diffusion-based magnetic resonance methodology, Non-Uniform-Oscillating-Gradient-Spin-Echo (NOGSE), can determine such distributions noninvasively. The method relies on its ability to probe confining lengths with a (length)^6 parametric sensitivity, in a constant-time, constant-number-of-gradients fashion; combined, these attributes provide sufficient sensitivity for characterizing the underlying distributions in μm-scaled cellular systems. Theoretical derivations and simulations are presented to verify NOGSE's ability to faithfully reconstruct size distributions through suitable modeling of their distribution parameters. Experiments in yeast cell suspensions - where the ground truth can be determined from ancillary microscopy - corroborate these trends experimentally. Finally, by appending an imaging acquisition to the NOGSE protocol, novel MRI maps of cellular size distributions were collected from a mouse brain. The ensuing micro-architectural contrasts successfully delineated distinctive hallmark anatomical sub-structures, in both white matter and gray matter tissues, in a non-invasive manner. Such findings highlight NOGSE's potential for characterizing aberrations in cellular size distributions upon disease, or during normal processes such as development.

  3. Global patterns of city size distributions and their fundamental drivers.

    Directory of Open Access Journals (Sweden)

    Ethan H Decker

    2007-09-01

    Urban areas and their voracious appetites are increasingly dominating the flows of energy and materials around the globe. Understanding the size distribution and dynamics of urban areas is vital if we are to manage their growth and mitigate their negative impacts on global ecosystems. For over 50 years, city size distributions have been assumed to universally follow a power function, and many theories have been put forth to explain what has become known as Zipf's law (the instance where the exponent of the power function equals unity). Most previous studies, however, only include the largest cities that comprise the tail of the distribution. Here we show that national, regional and continental city size distributions, whether based on census data or inferred from cluster areas of remotely-sensed nighttime lights, are in fact lognormally distributed through the majority of cities and only approach power functions for the largest cities in the distribution tails. To explore generating processes, we use a simple model incorporating only two basic human dynamics, migration and reproduction, that nonetheless generates distributions very similar to those found empirically. Our results suggest that macroscopic patterns of human settlements may be far more constrained by fundamental ecological principles than by more fine-scale socioeconomic factors.
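A minimal sketch of the two dynamics named above, assuming a Gibrat-style multiplicative growth rule for "reproduction" and a small additive inflow for "migration"; this illustrates the generating principle (multiplicative growth pushes log sizes toward a normal, i.e. a lognormal size distribution), and is not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters: 5000 settlements, 200 growth periods
n_cities, steps = 5000, 200
pop = np.full(n_cities, 1000.0)
for _ in range(steps):
    growth = rng.normal(loc=0.0, scale=0.05, size=n_cities)  # proportional growth shocks
    pop *= np.exp(growth)                                    # "reproduction"
    pop += rng.exponential(scale=1.0, size=n_cities)         # "migration" inflow

# Multiplicative shocks accumulate additively in log space, so log sizes
# should be roughly symmetric (normal); check the skewness of log(pop).
logp = np.log(pop)
skew = np.mean(((logp - logp.mean()) / logp.std()) ** 3)
print(round(skew, 2))  # mild skew; a pure Gibrat process would give exactly lognormal sizes
```

The lognormal body with only an approximate power tail for the largest settlements is exactly the empirical pattern the abstract reports.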

  4. The limit distribution of the maximum increment of a random walk with regularly varying jump size distribution

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Rackauskas, Alfredas

    2010-01-01

    In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space.
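The Fréchet limit can be illustrated by simulation: for regularly varying jumps, the maximum is dominated by the largest single jump, and the classically normalized maximum of n Pareto(α) jumps converges to the Fréchet law exp(−x^−α). The tail index α = 1.5 below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, n, trials = 1.5, 2000, 4000

# Pareto(alpha) jumps (survival function x^-alpha on [1, inf)) are regularly varying
jumps = rng.pareto(alpha, size=(trials, n)) + 1.0
m = jumps.max(axis=1) / n ** (1.0 / alpha)   # classical extreme-value normalization

# Compare the empirical CDF at a few points with the Frechet CDF exp(-x^-alpha)
for x in (0.5, 1.0, 2.0):
    emp = np.mean(m <= x)
    theo = np.exp(-x ** -alpha)
    print(x, round(emp, 3), round(theo, 3))
```

The agreement to within sampling error reflects the classical result that maxima of regularly varying samples lie in the Fréchet domain of attraction.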

  5. Probabilistic Models For Earthquakes With Large Return Periods In Himalaya Region

    Science.gov (United States)

    Chaudhary, Chhavi; Sharma, Mukat Lal

    2017-12-01

    Determination of the frequency of large earthquakes is of paramount importance for seismic risk assessment, as large events contribute a significant fraction of the total deformation, and these long-return-period events with low probability of occurrence are not easily captured by classical distributions. Generally, with a small catalogue, these larger events follow a different distribution function from the smaller and intermediate events. It is thus of special importance to use statistical methods that analyse as closely as possible the range of extreme values, or the tail of the distribution, in addition to the main distribution. The generalised Pareto distribution family is widely used for modelling events that cross a specified threshold value. The Pareto, Truncated Pareto, and Tapered Pareto are special cases of the generalised Pareto family. In this work, the probability of earthquake occurrence has been estimated using the Pareto, Truncated Pareto, and Tapered Pareto distributions. As a case study, the Himalayas, where orogeny generates large earthquakes and which is one of the most active zones of the world, has been considered. The whole Himalayan region has been divided into five seismic source zones according to seismotectonics and clustering of events. Estimated probabilities of earthquake occurrence have also been compared with the modified Gutenberg-Richter distribution and the characteristic recurrence distribution. The statistical analysis reveals that the Tapered Pareto distribution better describes seismicity for the seismic source zones in comparison to the other distributions considered in the present study.
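The three members of the Pareto family named above differ in how they treat the largest events, which is what matters for long-return-period hazard. A sketch of their survival functions with illustrative parameters (not the fitted Himalayan values):

```python
import numpy as np

def pareto_sf(m, beta, m0):
    """Survival function of the pure Pareto: unbounded power-law tail."""
    return (m / m0) ** -beta

def truncated_pareto_sf(m, beta, m0, m_max):
    """Pareto truncated at m_max: no events above the corner size."""
    num = m ** -beta - m_max ** -beta
    den = m0 ** -beta - m_max ** -beta
    return np.where(m < m_max, num / den, 0.0)

def tapered_pareto_sf(m, beta, m0, m_c):
    """Tapered Pareto: power law with an exponential roll-off at corner m_c."""
    return (m / m0) ** -beta * np.exp((m0 - m) / m_c)

m = np.linspace(1.0, 50.0, 200)
p_pure = pareto_sf(m, 1.0, 1.0)
p_trunc = truncated_pareto_sf(m, 1.0, 1.0, 40.0)
p_taper = tapered_pareto_sf(m, 1.0, 1.0, 20.0)
print(p_pure[-1], p_taper[-1])  # the taper suppresses the largest events
```

The pure Pareto assigns non-negligible probability to arbitrarily large events; the truncated form cuts them off abruptly, while the tapered form rolls them off smoothly, which is why it often fits regional extreme-event catalogues best.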

  6. Remote Laser Diffraction Particle Size Distribution Analyzer

    Energy Technology Data Exchange (ETDEWEB)

    Batcheller, Thomas Aquinas; Huestis, Gary Michael; Bolton, Steven Michael

    2001-03-01

    In support of a radioactive slurry sampling and physical characterization task, an “off-the-shelf” laser diffraction (classical light scattering) particle size analyzer was utilized for remote particle size distribution (PSD) analysis. Spent nuclear fuel was previously reprocessed at the Idaho Nuclear Technology and Engineering Center (INTEC—formerly recognized as the Idaho Chemical Processing Plant), which is on DOE’s INEEL site. The acidic, radioactive aqueous raffinate streams from these processes were transferred to 300,000-gallon stainless steel storage vessels located in the INTEC Tank Farm area. Due to the transfer piping configuration in these vessels, complete removal of the liquid cannot be achieved. Consequently, a “heel” slurry remains at the bottom of an “emptied” vessel. Particle size distribution characterization of the settled solids in this remaining heel slurry, as well as suspended solids in the tank liquid, is the goal of this remote PSD analyzer task. A Horiba Instruments Inc. Model LA-300 PSD analyzer, which has a 0.1 to 600 micron measurement range, was modified for remote application in a “hot cell” (gamma radiation) environment. This technology provides rapid and simple PSD analysis, especially in the fine and microscopic particle size regime. Particle size analysis of these radioactive slurries in this smaller range was not previously achievable—making this technology far superior to the traditional methods used. Successful acquisition of this data, in conjunction with other characterization analyses, provides important information that can be used in the myriad of potential radioactive waste management alternatives.

  7. Some regularity of the grain size distribution in nuclear fuel with controllable structure

    International Nuclear Information System (INIS)

    Loktev, Igor

    2008-01-01

    It is known that fission gas release from ceramic nuclear fuel depends on the average grain size. To increase grain size, additives that activate sintering of the pellets are used. However, the grain size distribution also influences fission gas release: fuels with different structures but the same average grain size show different fission gas release. Other structural elements that influence the operational behavior of the fuel are pores and inclusions. Earlier, in Kyoto, questions of the grain size distribution of fuel with a 'natural' structure were discussed. Some regularities of the grain size distribution of fuel with a controllable structure and a large average grain size are considered in this report. The influence of inclusions and pores on the error of automated determination of structure parameters is shown. A criterion describing the behavior of fuel with a specific grain size distribution is offered.

  8. Ergodicity and Phase Transitions and Their Implications for Earthquake Forecasting.

    Science.gov (United States)

    Klein, W.

    2017-12-01

    Forecasting earthquakes or even predicting the statistical distribution of events on a given fault is extremely difficult. One reason for this difficulty is the large number of fault characteristics that can affect the distribution and timing of events. The range of stress transfer, the level of noise, and the nature of the friction force all influence the type of events, and the values of these parameters can vary from fault to fault and also vary with time. In addition, the geometrical structure of the faults and the correlation of events on different faults play an important role in determining event size and distribution. Another reason for the difficulty is that the important fault characteristics are not easily measured. The noise level, fault structure, stress transfer range, and the nature of the friction force are extremely difficult, if not impossible, to ascertain. Given this lack of information, one of the most useful approaches to understanding the effect of fault characteristics and the way they interact is to develop and investigate models of faults and fault systems. In this talk I will present results obtained from a series of models of varying abstraction and compare them with data from actual faults. We are able to provide a physical basis for several observed phenomena such as the earthquake cycle, the fact that some faults display Gutenberg-Richter scaling and others do not, and that some faults exhibit quasi-periodic characteristic events and others do not. I will also discuss some surprising results, such as the fact that some faults are in thermodynamic equilibrium depending on the stress transfer range and the noise level. An example of an important conclusion that can be drawn from this work is that the statistical distribution of earthquake events can vary from fault to fault and that an indication of an impending large event, such as accelerating moment release, may be relevant on some faults but not on others.

  9. Growing axons analysis by using Granulometric Size Distribution

    International Nuclear Information System (INIS)

    Gonzalez, Mariela A; Ballarin, Virginia L; Rapacioli, Melina; CelIn, A R; Sanchez, V; Flores, V

    2011-01-01

    Neurite growth (neuritogenesis) in vitro is a common methodology in the field of developmental neurobiology. Morphological analyses of growing neurites are usually difficult because their thinness and low contrast usually prevent clear observation of their shape, number, length and spatial orientation. This paper presents the use of the granulometric size distribution in order to automatically obtain information about the shape, size and spatial orientation of growing axons in tissue cultures. The results presented here show that the granulometric size distribution is a very useful morphological tool, since it allows the automatic detection of growing axons and the precise characterization of a relevant parameter indicative of the axonal growth spatial orientation, namely the angle of deviation of the growing direction. The developed algorithms automatically quantify this orientation, facilitating the analysis of these images, which is important given the large number of images that need to be processed for this type of study.
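A granulometric size distribution of the general kind used here can be sketched with successive morphological openings: the area surviving an opening of increasing size drops whenever the structuring element outgrows an object, and the resulting "pattern spectrum" marks the object scales. The toy binary image below stands in for a micrograph:

```python
import numpy as np
from scipy import ndimage

# Synthetic binary image: one small and one large square "object"
img = np.zeros((64, 64), dtype=bool)
img[5:10, 5:10] = True      # 5x5 object
img[30:45, 30:45] = True    # 15x15 object

# Granulometry: foreground area surviving an opening of increasing size
sizes = range(1, 20)
surviving = [ndimage.binary_opening(img, structure=np.ones((s, s))).sum()
             for s in sizes]

# Pattern spectrum: area removed at each size step; its peaks mark object sizes
spectrum = -np.diff(surviving)
print([int(s) for s in spectrum])
```

Here the spectrum peaks at structuring-element sizes 6 and 16, i.e. just past the 5- and 15-pixel object widths; oriented (line-shaped) structuring elements extend the same idea to the directional measurements used for axon orientation.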

  10. Size distribution and structure of Barchan dune fields

    Directory of Open Access Journals (Sweden)

    O. Durán

    2011-07-01

    Barchans are isolated mobile dunes often organized in large dune fields. Dune fields seem to present a characteristic dune size and spacing, which suggests a cooperative behavior based on dune interaction. In Duran et al. (2009), we proposed that the redistribution of sand by collisions between dunes is a key element for the stability and size selection of barchan dune fields. This approach was based on a mean-field model ignoring the spatial distribution of dune fields. Here, we present a simplified dune field model that includes the spatial evolution of individual dunes as well as their interaction through sand exchange and binary collisions. As a result, the dune field evolves towards a steady state that depends on the boundary conditions. Comparing our results with measurements of Moroccan dune fields, we find that the simulated fields have the same dune size distribution as the real fields but fail to reproduce their homogeneity along the wind direction.

  11. The size distribution of dissolved uranium in natural waters

    International Nuclear Information System (INIS)

    Mann, D.K.; Wong, G.T.F.

    1987-01-01

    The size distribution of dissolved uranium in natural waters is poorly known. Some fraction of dissolved uranium is known to associate with organic matter, which has a wide range of molecular weights. The presence of inorganic colloidal uranium has not been reported. Ultrafiltration has been used to quantify the size distribution of a number of constituents, such as dissolved organic carbon, selenium, and some trace metals, in both organic and/or inorganic forms. The authors have applied this technique to dissolved uranium, and the data are reported here.

  12. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment. Here, we test this assumption for different types of superparamagnetic iron oxide nanoparticles in the 5–20 nm range, by multimodal fitting of magnetization curves using the MINORIM inversion method. The particles are studied while in dilute colloidal dispersion in a liquid, thereby preventing hysteresis and diminishing the effects of magnetic anisotropy on the interpretation of the magnetization curves. For two different types of well crystallized particles, the magnetic distribution is indeed log-normal, as expected from the physical size distribution. However, two other types of particles, with twinning defects or inhomogeneous oxide phases, are found to have a bimodal magnetic distribution. Our qualitative explanation is that relatively low fields are sufficient to begin aligning the particles in the liquid on the basis of their net dipole moment, whereas higher fields are required to align the smaller domains or less magnetic phases inside the particles. - Highlights: • Multimodal fits of dilute ferrofluids reveal when the particles are multidomain. • No a priori shape of the distribution is assumed by the MINORIM inversion method. • Well crystallized particles have log-normal TEM and magnetic size distributions. • Defective particles can combine a monomodal size and a bimodal dipole moment
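The magnetization-curve analysis described above has a simple forward model: in a dilute superparamagnetic dispersion each dipole m contributes a Langevin term m·L(mB/kT). The sketch below evaluates this forward model for a hypothetical bimodal set of dipole moments (the MINORIM inversion itself is not reproduced here):

```python
import numpy as np

KT = 4.1e-21   # thermal energy at room temperature, J

def magnetization(B, moments, weights):
    """Normalized magnetization of a dilute superparamagnetic ensemble:
    each dipole m contributes m * L(m*B/kT), L the Langevin function."""
    x = np.outer(B, moments) / KT            # shape (fields, modes)
    L = 1.0 / np.tanh(x) - 1.0 / x           # Langevin function
    return (L * (weights * moments)).sum(axis=1) / (weights * moments).sum()

B = np.linspace(0.001, 1.0, 500)   # applied field, tesla

# Hypothetical bimodal moment distribution: many weak dipoles, few strong ones
moments = np.array([2e-20, 2e-19])   # A m^2
weights = np.array([0.9, 0.1])       # number fractions

M = magnetization(B, moments, weights)
# The strong dipoles align at low field (initial rise); the weak dipoles need
# much higher fields, flattening the approach to saturation.
print(M[0], M[-1])
```

Fitting measured curves with a multimodal version of this forward model is what reveals whether the underlying dipole-moment distribution is monomodal or bimodal.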

  13. The Differences in Source Dynamics Between Intermediate-Depth and Deep Earthquakes: A Comparative Study Between the 2014 Rat Islands Intermediate-Depth Earthquake and the 2015 Bonin Islands Deep Earthquake

    Science.gov (United States)

    Twardzik, C.; Ji, C.

    2015-12-01

    It has been proposed that the mechanisms for intermediate-depth and deep earthquakes might be different. While previous extensive seismological studies suggested that such potential differences do not significantly affect the scaling relationships of earthquake parameters, there have been few investigations of their dynamic characteristics, especially fracture energy. In this work, the 2014 Mw7.9 Rat Islands intermediate-depth (105 km) earthquake and the 2015 Mw7.8 Bonin Islands deep (680 km) earthquake are studied from two different perspectives. First, their kinematic rupture models are constrained using teleseismic body waves. Our analysis reveals that the Rat Islands earthquake broke the entire cold core of the subducting slab, defined by the depth of the 650°C isotherm. The inverted stress drop is 4 MPa, comparable to that of intraplate earthquakes at shallow depths. On the other hand, the kinematic rupture model of the Bonin Islands earthquake, which occurred in a region lacking seismicity for the past forty years according to the GCMT catalog, exhibits an energetic rupture within a 35 km by 30 km slip patch and a high stress drop of 24 MPa. It is of interest to note that although complex rupture patterns are allowed to match the observations, the inverted slip distributions of these two earthquakes are simple enough to be approximated as the summation of a few circular/elliptical slip patches. Thus, we subsequently investigate their dynamic rupture models. We use a simple modelling approach in which we assume that dynamic rupture propagation obeys a slip-weakening friction law, and we describe the distribution of stress and friction on the fault as a set of elliptical patches. We constrain the three dynamic parameters (yield stress, background stress prior to rupture, and slip-weakening distance), as well as the shape of the elliptical patches, directly from teleseismic body wave observations. The study would help us
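The linear slip-weakening friction law assumed in such dynamic inversions can be written down compactly. This is a sketch of the law itself; the parameter values (yield stress, dynamic stress, slip-weakening distance Dc) are illustrative, not the values inverted in the study.

```python
import numpy as np

def slip_weakening_stress(slip, tau_s, tau_d, Dc):
    """Linear slip-weakening friction: fault strength drops linearly
    from the yield stress tau_s to the dynamic level tau_d over the
    slip-weakening distance Dc, then stays at tau_d."""
    slip = np.asarray(slip, dtype=float)
    return np.where(slip < Dc, tau_s - (tau_s - tau_d) * slip / Dc, tau_d)

s = np.linspace(0.0, 2.0, 5)                          # slip (m)
tau = slip_weakening_stress(s, tau_s=30e6, tau_d=10e6, Dc=0.8)

# Fracture energy per unit area for the linear law: G = 0.5*(tau_s - tau_d)*Dc
G = 0.5 * (30e6 - 10e6) * 0.8
```

The fracture energy G is the quantity the abstract singles out as poorly constrained for deep events; under the linear law it is just the area under the strength-versus-slip curve above the dynamic level.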

  14. Are range-size distributions consistent with species-level heritability?

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe; Gotelli, Nicholas; Rahbek, Carsten

    2012-01-01

    The concept of species-level heritability is widely contested. Because it is most likely to apply to emergent, species-level traits, one of the central discussions has focused on the potential heritability of geographic range size. However, a central argument against range-size heritability has been that it is not compatible with the observed shape of present-day species range-size distributions (SRDs), a claim that has never been tested. To assess this claim, we used forward simulation of range-size evolution in clades with varying degrees of range-size heritability, and compared the output...

  15. Preliminary report on Petatlan, Mexico: earthquake of 14 March 1979

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    A major earthquake, Ms = 7.6, occurred off the southern coast of Mexico near the town of Petatlan on 14 March 1979. The earthquake ruptured a 50-km-long section of the Middle American subduction zone, a seismic gap last ruptured by a major earthquake (Ms = 7.5) in 1943. Since adjacent gaps of approximately the same size have not had a large earthquake since 1911, and one of these suffered three major earthquakes in four years (1907, 1909, 1911), recurrence times for large events here are highly variable. Thus, this general area remains one of high seismic risk, and provides a focus for investigation of segmentation in the subduction processes. 2 figures.

  16. Test of Poisson Process for Earthquakes in and around Korea

    International Nuclear Information System (INIS)

    Noh, Myunghyun; Choi, Hoseon

    2015-01-01

    Since Cornell's work on probabilistic seismic hazard analysis (hereafter, PSHA), the majority of PSHA computer codes assume that earthquake occurrence is Poissonian. To the authors' knowledge, it is uncertain who first raised the issue of the Poisson process for earthquake occurrence. The systematic PSHA in Korea, led by the nuclear industry, has been carried out for more than 25 years with the assumption of the Poisson process. However, this assumption has never been tested, so such a test is of significance. We tested whether Korean earthquakes follow the Poisson process. The chi-square test with a significance level of 5% was applied. The test showed that the Poisson process could not be rejected for earthquakes of magnitude 2.9 or larger. However, it was still observed in the graphical comparison that some portion of the observed distribution deviated significantly from the Poisson distribution. We think this is due to the small number of earthquakes in the data: events of magnitude 2.9 or larger occurred only 376 times during 34 years. Therefore, the judgment on the Poisson process derived in the present study is not conclusive.
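A chi-square goodness-of-fit test of the Poisson hypothesis on yearly event counts can be sketched as follows. The counts below are synthetic stand-ins drawn at the study's average rate (376 events in 34 years), since the actual catalog is not reproduced here.

```python
import numpy as np
from scipy import stats

# Synthetic yearly counts of M >= 2.9 events over a 34-year catalog,
# drawn at the study's average rate purely for illustration.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=376 / 34, size=34)

lam = counts.mean()                          # MLE of the Poisson rate
k = np.arange(0, counts.max() + 1)
observed = np.bincount(counts, minlength=k.size).astype(float)
expected = 34 * stats.poisson.pmf(k, lam)

# Chi-square statistic; one extra degree of freedom is lost to
# estimating lam.  (In practice sparse tail bins should be pooled
# first; that step is omitted here for brevity.)
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = k.size - 2
p_value = stats.chi2.sf(chi2, dof)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3f}")
```

A p-value above the 5% significance level means the Poisson hypothesis cannot be rejected, which is the study's conclusion for the M ≥ 2.9 catalog.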

  17. New characteristics of intensity assessment of Sichuan Lushan "4.20" M s7.0 earthquake

    Science.gov (United States)

    Sun, Baitao; Yan, Peilei; Chen, Xiangzhao

    2014-08-01

    Rapid and accurate post-earthquake assessment of the macroscopic influence of seismic ground motion is of significance for earthquake emergency relief, post-earthquake reconstruction and scientific research. The seismic intensity distribution map released by the Lushan earthquake field team of the China Earthquake Administration (CEA) five days after the strong earthquake (M7.0) that occurred in Lushan County of Ya'an City, Sichuan, at 8:02 on April 20, 2013 provides a scientific basis for emergency relief, economic loss assessment and post-earthquake reconstruction. In this paper, the means for blind estimation of macroscopic intensity, field estimation of macroscopic intensity, and review of intensity, as well as the corresponding problems, are discussed in detail, and the intensity distribution characteristics of the Lushan "4.20" M7.0 earthquake and its influential factors are analyzed, providing a reference for future seismic intensity assessments.

  18. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios

    Directory of Open Access Journals (Sweden)

    A. Muhammad

    2017-12-01

    This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis, with maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan – including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal–vertical evacuation time maps – has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events.

  19. Spatial Evaluation and Verification of Earthquake Simulators

    Science.gov (United States)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators; namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
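The power-law smoothing idea can be illustrated with a small kernel that spreads each simulated event's rate over a grid, decaying with epicentral distance. The kernel form and its parameters `d0` and `q` below are assumptions for the sketch, in the spirit of ETAS spatial kernels, not the authors' calibrated values.

```python
import numpy as np

def powerlaw_rate_map(epicenters, grid_x, grid_y, d0=5.0, q=1.5):
    """Distribute each event's unit rate over the grid with a power-law
    kernel ~ (d^2 + d0^2)^(-q), normalized so every event contributes a
    total rate of one; d0 (km) softens the singularity at the epicenter."""
    X, Y = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(X, dtype=float)
    for ex, ey in epicenters:
        d2 = (X - ex) ** 2 + (Y - ey) ** 2
        kernel = (d2 + d0 ** 2) ** (-q)
        rate += kernel / kernel.sum()   # unit rate per event
    return rate

# Two synthetic fault-confined events smoothed over a 100 km x 100 km region
grid = np.linspace(0.0, 100.0, 101)
rmap = powerlaw_rate_map([(30.0, 40.0), (70.0, 60.0)], grid, grid)
```

Because each event's kernel is normalized, the total rate in the map equals the number of simulated events, while off-fault cells still receive a nonzero rate — the property needed to score observed off-fault epicenters.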

  20. Induced seismicity provides insight into why earthquake ruptures stop

    KAUST Repository

    Galis, Martin

    2017-12-21

    Injection-induced earthquakes pose a serious seismic hazard but also offer an opportunity to gain insight into earthquake physics. Currently used models relating the maximum magnitude of injection-induced earthquakes to injection parameters do not incorporate rupture physics. We develop theoretical estimates, validated by simulations, of the size of ruptures induced by localized pore-pressure perturbations and propagating on prestressed faults. Our model accounts for ruptures growing beyond the perturbed area and distinguishes self-arrested from runaway ruptures. We develop a theoretical scaling relation between the largest magnitude of self-arrested earthquakes and the injected volume and find it consistent with observed maximum magnitudes of injection-induced earthquakes over a broad range of injected volumes, suggesting that, although runaway ruptures are possible, most injection-induced events so far have been self-arrested ruptures.
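The published scaling relation for the largest self-arrested rupture takes the form M0,max = γ·ΔV^(3/2) (moment versus injected volume). The sketch below assumes that form; the prefactor `gamma` is purely illustrative, not the fitted constant, which in the study depends on stress conditions.

```python
import numpy as np

def max_self_arrested_magnitude(dV, gamma=1.5e9):
    """Moment magnitude of the largest self-arrested rupture under the
    scaling M0_max = gamma * dV**1.5 (dV in m^3, M0 in N*m); gamma here
    is an illustrative value, not a fitted constant."""
    M0 = gamma * dV ** 1.5
    return (2.0 / 3.0) * (np.log10(M0) - 9.1)   # standard Mw definition

for dV in (1e3, 1e5, 1e7):   # injected volumes in m^3
    print(f"dV = {dV:.0e} m^3 -> Mw_max = {max_self_arrested_magnitude(dV):.1f}")
```

A useful check on the 3/2 exponent: every factor of 100 in injected volume raises the maximum self-arrested magnitude by exactly two units, independent of the prefactor.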

  1. Notes on representing grain size distributions obtained by electron backscatter diffraction

    International Nuclear Information System (INIS)

    Toth, Laszlo S.; Biswas, Somjeet; Gu, Chengfan; Beausir, Benoit

    2013-01-01

    Grain size distributions measured by electron backscatter diffraction are commonly represented by histograms using either number or area fraction definitions. It is shown here that they should be presented in the form of density distribution functions for direct quantitative comparisons between different measurements. We interpret the frequently seen parabolic tails of the area distributions of bimodal grain structures, and a transformation formula between the two distributions is given in this paper. - Highlights: • Grain size distributions are represented by density functions. • The parabolic tails correspond to an equal number of grains in a bin of the histogram. • A simple transformation formula is given between number- and area-weighted distributions. • The particularities of uniform and lognormal distributions are examined.

  2. US earthquake observatories: recommendations for a new national network

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    This report is the first attempt by the seismological community to rationalize and optimize the distribution of earthquake observatories across the United States. The main aim is to increase significantly our knowledge of earthquakes and the earth's dynamics by providing access to scientifically more valuable data. Other objectives are to provide a more efficient and cost-effective system of recording and distributing earthquake data and to make as uniform as possible the recording of earthquakes in all states. The central recommendation of the Panel is that the guiding concept be established of a rationalized and integrated seismograph system consisting of regional seismograph networks run for crucial regional research and monitoring purposes in tandem with a carefully designed, but sparser, nationwide network of technologically advanced observatories. Such a national system must be thought of not only in terms of instrumentation but equally in terms of data storage, computer processing, and record availability.

  3. Short- and Long-Term Earthquake Forecasts Based on Statistical Models

    Science.gov (United States)

    Console, Rodolfo; Taroni, Matteo; Murru, Maura; Falcone, Giuseppe; Marzocchi, Warner

    2017-04-01

    Epidemic-type aftershock sequence (ETAS) models have been experimentally used to forecast the space-time earthquake occurrence rate during the sequence that followed the 2009 L'Aquila earthquake and for the 2012 Emilia earthquake sequence. These forecasts represented the first two pioneering attempts to check the feasibility of providing operational earthquake forecasting (OEF) in Italy. After the 2009 L'Aquila earthquake the Italian Department of Civil Protection nominated an International Commission on Earthquake Forecasting (ICEF) for the development of the first official OEF in Italy, which was implemented for testing purposes by the newly established "Centro di Pericolosità Sismica" (CPS, the Seismic Hazard Center) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). According to the ICEF guidelines, the system is open, transparent, reproducible and testable. The scientific information delivered by OEF-Italy is shaped in different formats according to the interested stakeholders, such as scientists, national and regional authorities, and the general public. Communication to the public is certainly the most challenging issue, and careful pilot tests are necessary to check the effectiveness of the communication strategy before opening the information to the public. With regard to long-term time-dependent earthquake forecasts, the application of a newly developed simulation algorithm to the Calabria region reproduced typical features of the time, space and magnitude behaviour of the seismicity, which can be compared with those of the real observations. These features include long-term pseudo-periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the Gutenberg-Richter distribution in the moderate and higher magnitude range.
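The temporal conditional intensity at the heart of ETAS forecasting can be sketched directly: a background rate plus a sum of Omori-law aftershock terms, each scaled by the productivity of the triggering event. All parameter values below are illustrative, not the calibrated OEF-Italy values.

```python
import numpy as np

def etas_rate(t, event_times, event_mags, mu=0.2, K=0.05,
              alpha=1.5, c=0.01, p=1.1, m0=3.0):
    """Temporal ETAS conditional intensity,
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) * (t - t_i + c)**(-p),
    summed over events with t_i < t (times in days)."""
    event_times = np.asarray(event_times, dtype=float)
    event_mags = np.asarray(event_mags, dtype=float)
    past = event_times < t
    trig = K * np.exp(alpha * (event_mags[past] - m0)) \
             * (t - event_times[past] + c) ** (-p)
    return mu + trig.sum()

times = np.array([0.0, 0.5, 0.6])     # days since sequence start
mags = np.array([5.5, 4.2, 4.0])
rate = etas_rate(1.0, times, mags)    # elevated rate after the M5.5 event
```

The rate decays back toward the background `mu` as the Omori terms fade, which is what makes short-term OEF rates strongly time-dependent during a sequence.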

  4. The magnetized sheath of a dusty plasma with grains size distribution

    International Nuclear Information System (INIS)

    Ou, Jing; Gan, Chunyun; Lin, Binbin; Yang, Jinhong

    2015-01-01

    The structure of a plasma sheath in the presence of a dust grain size distribution (DGSD) is investigated in the multi-fluid framework. It is shown that the effect of dust grains with different sizes on the sheath structure is a collective behavior. The spatial distributions of the electric potential, the electron and ion densities and velocities, and the dust grain surface potential are strongly affected by the DGSD. The dynamics of dust grains with different sizes in the sheath depend not only on the DGSD but also on their radius. By comparing sheath structures, it is found that under the same expected value of the DGSD, the sheath length is longer for a lognormal distribution than for a uniform distribution. For normal and lognormal distributions, the sheath length is almost equal when the variance of the DGSD is small, and the difference in sheath length then increases gradually with increasing variance.

  5. The effects of particle size distribution and induced unpinning during grain growth

    International Nuclear Information System (INIS)

    Thompson, G.S.; Rickman, J.M.; Harmer, M.P.; Holm, E.A.

    1996-01-01

    The effect of a second-phase particle size distribution on grain boundary pinning was studied using a Monte Carlo simulation technique. Simulations were run using a constant number density of both whisker and rhombohedral particles, and the effect of size distribution was studied by varying the standard deviation of the distribution around a constant mean particle size. The results of the present simulations indicate that, in accordance with the stereological assumption of the topological pinning model, changes in distribution width had no effect on the pinned grain size. The effect of induced unpinning of particles on microstructure was also studied. In contrast to predictions of the topological pinning model, a power law dependence of pinned grain size on particle size was observed at T=0.0. Based on this, a systematic deviation from the stereological predictions of the topological pinning model is observed. The results of simulations at higher temperatures indicate an increasing power law dependence of pinned grain size on particle size, with the slopes of the power law dependencies fitting an Arrhenius relation. The effect of induced unpinning of particles was also studied in order to obtain a correlation between particle/boundary concentration and equilibrium grain size. The results of simulations containing a constant number density of monosized rhombohedral particles suggest a strong power law correlation between the two parameters. copyright 1996 Materials Research Society

  6. Seismic gaps previous to certain great earthquakes occurring in North China

    Energy Technology Data Exchange (ETDEWEB)

    Wei, K H; Lin, C H; Chu, H C; Chao, Y H; Chao, H L; Hou, H F

    1978-07-01

    The epicentral distributions of small and moderate earthquakes preceding nine great earthquakes (M greater than or equal to 7.0) in North China are analyzed. It can be seen that most of these earthquakes are preceded by gaps in the regions surrounding their epicenters. The relations between the parameters of the seismic gaps, such as the lengths of their long and short axes, the areas of the gaps, etc., and the parameters of the corresponding earthquakes are discussed.

  7. Earthquake imprints on a lacustrine deltaic system: The Kürk Delta along the East Anatolian Fault (Turkey)

    KAUST Repository

    Hubert-Ferrari, Aurélia; El-Ouahabi, Meriam; Garcia-Moreno, David; Avsar, Ulas; Altınok, Sevgi; Schmidt, Sabine; Fagel, Nathalie; Çağatay, Namık

    2017-01-01

    Deltas contain sedimentary records that are not only indicative of water-level changes, but also particularly sensitive to earthquake shaking typically resulting in soft-sediment-deformation structures. The Kürk lacustrine delta lies at the south-western extremity of Lake Hazar in eastern Turkey and is adjacent to the seismogenic East Anatolian Fault, which has generated earthquakes of magnitude 7. This study re-evaluates water-level changes and earthquake shaking that have affected the Kürk Delta, combining geophysical data (seismic-reflection profiles and side-scan sonar), remote sensing images, historical data, onland outcrops and offshore coring. The history of water-level changes provides a temporal framework for the depositional record. In addition to the common soft-sediment deformation documented previously, onland outcrops reveal a record of deformation (fracturing, tilt and clastic dykes) linked to large earthquake-induced liquefactions and lateral spreading. The recurrent liquefaction structures can be used to obtain a palaeoseismological record. Five event horizons were identified that could be linked to historical earthquakes occurring in the last 1000 years along the East Anatolian Fault. Sedimentary cores sampling the most recent subaqueous sedimentation revealed the occurrence of another type of earthquake indicator. Based on radionuclide dating (Cs and Pb), two major sedimentary events were attributed to the AD 1874 to 1875 East Anatolian Fault earthquake sequence. Their sedimentological characteristics were determined by X-ray imagery, X-ray diffraction, loss-on-ignition, grain-size distribution and geophysical measurements. The events are interpreted to be hyperpycnal deposits linked to post-seismic sediment reworking of earthquake-triggered landslides.

  9. Dependence of size and size distribution on reactivity of aluminum nanoparticles in reactions with oxygen and MoO3

    International Nuclear Information System (INIS)

    Sun, Juan; Pantoya, Michelle L.; Simon, Sindee L.

    2006-01-01

    The oxidation reaction of aluminum nanoparticles with oxygen gas and the thermal behavior of a metastable intermolecular composite (MIC) composed of the aluminum nanoparticles and molybdenum trioxide are studied with differential scanning calorimetry (DSC) as a function of the size and size distribution of the aluminum particles. Both broad and narrow size distributions have been investigated with aluminum particle sizes ranging from 30 to 160 nm; comparisons are also made to the behavior of micrometer-size particles. Several parameters have been used to characterize the reactivity of aluminum nanoparticles, including the fraction of aluminum that reacts prior to aluminum melting, heat of reaction, onset and peak temperatures, and maximum reaction rates. The results indicate that the reactivity of aluminum nanoparticles is significantly higher than that of the micrometer-size samples, but depending on the measure of reactivity, it may also depend strongly on the size distribution. The isoconversional method was used to calculate the apparent activation energy, and the values obtained for both the Al/O2 and Al/MoO3 reactions are in the range of 200-300 kJ/mol.

  10. Future Developments for the Earthquake Early Warning System following the 2011 off the Pacific Coast of Tohoku Earthquake

    Science.gov (United States)

    Yamada, M.; Mori, J. J.

    2011-12-01

    The 2011 off the Pacific Coast of Tohoku Earthquake (Mw9.0) caused significant damage over a large area of northeastern Honshu. An earthquake early warning was issued to the public in the Tohoku region about 8 seconds after the first P-arrival, which is 31 seconds after the origin time. There was no 'blind zone', and warnings were received at all locations before S-wave arrivals, since the earthquake was fairly far offshore. Although the early warning message was properly reported in the Tohoku region, which was the most severely affected area, a message was not sent to the more distant Tokyo region because the intensity was underestimated. This underestimation occurred because the magnitude determined in the first few seconds was relatively small (Mj8.1) and a finite fault with a long length was not considered. Another significant issue is that warnings were sometimes not properly provided for aftershocks. Immediately following the earthquake, the waveforms of some large aftershocks were contaminated by long-period surface waves from the mainshock, which made it difficult to pick P-wave arrivals. Also, correctly distinguishing and locating later aftershocks was sometimes difficult when multiple events occurred within a short period of time. This mainshock begins with relatively small moment release for the first 10 s. Since the amplitude of the initial waveforms is small, most methods that use amplitudes and periods of the P-wave (e.g. Wu and Kanamori, 2005) cannot correctly determine the size of the earthquake in the first several seconds. The current JMA system uses the peak displacement amplitude for the magnitude estimation, and the magnitude saturated at about M8 one minute after the first P-wave arrival. Magnitudes of smaller earthquakes can be correctly identified from the first few seconds of P- or S-wave arrivals, but this M9 event cannot be characterized in such a short time. The only way to correctly characterize the size of the Tohoku

  11. Determination of size distribution using neural networks

    NARCIS (Netherlands)

    Stevens, JH; Nijhuis, JAG; Spaanenburg, L; Mohammadian, M

    1999-01-01

    In this paper we present a novel approach to the estimation of size distributions of grains in water from images. External conditions such as the concentrations of grains in water cannot be controlled. This poses problems for local image analysis which tries to identify and measure single grains.

  12. Smartphone-Based Earthquake and Tsunami Early Warning in Chile

    Science.gov (United States)

    Brooks, B. A.; Baez, J. C.; Ericksen, T.; Barrientos, S. E.; Minson, S. E.; Duncan, C.; Guillemot, C.; Smith, D.; Boese, M.; Cochran, E. S.; Murray, J. R.; Langbein, J. O.; Glennie, C. L.; Dueitt, J.; Parra, H.

    2016-12-01

    Many locations around the world face high seismic hazard but do not have the resources required to establish traditional earthquake and tsunami warning systems (E/TEW) that utilize scientific-grade seismological sensors. MEMS accelerometers and GPS chips embedded in, or added inexpensively to, smartphones are sensitive enough to provide robust E/TEW if they are deployed in sufficient numbers. We report on a pilot project in Chile, one of the most productive earthquake regions worldwide. There, magnitude 7.5+ earthquakes occur roughly every 1.5 years, and larger tsunamigenic events pose significant local and trans-Pacific hazard. The smartphone-based network described here is being deployed in parallel to the build-out of a scientific-grade network for E/TEW. Our sensor package comprises a smartphone with internal MEMS and an external GPS chipset that provides satellite-based augmented positioning and phase-smoothing. Each station is independent of local infrastructure; they are solar-powered and rely on cellular SIM cards for communications. An Android app performs initial onboard processing and transmits both accelerometer and GPS data to a server employing the FinDer-BEFORES algorithm to detect earthquakes, producing an acceleration-based line source model for smaller magnitude earthquakes or a joint seismic-geodetic finite-fault distributed slip model for sufficiently large magnitude earthquakes. Either source model provides accurate ground shaking forecasts, while distributed slip models for larger offshore earthquakes can be used to infer seafloor deformation for local tsunami warning. The network will comprise 50 stations by Sept. 2016 and 100 stations by Dec. 2016. Since Nov. 2015, batch processing has detected, located, and estimated the magnitude for Mw>5 earthquakes. Operational since June 2016, we have successfully detected two earthquakes > M5 (M5.5, M5.1) that occurred within 100 km of our network while producing zero false alarms.

  13. Field size and dose distribution of electron beam

    International Nuclear Information System (INIS)

    Kang, Wee Saing

    1980-01-01

    The author considers some relations between field size and the dose distribution of electron beams. The doses of electron beams are measured either by an ion chamber with an electrometer or by film dosimetry. Several relations are analyzed qualitatively: the energy of incident electron beams and the depth of maximum dose, field size and the depth of maximum dose, field size and scatter factor, electron energy and scatter factor, collimator shape and scatter factor, electron energy and surface dose, field size and surface dose, field size and central-axis depth dose, and field size and practical range. The results are that the field size of an electron beam influences the depth of maximum dose, scatter factor, surface dose and central-axis depth dose; that the scatter factor depends on the field size and energy of the electron beam and on the shape of the collimator; and that the depth of maximum dose and the surface dose depend on the energy of the electron beam, but the practical range of the electron beam is independent of field size.

  14. Earthquake scaling laws for rupture geometry and slip heterogeneity

    Science.gov (United States)

    Thingbaijam, Kiran K. S.; Mai, P. Martin; Goda, Katsuichiro

    2016-04-01

    We analyze an extensive compilation of finite-fault rupture models to investigate earthquake scaling of source geometry and slip heterogeneity to derive new relationships for seismic and tsunami hazard assessment. Our dataset comprises 158 earthquakes with a total of 316 rupture models selected from the SRCMOD database (http://equake-rc.info/srcmod). We find that fault-length does not saturate with earthquake magnitude, while fault-width reveals inhibited growth due to the finite seismogenic thickness. For strike-slip earthquakes, fault-length grows more rapidly with increasing magnitude compared to events of other faulting types. Interestingly, our derived relationship falls between the L-model and W-model end-members. In contrast, both reverse and normal dip-slip events are more consistent with self-similar scaling of fault-length. However, fault-width scaling relationships for large strike-slip and normal dip-slip events, occurring on steeply dipping faults (δ~90° for strike-slip faults, and δ~60° for normal faults), deviate from self-similarity. Although reverse dip-slip events in general show self-similar scaling, the restricted growth of down-dip fault extent (with an upper limit of ~200 km) can be seen for mega-thrust subduction events (M~9.0). Despite this fact, for a given earthquake magnitude, subduction reverse dip-slip events occupy a relatively larger rupture area compared to shallow crustal events. In addition, we characterize slip heterogeneity in terms of its probability distribution and spatial correlation structure to develop a complete stochastic random-field characterization of earthquake slip. We find that a truncated exponential law best describes the probability distribution of slip, with observable scale parameters determined by the average and maximum slip. Applying a Box-Cox transformation to slip distributions (to create quasi-normally distributed data) supports a cube-root transformation, which also implies distinctive non-Gaussian slip
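Sampling slip values from a truncated exponential law is straightforward with inverse-transform sampling. The parameterization below, a scale plus a hard maximum, is an illustrative reading of the paper's average- and maximum-slip controls, not the authors' fitted parameterization.

```python
import numpy as np

def sample_truncated_exponential(n, scale, s_max, rng=None):
    """Draw n slip values from an exponential density truncated at s_max,
    via inverse-transform sampling.  The truncated CDF is
    F(s) = (1 - exp(-s/scale)) / (1 - exp(-s_max/scale)), 0 <= s <= s_max,
    so s = -scale * log(1 - u * (1 - exp(-s_max/scale))) for u ~ U(0,1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return -scale * np.log(1.0 - u * (1.0 - np.exp(-s_max / scale)))

slips = sample_truncated_exponential(10000, scale=1.0, s_max=4.0,
                                     rng=np.random.default_rng(42))
```

Every sample respects the maximum-slip cutoff by construction, while the sample mean stays close to (slightly below) the untruncated scale — the two observable controls the abstract mentions.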

  15. Distribution Functions of Sizes and Fluxes Determined from Supra-Arcade Downflows

    Science.gov (United States)

    McKenzie, D.; Savage, S.

    2011-01-01

    The frequency distributions of sizes and fluxes of supra-arcade downflows (SADs) provide information about the process of their creation. For example, a fractal creation process may be expected to yield a power-law distribution of sizes and/or fluxes. We examine 120 cross-sectional areas and magnetic flux estimates found by Savage & McKenzie for SADs, and find that (1) the areas are consistent with a log-normal distribution and (2) the fluxes are consistent with both a log-normal and an exponential distribution. Neither set of measurements is compatible with a power-law distribution nor a normal distribution. As a demonstration of the applicability of these findings to improved understanding of reconnection, we consider a simple SAD growth scenario with minimal assumptions, capable of producing a log-normal distribution.
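
The log-normal vs. power-law comparison can be illustrated with a maximum-likelihood sketch on synthetic data; the sample below merely stands in for the 120 SAD areas, and all parameter values are assumptions:

```python
import math
import random
import statistics

def loglik_lognormal(xs):
    """Maximum-likelihood log-normal fit; returns (mu, sigma, log-likelihood)."""
    logs = [math.log(x) for x in xs]
    mu = statistics.fmean(logs)
    sigma = statistics.pstdev(logs)
    ll = sum(-math.log(x * sigma * math.sqrt(2 * math.pi))
             - (math.log(x) - mu) ** 2 / (2 * sigma ** 2) for x in xs)
    return mu, sigma, ll

def loglik_powerlaw(xs):
    """MLE power-law fit p(x) ~ x**-alpha for x >= xmin (Hill estimator)."""
    xmin = min(xs)
    n = len(xs)
    alpha = 1 + n / sum(math.log(x / xmin) for x in xs)
    ll = sum(math.log((alpha - 1) / xmin) - alpha * math.log(x / xmin) for x in xs)
    return alpha, ll

random.seed(42)
# Synthetic "SAD areas": log-normally distributed, like the measured cross-sections.
areas = [random.lognormvariate(2.0, 0.6) for _ in range(120)]
_, _, ll_ln = loglik_lognormal(areas)
_, ll_pl = loglik_powerlaw(areas)
print(ll_ln > ll_pl)  # the log-normal model should win on log-normal data
```

A full analysis like the one in the record would also apply a goodness-of-fit test rather than comparing likelihoods alone, but the likelihood contrast already shows how the two hypotheses are discriminated.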

  16. Modelling the elements of country vulnerability to earthquake disasters.

    Science.gov (United States)

    Asef, M R

    2008-09-01

    Earthquakes have probably been the most deadly form of natural disaster in the past century. Diversity of earthquake specifications in terms of magnitude, intensity and frequency at the semicontinental scale has initiated various kinds of disasters at a regional scale. Additionally, the diverse characteristics of countries in terms of population size, disaster preparedness, economic strength and building construction development often cause an earthquake of a given character to have different impacts on different affected regions. This research focuses on appropriate criteria for identifying the severity of major earthquake disasters based on key observed symptoms. Accordingly, the article presents a methodology for the identification and relative quantification of the severity of earthquake disasters. This has led to an earthquake disaster vulnerability model at the country scale. Data analysis based on this model suggested a quantitative, comparative and meaningful interpretation of the vulnerability of the countries concerned, and successfully explained which countries are more vulnerable to major disasters.

  17. Body size diversity and frequency distributions of Neotropical cichlid fishes (Cichliformes: Cichlidae: Cichlinae).

    Directory of Open Access Journals (Sweden)

    Sarah E Steele

    Full Text Available Body size is an important correlate of life history, ecology and distribution of species. Despite this, very little is known about body size evolution in fishes, particularly freshwater fishes of the Neotropics where species and body size diversity are relatively high. Phylogenetic history and body size data were used to explore body size frequency distributions in Neotropical cichlids, a broadly distributed and ecologically diverse group of fishes that is highly representative of body size diversity in Neotropical freshwater fishes. We test for divergence, phylogenetic autocorrelation and among-clade partitioning of body size space. Neotropical cichlids show low phylogenetic autocorrelation and divergence within and among taxonomic levels. Three distinct regions of body size space were identified from body size frequency distributions at various taxonomic levels corresponding to subclades of the most diverse tribe, Geophagini. These regions suggest that lineages may be evolving towards particular size optima that may be tied to specific ecological roles. The diversification of Geophagini appears to constrain the evolution of body size among other Neotropical cichlid lineages; non-Geophagini clades show lower species-richness in body size regions shared with Geophagini. Neotropical cichlid genera show less divergence and extreme body size than expected within and among tribes. Body size divergence among species may instead be present or linked to ecology at the community assembly scale.

  18. A suite of exercises for verifying dynamic earthquake rupture codes

    Science.gov (United States)

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  19. Relocation and Seismogenic Structure of the 1998 Zhangbei-Shangyi Earthquake Sequence

    Science.gov (United States)

    Yang, Z.

    2002-05-01

    An earthquake of magnitude 6.2 occurred in the Zhangbei-Shangyi region of northern China on January 10, 1998. The earthquake was about 180 km northwest of Beijing and was felt in the city. It is the largest event in northern China since the great Tangshan earthquake of magnitude 7.8 in 1976. Historically, seismicity in the Zhangbei-Shangyi region was very low. Before the earthquake, no active fault constituting a seismogenic structure capable of generating moderate earthquakes of this kind had been found in the epicentral area, nor was surface faulting observed after the earthquake. Field geological investigation after the earthquake found two conjugate surface features trending NNE-NE and NNW-WNW. Because of the geometry of the seismic network, the hypocentral distribution of the Zhangbei-Shangyi earthquake sequence given by routine location exhibited no preferred orientation. In this study, the Zhangbei-Shangyi earthquake and its aftershocks with magnitude equal to or larger than 3.0 were relocated using both the master-event relative relocation algorithm and the double-difference earthquake relocation algorithm (Waldhauser, 2000). Both algorithms gave consistent results within accuracy limits. The epicenter of the main shock was 41.15°N, 114.46°E, about 4 km from the macro-epicenter of this event, and its focal depth was 15 km. The epicenters of aftershocks of this earthquake sequence are distributed in and near a nearly vertical plane striking N20°E. The relocation results clearly indicate that the seismogenic structure of this event is a N20°E-striking fault with right-lateral reverse slip, and that the occurrence of the Zhangbei-Shangyi earthquake is tectonically driven by horizontal, ENE-oriented compressive stress, the same as the stress field in northern China.

  20. Lognormal Behavior of the Size Distributions of Animation Characters

    Science.gov (United States)

    Yamamoto, Ken

    This study investigates the statistical properties of character sizes in animation, superhero series, and video games. Using online databases of Pokémon (video game) and Power Rangers (superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) both follow a lognormal distribution. As a theoretical mechanism for this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.
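
The proposed mechanism (normally distributed "perceived" sizes pushed through the logarithmic Weber-Fechner law) can be demonstrated numerically; all parameter values below are illustrative assumptions:

```python
import math
import random
import statistics

random.seed(7)
# Weber-Fechner: perceived magnitude s scales with the log of physical size x.
# If designers draw s from a normal distribution, then x = exp(s) is log-normal.
perceived = [random.gauss(4.0, 0.5) for _ in range(5000)]   # normal "impressions"
weights = [math.exp(s) for s in perceived]                  # resulting sizes

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / sd) ** 3 for x in xs])

log_skew = skewness([math.log(w) for w in weights])
raw_skew = skewness(weights)
print(abs(log_skew) < 0.15)  # ~0: symmetric on a log scale, as a log-normal should be
print(raw_skew > 1)          # raw sizes are strongly right-skewed
```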

  1. Distribution of very low frequency earthquakes in the Nankai accretionary prism influenced by a subducting-ridge

    Science.gov (United States)

    Toh, Akiko; Obana, Koichiro; Araki, Eiichiro

    2018-01-01

    We investigated the distribution of very low frequency earthquakes (VLFEs) that occurred in the shallow accretionary prism of the eastern Nankai trough during one week of VLFE activity in October 2015. They were recorded very close to their sources by an array of broadband ocean bottom seismometers (BBOBSs) installed in the Dense Oceanfloor Network system for Earthquakes and Tsunamis 1 (DONET1). The locations of VLFEs estimated using a conventional envelope correlation method appeared widely scattered, likely due to effects of 3D structure near the seafloor and/or sources that the method could not handle properly. We therefore assessed their relative locations by introducing a hierarchical clustering analysis based on the patterns of relative envelope peak times measured across the array for each VLFE. The results suggest that, on the northeastern side of the network, all the detected VLFEs occurred 30-40 km landward of the trench axis, near the intersection of a splay fault with the seafloor; some likely occurred along the splay fault. On the southwestern side, many VLFEs occurred closer to the trench axis, likely along the plate boundary, and VLFE activity on the shallow splay fault appears less intense than on the northeastern side. Although this could be a snapshot of activity that becomes more uniform over the longer term, the obtained distribution can be reasonably explained by changes in shear stress and pore pressure caused by a ridge subducting below the northeastern side of DONET1. The change in stress state along the strike of the plate boundary, inferred from the obtained VLFE distribution, is an important indicator of the strain release pattern and of localised variations in the tsunamigenic potential of this region.

  2. Building predictive models of soil particle-size distribution

    Directory of Open Access Journals (Sweden)

    Alessandro Samuel-Rosa

    2013-04-01

    Full Text Available Is it possible to build predictive models (PMs) of soil particle-size distribution (psd) in a region with complex geology and a young and unstable land-surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index). The PMs explained more than half of the data variance. This performance is similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70 %. The largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data are made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the particle-size distribution (psd) of soils in regions of complex geology.
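
A multiple linear regression PM of the kind described can be sketched with synthetic terrain attributes; the predictors, coefficients, and noise level below are hypothetical, chosen only so the model explains a little over half the variance, as in the study:

```python
import random
import statistics

def fit_linear(X, y):
    """Ordinary least squares via the normal equations (tiny Gauss-Jordan solve)."""
    rows = [[1.0] + list(x) for x in X]            # prepend an intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                             # eliminate column i everywhere else
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for j in range(k):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b                                       # [intercept, beta_elev, beta_twi]

random.seed(1)
# Hypothetical terrain attributes: elevation and topographic wetness index (TWI),
# with a clay fraction depending linearly on both, plus noise.
elev = [random.uniform(100, 500) for _ in range(339)]
twi = [random.uniform(2, 12) for _ in range(339)]
clay = [10 + 0.05 * e + 1.5 * w + random.gauss(0, 5) for e, w in zip(elev, twi)]

beta = fit_linear(list(zip(elev, twi)), clay)
pred = [beta[0] + beta[1] * e + beta[2] * w for e, w in zip(elev, twi)]
ss_res = sum((c - p) ** 2 for c, p in zip(clay, pred))
ss_tot = sum((c - statistics.fmean(clay)) ** 2 for c in clay)
r2 = 1 - ss_res / ss_tot
print(r2 > 0.5)  # the PM explains more than half of the variance
```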

  3. Forecasting grain size distribution of coal cut by a shearer loader

    Energy Technology Data Exchange (ETDEWEB)

    Sikora, W; Chodura, J; Siwiec, J

    1983-02-01

    Analyzed are effects of shearer loader design on grain size distribution of coal, particularly on proportion of the finest size group and proportion of largest coal grains. The method developed by the IGD im. A.A. Skochinski Institute in Moscow is used. Effects of cutting tool design and mechanical coal properties are analyzed. Of the evaluated factors, two are of decisive importance: thickness of the coal chip cut by a cutting tool and coefficient of coal disintegration which characterizes coal behavior during cutting. Grain size distribution is also influenced by cutting tool geometry. Two elements of cutting tool design are of major importance: dimensions of the cutting edge and angle of attack. Effects of cutting tool design and coal mechanical properties on grain size distribution are shown in 12 diagrams. Using the forecasting method developed by the IGD im. A.A. Skochinski Institute in Moscow grain size distribution of coal cut by three shearer loaders is calculated: the KWB-3RDU with a drum 1600 mm in diameter, the KWB-6W with a drum 2500 mm in diameter, and a shearer loader being developed with a 1550 mm drum. The results of comparative evaluations are shown in two tables. 5 references.

  4. Update earthquake risk assessment in Cairo, Egypt

    Science.gov (United States)

    Badawy, Ahmed; Korrat, Ibrahim; El-Hadidy, Mahmoud; Gaber, Hanan

    2017-07-01

    The Cairo earthquake (12 October 1992; mb = 5.8) remains, 25 years on, one of the most painful events etched in Egyptians' memory. This is not due to the strength of the earthquake but to the losses and damage that accompanied it (561 dead, 10,000 injured, and 3000 families left homeless). The most pressing question today is: what if this earthquake were repeated? In this study, we simulate the ground-motion shaking of an earthquake of the same size as the 12 October 1992 event and the consequent socio-economic impacts in terms of losses and damage. Seismic hazard, earthquake catalogs, soil types, demographics, and building inventories were integrated into HAZUS-MH to produce a sound earthquake risk assessment for Cairo, including economic and social losses. Overall, the assessment clearly indicates that losses and damage could be two to three times greater in Cairo today than in the 1992 earthquake. The risk profile reveals that five districts (Al-Sahel, El Basateen, Dar El-Salam, Gharb, and Madinat Nasr sharq) are at high seismic risk, and three districts (Manshiyat Naser, El-Waily, and Wassat (center)) are at low seismic risk. Moreover, the building damage estimates indicate that Gharb is the most vulnerable district. The analysis shows that the Cairo urban area faces high risk: deteriorating buildings and infrastructure make the city particularly vulnerable to earthquakes. For instance, more than 90 % of the estimated building damage is concentrated within the most densely populated districts (El Basateen, Dar El-Salam, Gharb, and Madinat Nasr Gharb), and about 75 % of casualties are in the same districts. An earthquake risk assessment for Cairo thus represents a crucial application of the HAZUS earthquake loss estimation model for risk management.
Finally, for mitigation, risk reduction, and to improve the seismic performance of structures and assure life safety

  5. Large LOCA-earthquake combination probability assessment - Load combination program. Project 1 summary report

    Energy Technology Data Exchange (ETDEWEB)

    Lu, S; Streit, R D; Chou, C K

    1980-01-01

    This report summarizes work performed for the U.S. Nuclear Regulatory Commission (NRC) by the Load Combination Program at the Lawrence Livermore National Laboratory to establish a technical basis for the NRC to use in reassessing its requirement that earthquake and large loss-of-coolant accident (LOCA) loads be combined in the design of nuclear power plants. A systematic probabilistic approach is used to treat the random nature of earthquake and transient loading to estimate the probability of large LOCAs that are directly and indirectly induced by earthquakes. A large LOCA is defined in this report as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). Unit 1 of the Zion Nuclear Power Plant, a four-loop PWR-1, is used for this study. To estimate the probability of a large LOCA directly induced by earthquakes, only fatigue crack growth resulting from the combined effects of thermal, pressure, seismic, and other cyclic loads is considered. Fatigue crack growth is simulated with a deterministic fracture mechanics model that incorporates stochastic inputs of initial crack size distribution, material properties, stress histories, and leak detection probability. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete pipe rupture. A limited investigation involving engineering judgment of a double-ended guillotine break indirectly induced by an earthquake is also reported. (author)

  6. Large LOCA-earthquake combination probability assessment - Load combination program. Project 1 summary report

    International Nuclear Information System (INIS)

    Lu, S.; Streit, R.D.; Chou, C.K.

    1980-01-01

    This report summarizes work performed for the U.S. Nuclear Regulatory Commission (NRC) by the Load Combination Program at the Lawrence Livermore National Laboratory to establish a technical basis for the NRC to use in reassessing its requirement that earthquake and large loss-of-coolant accident (LOCA) loads be combined in the design of nuclear power plants. A systematic probabilistic approach is used to treat the random nature of earthquake and transient loading to estimate the probability of large LOCAs that are directly and indirectly induced by earthquakes. A large LOCA is defined in this report as a double-ended guillotine break of the primary reactor coolant loop piping (the hot leg, cold leg, and crossover) of a pressurized water reactor (PWR). Unit 1 of the Zion Nuclear Power Plant, a four-loop PWR-1, is used for this study. To estimate the probability of a large LOCA directly induced by earthquakes, only fatigue crack growth resulting from the combined effects of thermal, pressure, seismic, and other cyclic loads is considered. Fatigue crack growth is simulated with a deterministic fracture mechanics model that incorporates stochastic inputs of initial crack size distribution, material properties, stress histories, and leak detection probability. Results of the simulation indicate that the probability of a double-ended guillotine break, either with or without an earthquake, is very small (on the order of 10^-12). The probability of a leak was found to be several orders of magnitude greater than that of a complete pipe rupture. A limited investigation involving engineering judgment of a double-ended guillotine break indirectly induced by an earthquake is also reported. (author)
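
The report's Monte Carlo logic (random initial crack sizes grown deterministically, then classified as leaks or complete breaks) can be caricatured in a few lines. The thresholds, growth law, and input distribution below are toy assumptions with no units fidelity; the point is only the structure of the calculation and that leaks outnumber complete ruptures:

```python
import random

random.seed(11)
WALL = 60.0       # wall thickness (hypothetical units)
CRITICAL = 300.0  # through-wall crack length taken to cause a guillotine break

def grow(a0, cycles, c=1e-4, m=1.8):
    """Very simplified Paris-law-style growth: da/dN = c * a**m, Euler-stepped."""
    a = a0
    for _ in range(cycles):
        a += c * a ** m
        if a >= CRITICAL:      # no need to grow past the break threshold
            break
    return a

leaks = breaks = 0
trials = 20000
for _ in range(trials):
    a0 = random.expovariate(1 / 8.0)   # stochastic initial crack size, mean 8
    a = grow(a0, cycles=1000)
    if a >= WALL:
        leaks += 1                     # through-wall crack: a detectable leak
        if a >= CRITICAL:
            breaks += 1                # complete double-ended rupture
print(breaks < leaks)  # leaks are more likely than complete ruptures
```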

  7. What is the earthquake fracture energy?

    Science.gov (United States)

    Di Toro, G.; Nielsen, S. B.; Passelegue, F. X.; Spagnuolo, E.; Bistacchi, A.; Fondriest, M.; Murphy, S.; Aretusini, S.; Demurtas, M.

    2016-12-01

    The energy budget of an earthquake is one of the main open questions in earthquake physics. During seismic rupture propagation, the elastic strain energy stored in the rock volume bounding the fault is converted into (1) gravitational work (relative movement of the wall rocks bounding the fault), (2) on- and off-fault damage of the fault zone rocks (due to rupture propagation and frictional sliding), (3) frictional heating and, of course, (4) radiated seismic energy. The difficulty in determining the budget arises from the measurement of some parameters (e.g., the temperature increase in the slipping zone, which constrains the frictional heat), from the poorly constrained size of the energy sinks (e.g., how large is the rock volume involved in off-fault damage?) and from the continuous exchange of energy between different sinks (for instance, fragmentation and grain size reduction may result from both the passage of the rupture front and frictional heating). Field geology studies, microstructural investigations, experiments and modelling may yield some hints. Here we discuss (1) the discrepancies arising from comparing the fracture energy measured in experiments reproducing seismic slip with that estimated from seismic inversion for natural earthquakes and (2) the off-fault damage induced by the diffusion of frictional heat during simulated seismic slip in the laboratory. Our analysis suggests, for instance, that the so-called earthquake fracture energy (1) is mainly frictional heat for small slips and (2), with increasing slip, is controlled by the geometrical complexity and other plastic processes occurring in the damage zone. As a consequence, because faults are rapidly and efficiently lubricated upon fast slip initiation, the dominant dissipation mechanism in large earthquakes may not be friction but off-fault damage due to fault segmentation and stress concentrations in a growing region around the fracture tip.

  8. Use of the truncated shifted Pareto distribution in assessing size distribution of oil and gas fields

    Science.gov (United States)

    Houghton, J.C.

    1988-01-01

    The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution, in which one parameter is added to shift the distribution right and left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment. ?? 1988 International Association for Mathematical Geology.
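
A minimal sketch of the TSP distribution, assuming one common parameterization in which the untruncated survival function is (s/(x+s))^k, shifted so the support starts at 0 and right-truncated at xmax (the parameter values below are arbitrary for illustration):

```python
import random

def tsp_cdf(x, k, s, xmax):
    """CDF of a truncated shifted Pareto on [0, xmax]: the shifted-Pareto CDF
    1 - (s/(x+s))**k, renormalized after right-truncation at xmax."""
    f = lambda t: 1.0 - (s / (t + s)) ** k
    return f(min(x, xmax)) / f(xmax)

def tsp_sample(k, s, xmax, rng):
    """Inverse-CDF sampling: draw u uniform on [0, F(xmax)) and invert F."""
    u = rng.random() * (1.0 - (s / (xmax + s)) ** k)
    return s * ((1.0 - u) ** (-1.0 / k) - 1.0)

rng = random.Random(0)
# Simulated "field sizes": heavy right tail, hard upper truncation.
fields = [tsp_sample(1.2, 5.0, 500.0, rng) for _ in range(10000)]
print(max(fields) <= 500.0)  # truncation is respected exactly
print(abs(tsp_cdf(500.0, 1.2, 5.0, 500.0) - 1.0) < 1e-12)
```

The closed-form CDF and its exact inverse are precisely the analytic convenience the abstract credits the TSP distribution with, compared to the lognormal.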

  9. Retrieval of size distribution for urban aerosols using multispectral optical data

    International Nuclear Information System (INIS)

    Kocifaj, M; Horvath, H

    2005-01-01

    We address the retrieval of aerosol size distributions from multispectral extinction data collected in a highly industrialized urban region. The role of particle morphology is a particular focus of this work. Many retrieval algorithms are still based on simple Lorenz-Mie theory, applicable to perfectly spherical and homogeneous particles, because that approach is fast and can handle the whole size distribution. However, solid-phase aerosols never have simple geometries; rather than spherical or spheroidal, they are quite irregular. It is shown that identification of the modal radius a_M of both the size distribution f(a) and the distribution of geometrical cross-section s(a) of aerosol particles is not significantly influenced by particle morphology when the aspect ratio is smaller than 2 and the particles are randomly oriented in the atmospheric environment. On the other hand, the amount of medium-sized particles (with radius larger than the modal radius) can be underestimated if a distribution of non-spherical grains is substituted by a system of volume-equivalent spheres. The retrieved volume content of fine aerosols (as characterized by PM2.5 and PM1.0) can potentially be affected by an inappropriate assumption about particle shape

  10. Particle size distribution of selected electronic nicotine delivery system products.

    Science.gov (United States)

    Oldham, Michael J; Zhang, Jingjie; Rusyniak, Mark J; Kane, David B; Gardner, William P

    2018-03-01

    Dosimetry models can be used to predict the dose of inhaled material, but they require several parameters including particle size distribution. The reported particle size distributions for aerosols from electronic nicotine delivery system (ENDS) products vary widely and don't always identify a specific product. A low-flow cascade impactor was used to determine the particle size distribution [mass median aerodynamic diameter (MMAD); geometric standard deviation (GSD)] from 20 different cartridge based ENDS products. To assess losses and vapor phase amount, collection efficiency of the system was measured by comparing the collected mass in the impactor to the difference in ENDS product mass. The levels of nicotine, glycerin, propylene glycol, water, and menthol in the formulations of each product were also measured. Regardless of the ENDS product formulation, the MMAD of all tested products was similar and ranged from 0.9 to 1.2 μm with a GSD ranging from 1.7 to 2.2. There was no consistent pattern of change in the MMAD and GSD as a function of number of puffs (cartridge life). The collection efficiency indicated that 9%-26% of the generated mass was deposited in the collection system or was in the vapor phase. The particle size distribution data are suitable for use in aerosol dosimetry programs. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
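
The MMAD and GSD reported above can be extracted from impactor stage data by log-linear interpolation of the cumulative mass curve; the stage table below is hypothetical, and treating each cut-point diameter as a simple size bin is a deliberate simplification of real impactor stage collection:

```python
import math

# Hypothetical impactor stages: (cut-point aerodynamic diameter in um, collected mass in mg)
stages = [(0.25, 2.0), (0.5, 8.0), (1.0, 25.0), (2.0, 40.0), (4.0, 20.0), (8.0, 5.0)]

def diameter_at(frac, stages):
    """Diameter at a given cumulative mass fraction, interpolated log-linearly."""
    total = sum(m for _, m in stages)
    cum, pts = 0.0, []
    for d, m in sorted(stages):
        cum += m
        pts.append((math.log(d), cum / total))
    for (l0, f0), (l1, f1) in zip(pts, pts[1:]):
        if f0 <= frac <= f1:
            return math.exp(l0 + (l1 - l0) * (frac - f0) / (f1 - f0))
    raise ValueError("fraction outside tabulated range")

mmad = diameter_at(0.50, stages)                                    # 50th percentile
gsd = math.sqrt(diameter_at(0.84, stages) / diameter_at(0.16, stages))  # sqrt(d84/d16)
print(round(mmad, 2), round(gsd, 2))
```

For a perfectly log-normal aerosol, sqrt(d84/d16) equals the geometric standard deviation, which is why that ratio is the conventional GSD estimate.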

  11. Revisiting the November 27, 1945 Makran (Mw=8.2) interplate earthquake

    Science.gov (United States)

    Zarifi, Z.; Raeesi, M.

    2012-04-01

    Makran Subduction Zone (MSZ) in southern Iran and southwestern Pakistan is a zone of convergence where the remnant oceanic crust of the Arabian plate subducts beneath the Eurasian plate at a rate of less than 30 mm/yr. The November 27, 1945 earthquake (Mw=8.2) in the eastern section of Makran was followed by a tsunami that reached 15 meters at some points. More than 4000 deaths and widespread devastation along the coastal areas of Pakistan, Iran, Oman and India are reported for this earthquake. We have collected old seismograms of the 1945 earthquake and of its largest following earthquake (August 5, 1947, Mw=7.3) from a number of stations around the globe. Using ISS data, we relocated these two events, and we used the teleseismic body-waveform inversion code of Kikuchi and Kanamori to determine the slip distribution of these two earthquakes for the first time. The results show that the extent of rupture of the 1945 earthquake is larger than previously estimated in other studies. The slip distribution suggests two distinct sets of asperities with different behavior, in the west close to Pasni and in the east close to Ormara. The highest slip was obtained for an area between these two cities that shows geological evidence of rapid uplift. To associate this behavior with the structure of the slab interface, we studied the TPGA (Trench Parallel Free-air Gravity Anomaly) and TPBA (Trench Parallel Bouguer Anomaly) in the MSZ. The TPGA results do not show the expected correlation of asperities with areas of highly negative TPGA; the TPBA, however, does correlate with the observed slip distribution and the structure of the slab interface. Using topography and gravity profiles perpendicular to the trench and along the MSZ, we observe segmentation of the slab interface. This suggests that the whole interface is unlikely to release its energy in one single megathrust earthquake.
Current seismicity in MSZ, although sparse, can fairly

  12. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    Science.gov (United States)

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
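
The periodic-versus-Poisson distinction discussed above is often summarized by the coefficient of variation (COV) of recurrence intervals. A toy comparison with synthetic intervals (illustrative parameters, not the Wrightwood record itself):

```python
import random
import statistics

def cov(intervals):
    """Coefficient of variation: ~1 for Poisson recurrence, <1 for quasi-periodic,
    >1 for clustered."""
    return statistics.pstdev(intervals) / statistics.fmean(intervals)

random.seed(3)
# 28 intervals, mimicking a 29-event record with a ~100 yr mean interval.
poisson_like = [random.expovariate(1 / 100) for _ in range(28)]     # random recurrence
periodic_like = [max(1, random.gauss(100, 30)) for _ in range(28)]  # quasi-periodic

print(cov(periodic_like) < cov(poisson_like))  # the regular record has the lower COV
```

A renewal-model forecast of the kind the abstract mentions is essentially a statement that the fault's interval COV is well below 1, so hazard grows as the open interval lengthens.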

  13. 2017 Valparaíso earthquake sequence and the megathrust patchwork of central Chile

    NARCIS (Netherlands)

    Nealy, Jennifer L.; Herman, Matthew W.; Moore, Ginevra L.; Hayes, Gavin P.; Benz, Harley M.; Bergman, Eric A.; Barrientos, Sergio E.

    2017-01-01

    In April 2017, a sequence of earthquakes offshore Valparaíso, Chile, raised concerns of a potential megathrust earthquake in the near future. The largest event in the 2017 sequence was a M6.9 on 24 April, seemingly colocated with the last great-sized earthquake in the region—a M8.0 in March 1985.

  14. Large early afterslip following the 1995/10/09 Mw 8 Jalisco, Mexico earthquake

    Science.gov (United States)

    Hjörleifsdóttir, Vala; Sánchez Reyes, Hugo Samuel; Ruiz-Angulo, Angel; Ramirez-Herrera, Maria Teresa; Castillo-Aja, Rosío; Krishna Singh, Shri; Ji, Chen

    2017-04-01

    The behaviour of slip close to the trench during earthquakes is not well understood: some earthquakes break only the near-trench area, most break only the deeper part of the fault interface, and a few break both simultaneously. Observations of multiple earthquakes breaking different down-dip segments of the same subduction segment are rare. The 1995 Mw 8 Jalisco earthquake seems to have broken the near-trench area, as evidenced by anomalously small accelerations for its size, the excitation of a tsunami, a small Ms relative to Mw, and a small ratio of radiated energy to moment (Pacheco et al 1997). However, slip models obtained using campaign GPS data indicate slip near shore (Melbourne et al 1997, Hutton et al 2001). We invert teleseismic P- and S-waves, Rayleigh and Love waves, as well as the static offsets measured by campaign GPS, to obtain the slip distribution on the fault as a function of time during the earthquake. We confirm that slip models obtained using only seismic data are most consistent with slip near the trench, whereas those obtained using only GPS data are consistent with slip closer to the coast. We find remarkable similarity with models of other researchers (Hutton et al 2001, Mendoza et al 1999) using the same datasets, even though the slip distributions from each dataset are almost complementary. To resolve this inconsistency we jointly invert the datasets; however, we find that the joint inversions do not produce adequate fits to both the seismic and GPS data. Furthermore, we model tsunami observations on the coast to further constrain the plausible slip models. Assuming that the discrepancy stems from slip that occurred within the time window between the campaign GPS measurements, but not during the earthquake, we model the residual displacements by very localised slip on the interface down dip from the coseismic slip. Aftershocks (Pacheco et al 1997) align on mostly between the non

  15. The earthquake lights (EQL) of the 6 April 2009 Aquila earthquake, in Central Italy

    Directory of Open Access Journals (Sweden)

    C. Fidani

    2010-05-01

    Full Text Available A seven-month collection of testimonials about the 6 April 2009 earthquake in Aquila, Abruzzo region, Italy, was compiled into a catalogue of non-seismic phenomena. Luminous phenomena were often reported starting about nine months before the strong shock and continued until about five months after it. A summary and list of the characteristics of these sightings were made according to 20th century classifications, and a comparison was made with the Galli outcomes. The sightings were distributed over a large area around the city of Aquila, with a major extension to the north, up to 50 km. Various earthquake lights were correlated with several landscape characteristics and with the source and dynamics of the earthquake. Some preliminary considerations on the locations of the sightings suggest a correlation between electrical discharges and asperities, while flames were mostly seen along the Aterno Valley.

  16. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    Science.gov (United States)

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller-magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks, and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models.
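    The before/after linear density comparison can be sketched numerically. The snippet below (all data synthetic, for illustration only) bins epicentral distances into a normalized linear density of the kind compared in the abstract:

```python
import numpy as np

def linear_density(distances, bins):
    """Normalized linear density of epicentral distances: the histogram
    divided by total count and bin width, so it integrates to one."""
    counts, edges = np.histogram(distances, bins=bins)
    density = counts / (counts.sum() * np.diff(edges))
    return density, edges

# Synthetic foreshock/aftershock distances (km) drawn from the same
# spatial kernel around a mainshock; similar density shapes would
# reflect the symmetry reported in the abstract.
rng = np.random.default_rng(0)
fore = np.abs(rng.normal(0.0, 5.0, 300))
aft = np.abs(rng.normal(0.0, 5.0, 3000))
bins = np.linspace(0.0, 20.0, 21)
d_fore, _ = linear_density(fore, bins)
d_aft, _ = linear_density(aft, bins)
print(np.round(d_fore[:3], 3), np.round(d_aft[:3], 3))
```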

  17. Earthquake accelerations estimation for construction calculating with different responsibility degrees

    International Nuclear Information System (INIS)

    Dolgaya, A.A.; Uzdin, A.M.; Indeykin, A.V.

    1993-01-01

    The object of investigation is the design amplitude of accelerograms used in evaluating the seismic stability of critical structures, first and foremost NPS. The amplitude level is established depending on the degree of responsibility of the structure and on the prevailing period of earthquake action at the construction site. The procedure is based on a statistical analysis of 310 earthquakes. At the first stage of statistical data processing, we established the correlation dependence of both the mathematical expectation and the root-mean-square deviation of the peak acceleration of an earthquake on its prevailing period. At the second stage, the most suitable law for the distribution of acceleration about the mean was chosen. To determine the parameters of this distribution, we specified the maximum conceivable acceleration, the exceedance of which is not allowed; the other parameters of the distribution are determined from the statistical data. At the third stage, the dependencies of design amplitude on the prevailing period of seismic effect were established for different structures and equipment. The obtained data make it possible to recommend levels for the safe-shutdown earthquake (SSE) and operating basis earthquake (OBE) for objects of various responsibility categories when designing NPS. (author)
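    The three-stage procedure can be sketched as a regression of peak acceleration on prevailing period plus a distributional margin (all data synthetic; the normal law and 95th-percentile factor are illustrative assumptions, not the distribution actually selected in the study):

```python
import numpy as np

# Hypothetical catalogue: prevailing period T (s) and peak ground
# acceleration a (g); both synthetic, sized like the 310-event dataset.
rng = np.random.default_rng(1)
T = rng.uniform(0.1, 1.0, 310)
a = 0.5 - 0.3 * T + rng.normal(0.0, 0.05, 310)

# Stage 1: regress the mean peak acceleration on the prevailing period
# and estimate the root-mean-square scatter about the trend.
slope, intercept = np.polyfit(T, a, 1)
sigma = (a - (slope * T + intercept)).std(ddof=2)

# Stages 2-3: a design amplitude at a chosen exceedance level, assuming
# a normal law about the mean (the study selects the law from the data).
def design_amplitude(period, k=1.645):       # k ~ 95th percentile
    return slope * period + intercept + k * sigma

print(round(float(design_amplitude(0.5)), 3))
```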

  18. Size distribution of radon daughter particles in uranium mine atmospheres

    International Nuclear Information System (INIS)

    George, A.C.; Hinchliffe, L.; Sladowski, R.

    1977-07-01

    An investigation of the particle size distribution and other properties of radon daughters in uranium mines was reported earlier, but only summaries of the data were presented. This report consists mainly of tables of detailed measurements that were omitted from the original article. The tabulated data include the size distributions, uncombined fractions, and ratios of radon daughters, as well as the working levels, radon concentrations, condensation nuclei concentrations, temperature, and relative humidity. The measurements were made at 27 locations in four large underground mines in New Mexico during typical mining operations. The size distributions of the radon daughters were log-normal. The activity median diameters ranged from 0.09 μm to 0.3 μm with a mean of 0.17 μm. Geometric standard deviations ranged from 1.3 to 4 with a mean of 2.7. Uncombined fractions, expressed in accordance with the ICRP definition, ranged from 0.004 to 0.16 with a mean of 0.04.
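    A log-normal activity size distribution is fully specified by the activity median diameter (AMD) and geometric standard deviation (GSD); a minimal sketch using the reported mean values (AMD = 0.17 μm, GSD = 2.7):

```python
import numpy as np

def lognormal_pdf(d, amd, gsd):
    """Log-normal activity size density with activity median diameter
    `amd` and geometric standard deviation `gsd` (same units as d)."""
    mu, sigma = np.log(amd), np.log(gsd)
    return np.exp(-((np.log(d) - mu) ** 2) / (2 * sigma ** 2)) / (
        d * sigma * np.sqrt(2 * np.pi))

d = np.logspace(-2, 1, 400)                   # diameters, micrometres
pdf = lognormal_pdf(d, amd=0.17, gsd=2.7)     # reported mean AMD and GSD
cdf = np.cumsum(pdf * np.gradient(d))         # numerical CDF
median = d[np.searchsorted(cdf, 0.5)]         # recovers the AMD
print(round(float(median), 2))                # close to 0.17
```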

  19. Measuring agglomerate size distribution and dependence of localized surface plasmon resonance absorbance on gold nanoparticle agglomerate size using analytical ultracentrifugation.

    Science.gov (United States)

    Zook, Justin M; Rastogi, Vinayak; Maccuspie, Robert I; Keene, Athena M; Fagan, Jeffrey

    2011-10-25

    Agglomeration of nanoparticles during measurements in relevant biological and environmental media is a frequent problem in nanomaterial property characterization. The primary problem is typically that any changes to the size distribution can dramatically affect the potential nanotoxicity or other size-determined properties, such as the absorbance signal in a biosensor measurement. Herein we demonstrate analytical ultracentrifugation (AUC) as a powerful method for measuring two critical characteristics of nanoparticle (NP) agglomerates in situ in biological media: the NP agglomerate size distribution, and the localized surface plasmon resonance (LSPR) absorbance spectrum of precise sizes of gold NP agglomerates. To characterize the size distribution, we present a theoretical framework for calculating the hydrodynamic diameter distribution of NP agglomerates from their sedimentation coefficient distribution. We measure sedimentation rates for monomers, dimers, and trimers, as well as for larger agglomerates with up to 600 NPs. The AUC size distributions were found generally to be broader than the size distributions estimated from dynamic light scattering and diffusion-limited colloidal aggregation theory, an alternative bulk measurement method that relies on several assumptions. In addition, the measured sedimentation coefficients can be used in nanotoxicity studies to predict how quickly the agglomerates sediment out of solution under normal gravitational forces, such as in the environment. We also calculate the absorbance spectra for monomer, dimer, trimer, and larger gold NP agglomerates up to 600 NPs, to enable a better understanding of LSPR biosensors. Finally, we validate a new method that uses these spectra to deconvolute the net absorbance spectrum of an unknown bulk sample and approximate the proportions of monomers, dimers, and trimers in a polydisperse sample of small agglomerates, so that every sample does not need to be measured by AUC. These results
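    The diameter-from-sedimentation conversion can be sketched for the simplest case of a solid Stokes sphere (the paper's framework generalizes this to agglomerates; the densities, viscosity, and 30 nm example below are assumptions, not values from the study):

```python
import math

def hydrodynamic_diameter(s, rho_p, rho_f=1000.0, eta=8.9e-4):
    """Invert the Stokes sedimentation relation
    s = d^2 (rho_p - rho_f) / (18 eta) for the equivalent solid-sphere
    diameter.  s in seconds, densities in kg/m^3, viscosity in Pa*s;
    returns metres."""
    return math.sqrt(18.0 * eta * s / (rho_p - rho_f))

# Round trip for a hypothetical 30 nm solid gold sphere in water.
d = 30e-9
s = d ** 2 * (19300.0 - 1000.0) / (18.0 * 8.9e-4)
print(s / 1e-13)                                  # in Svedberg units
print(hydrodynamic_diameter(s, 19300.0) * 1e9)    # recovers ~30 (nm)
```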

  20. Source analysis using regional empirical Green's functions: The 2008 Wells, Nevada, earthquake

    Science.gov (United States)

    Mendoza, C.; Hartzell, S.

    2009-01-01

    We invert three-component, regional broadband waveforms recorded for the 21 February 2008 Wells, Nevada, earthquake using a finite-fault methodology that prescribes subfault responses using eight MW ∼ 4 aftershocks as empirical Green's functions (EGFs) distributed within a 20-km by 21.6-km fault area. The inversion identifies a seismic moment of 6.2 × 10^24 dyne-cm (MW 5.8) with slip concentrated in a compact 6.5-km by 4-km region updip from the hypocenter. The peak slip within this localized area is 88 cm and the stress drop is 72 bars, which is higher than expected for Basin and Range normal faults in the western United States. The EGF approach yields excellent fits to the complex regional waveforms, accounting for strong variations in wave propagation and site effects. This suggests that the procedure is useful for studying moderate-size earthquakes with limited teleseismic or strong-motion data and for examining uncertainties in slip models obtained using theoretical Green's functions.
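    The reported moment and magnitude can be checked against the standard Hanks-Kanamori relation:

```python
import math

def moment_magnitude(m0_dyne_cm):
    """Hanks & Kanamori (1979) moment magnitude from seismic moment
    expressed in dyne-cm: Mw = (2/3) log10(M0) - 10.7."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

# Seismic moment reported for the Wells earthquake.
print(round(moment_magnitude(6.2e24), 1))  # -> 5.8
```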

  1. Comparison of Different Approach of Back Projection Method in Retrieving the Rupture Process of Large Earthquakes

    Science.gov (United States)

    Tan, F.; Wang, G.; Chen, C.; Ge, Z.

    2016-12-01

    Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng, 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve the resolution. Each method has its advantages and disadvantages and should be used appropriately in different cases; a thorough study to compare and test these methods is therefore needed. We have written a GUI program that integrates the three methods, so that the same data can be conveniently processed with each method and the results compared. We then use all the methods to process several earthquake datasets, including the 2008 Wenchuan Mw 7.9 earthquake and the 2011 Tohoku-Oki Mw 9.0 earthquake, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy, and stability among the methods. Quantitative and qualitative analyses are applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, and calculation window length. In general, conventional back projection can produce a good result in a very short time using fewer than 20 high-quality traces with a proper station distribution, but the swimming artifact can be significant; some measures, for instance combining global seismic data, can help mitigate it. MUSIC back projection needs relatively more data to obtain a better and more stable result, and thus considerably more time, since its runtime grows markedly faster with station number than that of conventional back projection. Compressive sensing deals more effectively with multiple sources in the same time window, but costs the most time because it repeatedly solves a matrix problem. The resolution of all the methods is complicated and depends on many factors; an important one is the grid size, which in turn influences
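    Conventional time-domain back projection is, at heart, delay-and-stack beamforming; a minimal sketch on a toy two-station, two-cell geometry (all numbers illustrative):

```python
import numpy as np

def back_project(waveforms, travel_times, dt):
    """Delay-and-stack beamforming: for each candidate source cell, shift
    each station trace back by its predicted travel time and stack; the
    stack power peaks where the delays align (i.e. at the radiator)."""
    n_cell, n_sta = travel_times.shape
    power = np.zeros(n_cell)
    for c in range(n_cell):
        stack = np.zeros(waveforms.shape[1])
        for s in range(n_sta):
            shift = int(round(travel_times[c, s] / dt))
            stack += np.roll(waveforms[s], -shift)
        power[c] = np.sum(stack ** 2)
    return power

# Toy geometry: 2 candidate cells x 2 stations; the pulse delays match
# the travel times predicted for cell 1.
dt = 0.1
tt = np.array([[0.0, 0.0],      # cell 0: simultaneous arrivals
               [0.5, 1.0]])     # cell 1: 0.5 s and 1.0 s travel times
wf = np.zeros((2, 100))
wf[0, 5] = 1.0                  # arrival at t = 0.5 s on station 0
wf[1, 10] = 1.0                 # arrival at t = 1.0 s on station 1
p = back_project(wf, tt, dt)
print(p.argmax())               # -> 1: the stack coheres for cell 1
```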

  2. The key role of eyewitnesses in rapid earthquake impact assessment

    Science.gov (United States)

    Bossu, Rémy; Steed, Robert; Mazet-Roux, Gilles; Roussel, Frédéric; Etivant, Caroline

    2014-05-01

    Uncertainties in rapid earthquake impact models are intrinsically large, even when excluding potential indirect losses (fires, landslides, tsunami…). The reason is that they are based on several factors which are themselves difficult to constrain, such as the geographical distribution of shaking intensity, the building type inventory, and vulnerability functions. The difficulties can be illustrated by two boundary cases. For moderate (around M6) earthquakes, the size of the potential damage zone and the epicentral location uncertainty are of comparable dimension, about 10-15 km. When such an earthquake strikes close to an urban area, as in Athens in 1999 (M5.9), location uncertainties alone can lead to dramatically different impact scenarios. Furthermore, for moderate magnitudes the overall impact is often controlled by individual accidents, as in Molise, Italy, in 2002 (M5.7), in Bingol, Turkey, in 2003 (M6.4), and in Christchurch, New Zealand (M6.3), where respectively 23 of 30, 84 of 176, and 115 of 185 of the casualties perished in a single building failure. By contrast, for major earthquakes (M>7) the point-source approximation is no longer valid, and impact assessment requires knowing exactly where the seismic rupture took place and whether it was unilateral, bilateral, etc., and this information is not readily available directly after the earthquake's occurrence. In-situ observations of actual impact provided by eyewitnesses can dramatically reduce impact model uncertainties. We present the overall strategy developed at the EMSC, which comprises crowdsourcing and flashsourcing techniques, the development of citizen-operated seismic networks, and the use of social networks to engage with eyewitnesses within minutes of an earthquake's occurrence. For instance, testimonies are collected through online questionnaires available in 32 languages and automatically processed into maps of effects. Geo-located pictures are collected and then

  3. Pore size distribution effect on rarefied gas transport in porous media

    Science.gov (United States)

    Hori, Takuma; Yoshimoto, Yuta; Takagi, Shu; Kinefuchi, Ikuya

    2017-11-01

    Gas transport phenomena in porous media are known to strongly influence the performance of devices such as gas separation membranes and fuel cells. Knudsen diffusion is the dominant flow regime in these devices since they have nanoscale pores. Many experiments have shown that these porous media have complex structures and pore size distributions; thus, the diffusion coefficient in such media cannot be easily assessed. Previous studies have reported that a characteristic pore diameter of porous media can be defined in light of the pore size distribution; however, the tortuosity factor, which is necessary for evaluating the diffusion coefficient, remains unknown without gas transport measurements or simulations. The relation between pore size distributions and tortuosity factors is therefore required to obtain gas transport properties. We perform numerical simulations to establish the relation between them. Porous media are numerically constructed so as to satisfy given pore size distributions, and mean-square-displacement simulations are then performed to obtain the tortuosity factors of the constructed media. This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
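    The mean-square-displacement step can be sketched in free space, where the tortuosity is unity by construction; in the paper's workflow the same estimate is run inside the reconstructed porous medium and the tortuosity follows as τ = D_free / D_effective (walker parameters below are assumptions):

```python
import numpy as np

# Random walkers in free 3-D space: estimate D from MSD(t) = 6 D t.
# In the porous-medium case the same estimate gives D_effective and
# the tortuosity is tau = D_free / D_effective.
rng = np.random.default_rng(2)
n_walk, n_step, dt = 2000, 200, 1.0
steps = rng.normal(0.0, 1.0, (n_walk, n_step, 3))    # unit-variance steps
pos = np.cumsum(steps, axis=1)                       # trajectories
msd = np.mean(np.sum(pos ** 2, axis=2), axis=0)      # ensemble MSD
t = dt * np.arange(1, n_step + 1)
D_free = np.polyfit(t, msd, 1)[0] / 6.0              # slope / 6 in 3-D
print(round(float(D_free), 2))                       # close to 0.5
```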

  4. A New Perspective on Fault Geometry and Slip Distribution of the 2009 Dachaidan Mw 6.3 Earthquake from InSAR Observations.

    Science.gov (United States)

    Liu, Yang; Xu, Caijun; Wen, Yangmao; Fok, Hok Sum

    2015-07-10

    On 28 August 2009, the northern margin of the Qaidam basin in the Tibetan Plateau was ruptured by an Mw 6.3 earthquake. This study utilizes Envisat ASAR images from descending Track 319 and ascending Track 455 to capture the coseismic deformation from this event, showing that the earthquake fault rupture does not reach the Earth's surface. We then propose a four-segment fault model to investigate the coseismic deformation, determining the fault parameters and then inverting for the slip distribution. The preferred fault model shows that the rupture depths for all four fault planes mainly range from 2.0 km to 7.5 km, comparatively shallower than previous results of up to ~13 km, and that the slip distribution on the fault plane is complex, exhibiting three slip peaks with a maximum of 2.44 m at a depth between 4.1 km and 4.9 km. The inverted geodetic moment is 3.85 × 10^18 N·m (Mw 6.36). The 2009 event may have ruptured unilaterally from the northwest to the southeast, reaching its maximum at the central segment.

  5. The degree distribution of fixed act-size collaboration networks

    Indian Academy of Sciences (India)

    In this paper, we investigate a special evolving model of collaboration networks, where the act size is fixed. Based on the first-passage probability of Markov chain theory, this paper provides a rigorous proof for the existence of a limiting degree distribution of this model and proves that the degree distribution obeys the ...

  6. The Size Distribution of Stardust Injected into the ISM

    Science.gov (United States)

    Krueger, D.; Sedlmayr, E.

    1996-01-01

    A multi-component method for describing the evolution of the grain size distribution, taking into account size-dependent grain drift and growth rates, is applied to model dust-driven winds around cool C-stars. Grain drift introduces several modifications concerning dust growth: on the one hand, the residence time in the region of efficient growth is reduced; on the other hand, the growth efficiency is higher due to an increased collisional rate. For carbon grains the surface density of radical sites is increased, but there is a reduction in the sticking efficiency of the growth species for drift velocities larger than a few km/s. It is found that including drift results in a considerable distortion of the size distribution compared to the case of zero drift velocity. Generally, there are fewer, but larger, grains if drift is included.

  7. Analysis of the Earthquake Impact towards water-based fire extinguishing system

    Science.gov (United States)

    Lee, J.; Hur, M.; Lee, K.

    2015-09-01

    Separate performance requirements have recently been imposed on fire-extinguishing systems installed in buildings subjected to earthquakes: the extinguishing function must be maintained up to the point of building collapse. In particular, the piping of automatic sprinkler systems must remain watertight even after a massive earthquake. In this study, we experimentally investigated the impact of earthquakes on water-based fire-extinguishing piping installed in a building. A test structure with water-based fire-extinguishing piping was assembled with seismic construction applied step by step and subjected to shaking-table tests, and the earthquake response of the extinguishing piping was measured. The magnitude of the acceleration applied by the shaking table and the resulting displacements were measured and compared with the response data of the piping, to analyze the seismic reinforcement needed for extinguishing water piping. Seismic design categories (SDC) were defined for four groups of building structures designed according to the seismic criteria of KBC 2009, based on importance group and earthquake seismic intensity. In the event of a real earthquake, the analysis indicates that current fire-fighting facilities in buildings of seismic design categories A and B already provide the required seismic performance. For buildings in seismic design categories C and D, a higher level of seismic retrofit design is required to preserve the extinguishing function.

  8. Species distribution model transferability and model grain size - finer may not always be better.

    Science.gov (United States)

    Manzoor, Syed Amir; Griffiths, Geoffrey; Lukac, Martin

    2018-05-08

    Species distribution models have been used to predict the distribution of invasive species for conservation planning. Understanding the spatial transferability of niche predictions is critical to promote species-habitat conservation and to forecast areas vulnerable to invasion. The grain size of predictor variables is an important factor affecting the accuracy and transferability of species distribution models. The choice of grain size is often dependent on the type of predictor variables used, and the selection of predictors sometimes relies on data availability. This study employed the MAXENT species distribution model to investigate the effect of grain size on model transferability for an invasive plant species. We modelled the distribution of Rhododendron ponticum in Wales, U.K., and tested model performance and transferability at varying grain sizes (50 m, 300 m, and 1 km). MAXENT-based models are sensitive to grain size and variable selection. We found that over-reliance on the commonly used bioclimatic variables may lead to less accurate models, as it often compromises the finer grain size of biophysical variables, which may be more important determinants of species distribution at small spatial scales. Model accuracy is likely to increase with decreasing grain size. However, successful model transferability may require optimization of the model grain size.

  9. Causes of earthquake spatial distribution beneath the Izu-Bonin-Mariana Arc

    Science.gov (United States)

    Kong, Xiangchao; Li, Sanzhong; Wang, Yongming; Suo, Yanhui; Dai, Liming; Géli, Louis; Zhang, Yong; Guo, Lingli; Wang, Pengcheng

    2018-01-01

    Statistics on the occurrence frequency of earthquakes (1973-2015) at shallow, intermediate, and great depths along the Izu-Bonin-Mariana (IBM) Arc are presented, and the percent perturbation relative to the P-wave mean value (LLNL-G3Dv3) is adopted to show the deep structure. The correlation coefficient between the subduction rate and the frequency of shallow seismic events along the IBM is 0.605, indicating that the subduction rate is an important factor for shallow seismic events. The relationship between relief amplitudes of the seafloor and earthquake occurrence implies that some seamount chains riding on the Pacific seafloor may have an effect on intermediate-depth seismic events along the IBM. A probable hypothesis is that seamounts, or surrounding seafloor with a high degree of fracturing, may carry numerous hydrous minerals to depth and may result in a different thermal structure compared to seafloor where no seamounts are subducted. Fluids from the seamounts or surrounding seafloor are released to trigger earthquakes at intermediate depth. Deep events in the northern and southern Mariana arc are likely affected by a horizontal tear propagating parallel to the trench.

  10. The Geological Susceptibility of Induced Earthquakes in the Duvernay Play

    Science.gov (United States)

    Pawley, Steven; Schultz, Ryan; Playter, Tiffany; Corlett, Hilary; Shipman, Todd; Lyster, Steven; Hauck, Tyler

    2018-02-01

    Presently, consensus on the incorporation of induced earthquakes into seismic hazard has yet to be established; for example, the nonstationary, spatiotemporal nature of induced earthquakes is not well understood. Specific to the Western Canada Sedimentary Basin, geological bias in seismogenic activation potential has been suggested to control the spatial distribution of induced earthquakes regionally. In this paper, we train a machine learning algorithm to systematically evaluate tectonic, geomechanical, and hydrological proxies suspected to control induced seismicity. Feature importance suggests that proximity to basement, in situ stress, proximity to fossil reef margins, lithium concentration, and rate of natural seismicity are among the strongest model predictors. Our derived seismogenic potential map faithfully reproduces the current distribution of induced seismicity and is suggestive of other regions that may be prone to induced earthquakes. The refinement of induced seismicity geological susceptibility may become an important technique to identify significant underlying geological features and to address induced seismic hazard forecasting issues.
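    The feature-importance step can be illustrated with a model-agnostic permutation importance; the sketch below uses synthetic data and a linear stand-in model (the proxy names follow the abstract, but the values and the model are hypothetical, not the paper's trained algorithm):

```python
import numpy as np

def permutation_importance(model, X, y, rng):
    """Model-agnostic importance: the increase in mean-squared error
    when a single predictor column is randomly shuffled."""
    base = np.mean((model(X) - y) ** 2)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        imp[j] = np.mean((model(Xp) - y) ** 2) - base
    return imp

# Hypothetical proxies (names follow the abstract, values synthetic):
# column 0 ~ proximity to basement, 1 ~ in situ stress, 2 ~ lithium.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 500)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear stand-in model
imp = permutation_importance(lambda A: A @ beta, X, y, rng)
print(imp.argmax())                            # -> 0: strongest proxy
```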

  11. Correction of bubble size distributions from transmission electron microscopy observations

    International Nuclear Information System (INIS)

    Kirkegaard, P.; Eldrup, M.; Horsewell, A.; Skov Pedersen, J.

    1996-01-01

    Observations by transmission electron microscopy of a high density of gas bubbles in a metal matrix yield a distorted size distribution due to bubble overlap and bubble escape from the surface. A model is described that reconstructs 3-dimensional bubble size distributions from 2-dimensional projections, taking these effects into account. Mathematically, the reconstruction is an ill-posed inverse problem, which is solved by a regularization technique. Extensive Monte Carlo simulations support the validity of the model. (au) 1 tab., 32 ills., 32 refs
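    The regularized inversion can be sketched with a Tikhonov solver on a toy blurred projection (the operator, sizes, and noise level are illustrative assumptions, not the paper's kernel):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized solution of the ill-posed problem A x = b:
    argmin ||A x - b||^2 + lam ||x||^2, via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy forward model: a Gaussian blur maps the true size distribution to
# a noisy observation, loosely mimicking overlap/escape distortion.
rng = np.random.default_rng(4)
n = 50
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)  # blur operator
x_true = np.exp(-0.5 * ((i - 25) / 5.0) ** 2)              # true distribution
b = A @ x_true + rng.normal(0.0, 0.01, n)                  # noisy data

x_rec = tikhonov(A, b, lam=1e-2)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(round(float(err), 3))        # small relative reconstruction error
```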

  12. Low-latitude ionospheric disturbances associated with earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Depueva, A.; Rotanova, N. [Russian Academy of Sciences, Inst. of Terrestrial Magnetism, Ionosphere and Radio Wave Propagation, Moscow (Russian Federation)

    2001-04-01

    Topside electron density measured on board a satellite was analysed. Before the two earthquakes considered, both with epicentres at low and equatorial latitudes, a stable modification of the ionosphere was observed at and above the height of the F-layer peak. Electron density gradually decreased, and its spatial distribution resembled a funnel located either immediately over the epicentre or to one side of it. Electron density irregularities of 300-500 km size in the meridional direction also occurred alongside the aforesaid large-scale background depletions. For the detection of local structures of more than 1000 km extent, the method of natural orthogonal component expansion was applied; spectra of smaller-scale inhomogeneities were investigated by means of the Blackman-Tukey method. An interpretation of the observed experimental data is proposed.

  13. Global volcanic earthquake swarm database and preliminary analysis of volcanic earthquake swarm duration

    Directory of Open Access Journals (Sweden)

    S. R. McNutt

    1996-06-01

    Full Text Available Global data from 1979 to 1989 pertaining to volcanic earthquake swarms have been compiled into a custom-designed relational database. The database is composed of three sections: (1) general information on volcanoes, (2) earthquake swarm data (such as dates of swarm occurrence and durations), and (3) eruption information. The most abundant and reliable parameter, the duration of volcanic earthquake swarms, was chosen for preliminary analysis. The distribution of all swarm durations was found to have a geometric mean of 5.5 days. Precursory swarms were then separated from those not associated with eruptions. The geometric mean precursory swarm duration was 8 days, whereas the geometric mean duration of swarms not associated with eruptive activity was 3.5 days. Two groups of precursory swarms are apparent when duration is compared with the eruption repose time. Swarms with durations shorter than 4 months showed no clear relationship with the eruption repose time. However, the second group, lasting longer than 4 months, showed a significant positive correlation with the log10 of the eruption repose period. The two groups suggest that different suites of physical processes are involved in the generation of volcanic earthquake swarms.
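    The geometric mean used for the duration statistics is the exponential of the mean log-duration, the natural average for strongly right-skewed data; a minimal sketch (the durations below are synthetic, not the catalogued values):

```python
import math

def geometric_mean(durations):
    """Geometric mean: exp of the arithmetic mean of the logarithms."""
    return math.exp(sum(math.log(d) for d in durations) / len(durations))

# Illustrative swarm durations in days (synthetic geometric sequence).
precursory = [2, 4, 8, 16, 32]
print(round(geometric_mean(precursory), 6))  # -> 8.0 for this sequence
```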

  14. A statistical analysis of North East Atlantic (submicron) aerosol size distributions

    Directory of Open Access Journals (Sweden)

    M. Dall'Osto

    2011-12-01

    Full Text Available The Global Atmospheric Watch research station at Mace Head (Ireland) offers the possibility of sampling some of the cleanest air masses being imported into Europe as well as some of the most polluted being exported out of Europe. We present a statistical cluster analysis of the physical characteristics of aerosol size distributions in air ranging from the cleanest to the most polluted for the year 2008. Data coverage of 75% was achieved throughout the year. By applying the Hartigan-Wong k-means method, 12 clusters were identified as systematically occurring. These 12 clusters could be further combined into 4 categories of aerosol size distribution with similar characteristics, namely: a coastal nucleation category (occurring 21.3% of the time), an open ocean nucleation category (occurring 32.6% of the time), a background clean marine category (occurring 26.1% of the time), and an anthropogenic category (occurring 20% of the time). The coastal nucleation category is characterised by a clear and dominant nucleation mode at sizes less than 10 nm, while the open ocean nucleation category is characterised by a dominant Aitken mode between 15 nm and 50 nm. The background clean marine aerosol exhibited a clear bimodality in the sub-micron size distribution, although it should be noted that either the Aitken mode or the accumulation mode may dominate the number concentration. Peculiar background clean marine size distributions with coarser accumulation modes are, however, also observed during winter months. By contrast, the continentally-influenced size distributions are generally more monomodal (accumulation mode), albeit with traces of bimodality. The open ocean category occurs more often during May, June, and July, corresponding with the North East (NE) Atlantic high biological period. Combined with its relatively high frequency of occurrence (32.6%), this suggests that marine biota are an important source of new nanoscale aerosol particles in NE Atlantic air.
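    The clustering step can be sketched with a minimal Lloyd-style k-means (the study uses the Hartigan-Wong variant); here two synthetic two-channel "spectra" stand in for full size distributions:

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Minimal Lloyd-style k-means (the study uses the Hartigan-Wong
    variant).  Deterministic spread-out initialization for simplicity."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two synthetic "categories": nucleation-rich vs accumulation-rich
# spectra, with 2 channels standing in for a full size distribution.
rng = np.random.default_rng(5)
a = rng.normal([10.0, 1.0], 0.5, (50, 2))
b = rng.normal([1.0, 10.0], 0.5, (50, 2))
labels, _ = kmeans(np.vstack([a, b]), k=2)
print(labels[:50].max(), labels[50:].min())   # -> 0 1: a clean split
```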

  15. Lower crustal earthquakes in the North China Basin and implications for crustal rheology

    Science.gov (United States)

    Yuen, D. A.; Dong, Y.; Ni, S.; LI, Z.

    2017-12-01

    The North China Basin is a Mesozoic-Cenozoic continental rift basin on the eastern North China Craton. It is the central region of craton destruction and also a very seismically active area that has suffered severely from devastating earthquakes, such as the 1966 Xingtai M7.2, the 1967 Hejian M6.3, and the 1976 Tangshan M7.8 earthquakes. We found remarkable discrepancies in the depth distributions of these three earthquakes: the Xingtai and Tangshan earthquakes are both upper-crustal events occurring between 9 and 15 km depth, but the depth of the Hejian earthquake was reported as about 30-72 km, ranging from the lowermost crust to the upper mantle. To investigate the focal depths of earthquakes near the Hejian area, we developed a method to resolve the focal depths of local earthquakes occurring beneath sedimentary regions using P- and S-converted waves. With this method, we obtained well-resolved depths for 44 local events with magnitudes between M1.0 and M3.0 from 2008 to 2016 in the Hejian seismic zone, with a mean depth uncertainty of about 2 km. The depth distribution shows abundant earthquakes at depths around 20 km, with some events in the lower crust, but an absence of seismicity deeper than 25 km. In particular, we aimed at deducing constraints on the local crustal rheology from the depth-frequency distribution. We therefore compared the depth-frequency distribution with the crustal strength envelope and found a good fit between the depth profile in the Hejian seismic zone and the yield strength envelope of the Baikal Rift System. We conclude that the seismogenic thickness is 25 km and that the main deformation mechanism in the North China Basin is brittle fracture. We make two hypotheses: (1) the dominant rheological layering of the North China Basin is similar to that of the Baikal Rift System, which can be explained with a quartz rheology at 0-10 km depth and a diabase rheology at 10-35 km

  16. Global time-size distribution of volcanic eruptions on Earth.

    Science.gov (United States)

    Papale, Paolo

    2018-05-01

    Volcanic eruptions differ enormously in their size and impacts, ranging from quiet lava flow effusions along the volcano flanks to colossal events with the potential to affect our entire civilization. Knowledge of the time and size distribution of volcanic eruptions is of obvious relevance for understanding the dynamics and behavior of the Earth system, as well as for defining global volcanic risk. From the analysis of recent global databases of volcanic eruptions extending back more than 2 million years, I show here that the return times of eruptions with similar magnitude follow an exponential distribution. The associated relative frequency of eruptions with different magnitudes displays a power-law, scale-invariant distribution over at least six orders of magnitude. These results suggest that similar mechanisms underlie explosive eruptions from small to colossal, raising concerns about the theoretical possibility of predicting the magnitude and impact of impending volcanic eruptions.
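    The exponential return-time result has a direct probabilistic reading: occurrence is memoryless, so the chance of at least one eruption of a given magnitude class in a window t depends only on the mean return time. A minimal sketch (the 100-year return time is an illustrative assumption):

```python
import math

def eruption_probability(t, mean_return_time):
    """With exponentially distributed return times, occurrence is
    memoryless: P(at least one event in window t) = 1 - exp(-t/tau)."""
    return 1.0 - math.exp(-t / mean_return_time)

# Hypothetical magnitude class with a 100-year mean return time.
print(round(eruption_probability(10.0, 100.0), 3))    # -> 0.095
print(round(eruption_probability(100.0, 100.0), 3))   # -> 0.632
```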

  17. Source modeling of the 2015 Mw 7.8 Nepal (Gorkha) earthquake sequence: Implications for geodynamics and earthquake hazards

    Science.gov (United States)

    McNamara, D. E.; Yeck, W. L.; Barnhart, W. D.; Schulte-Pelkum, V.; Bergman, E.; Adhikari, L. B.; Dixit, A.; Hough, S. E.; Benz, H. M.; Earle, P. S.

    2017-09-01

    The Gorkha earthquake on April 25th, 2015 was a long-anticipated, low-angle thrust-faulting event on the shallow décollement between the India and Eurasia plates. We present a detailed multiple-event hypocenter relocation analysis of the Mw 7.8 Gorkha Nepal earthquake sequence, constrained by local seismic stations, and a geodetic rupture model based on InSAR and GPS data. We integrate these observations to place the Gorkha earthquake sequence into a seismotectonic context and evaluate potential earthquake hazard. Major results from this study include (1) a comprehensive catalog of calibrated hypocenters for the Gorkha earthquake sequence; (2) the Gorkha earthquake ruptured a 150 × 60 km patch of the Main Himalayan Thrust (MHT), the décollement defining the plate boundary at depth, over an area surrounding but predominantly north of the capital city of Kathmandu; (3) the distribution of aftershock seismicity surrounds the mainshock maximum slip patch; (4) aftershocks occur at or below the mainshock rupture plane with depths generally increasing to the north beneath the higher Himalaya, possibly outlining a 10-15 km thick subduction channel between the overriding Eurasian and subducting Indian plates; (5) the largest Mw 7.3 aftershock and the highest concentration of aftershocks occurred to the southeast of the mainshock rupture, on a segment of the MHT décollement that was positively stressed towards failure; (6) the near-surface portion of the MHT south of Kathmandu shows no aftershocks or slip during the mainshock. Results from this study characterize the details of the Gorkha earthquake sequence and provide constraints on where earthquake hazard remains high, and thus where future, damaging earthquakes may occur in this densely populated region. Up-dip segments of the MHT should be considered high hazard for future damaging earthquakes.

  18. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    Full Text Available The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing the expected grain size distribution on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in estimating and testing procedures enable grain size distributions to be unfolded more efficiently.

  19. Frequency–magnitude distribution of −3.7 ≤ MW ≤ 1 mining-induced earthquakes around a mining front and b value invariance with post-blast time

    CSIR Research Space (South Africa)

    Naoi, M

    2014-10-01

    Full Text Available Frequency–magnitude distribution of −3.7 ≤ MW ≤ 1 mining-induced earthquakes around a mining front and b value invariance with post-blast time. Makoto Naoi, Masao Nakatani, Shigeki Horiuchi, Yasuo Yabe, Joachim Philipp, Thabang Kgarume... Ogasawara. Affiliations: Earthquake Research Institute, The University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-0032, Japan (E-mail: naoi@eri.u-tokyo.ac.jp); Home Seismometer Corp., 4-36, Uenohara, Shirakawa, Fukushima 961-0026, Japan; Research Center...

  20. The West Bohemian 2008-earthquake swarm: When, where, what size and data

    Czech Academy of Sciences Publication Activity Database

    Horálek, Josef; Fischer, Tomáš; Boušková, Alena; Michálek, Jan; Hrubcová, Pavla

    2009-01-01

    Roč. 53, č. 3 (2009), s. 351-358 ISSN 0039-3169 Institutional research plan: CEZ:AV0Z30120515 Keywords: West Bohemia/Vogtland * earthquake swarm * WEBNET Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.000, year: 2009

  1. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    International Nuclear Information System (INIS)

    Gunardi,; Setiawan, Ezra Putranda

    2015-01-01

    Indonesia is a country with a high risk of earthquakes because of its position on the boundaries of the Earth’s tectonic plates. An earthquake can cause a very high amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk from the government or the (re)insurance company, allowing it to collect enough money for implementing rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, ‘act-of-God bond’, or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and then invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is discounted or stopped, and is instead paid to the sponsor company to compensate its loss from the catastrophe event. When only earthquakes are considered, the amount by which the cash flow is discounted can be determined from the earthquake’s magnitude. A case study with Indonesian earthquake magnitude data shows that the probability distribution of the maximum magnitude can be modeled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) interest rate model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, ‘coupon only at risk’ bonds, and ‘principal and coupon at risk’ bonds. The relationship between the price of the catastrophe bond and the CIR model’s parameters, the GEV parameters, the percentage of coupon, and the discounted cash flow rule is then explored via Monte Carlo simulation

  2. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    Energy Technology Data Exchange (ETDEWEB)

    Gunardi,; Setiawan, Ezra Putranda [Mathematics Department, Gadjah Mada University (Indonesia)

    2015-12-22

    Indonesia is a country with a high risk of earthquakes because of its position on the boundaries of the Earth’s tectonic plates. An earthquake can cause a very high amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk from the government or the (re)insurance company, allowing it to collect enough money for implementing rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, ‘act-of-God bond’, or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and then invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is discounted or stopped, and is instead paid to the sponsor company to compensate its loss from the catastrophe event. When only earthquakes are considered, the amount by which the cash flow is discounted can be determined from the earthquake’s magnitude. A case study with Indonesian earthquake magnitude data shows that the probability distribution of the maximum magnitude can be modeled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) interest rate model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, ‘coupon only at risk’ bonds, and ‘principal and coupon at risk’ bonds. The relationship between the price of the catastrophe bond and the CIR model’s parameters, the GEV parameters, the percentage of coupon, and the discounted cash flow rule is then explored via Monte Carlo simulation.
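
    A minimal Monte Carlo sketch of the pricing scheme described above, for the zero-coupon case only: annual maximum magnitudes are drawn from a GEV distribution, the discount factor comes from a simulated CIR short-rate path, and the principal is lost if a trigger magnitude is exceeded before maturity. All parameter values below (GEV location/scale/shape, CIR parameters, trigger level) are illustrative assumptions, not the paper's calibrated values.

```python
import math
import random

random.seed(1)

# Illustrative parameters (assumptions, not calibrated values).
MU, SIGMA, XI = 6.0, 0.5, 0.1                    # GEV location/scale/shape
KAPPA, THETA, SIG_R, R0 = 0.5, 0.05, 0.1, 0.04   # CIR mean reversion, level, vol, r(0)
TRIGGER, T, FACE = 7.5, 3, 100.0                 # trigger magnitude, years, face value
N_PATHS, STEPS = 20000, 36                       # Monte Carlo paths, CIR time steps

def gev_sample():
    """Draw one annual maximum magnitude via the GEV quantile function."""
    u = random.random()
    return MU + SIGMA * ((-math.log(u)) ** (-XI) - 1.0) / XI

def cir_discount():
    """Euler-discretize one CIR path and return exp(-integral of r dt)."""
    dt = T / STEPS
    r, integral = R0, 0.0
    for _ in range(STEPS):
        integral += r * dt
        r += KAPPA * (THETA - r) * dt \
             + SIG_R * math.sqrt(max(r, 0.0)) * random.gauss(0.0, math.sqrt(dt))
        r = max(r, 0.0)
    return math.exp(-integral)

# Zero-coupon CAT bond: full face value at maturity unless triggered.
total = 0.0
for _ in range(N_PATHS):
    triggered = any(gev_sample() > TRIGGER for _ in range(T))
    total += cir_discount() * (0.0 if triggered else FACE)
price = total / N_PATHS
print(f"estimated zero-coupon CAT bond price: {price:.2f}")
```

    The ‘coupon only at risk’ and ‘principal and coupon at risk’ variants follow by replacing the all-or-nothing payoff with magnitude-dependent partial write-downs of the coupon stream or principal.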

  3. Personnel neutron dosimetry applications of track-size distributions on electrochemically etched CR-39 foils

    International Nuclear Information System (INIS)

    Hankins, D.E.; Homann, S.G.; Westermark, J.

    1988-01-01

    The track-size distribution on electrochemically etched CR-39 foils can be used to obtain some limited information on the incident neutron spectra. Track-size distributions on CR-39 foils can also be used to determine if the tracks were caused by neutrons or if they are merely background tracks (which have a significantly different track-size distribution). Identifying and discarding the high-background foils reduces the number of foils that must be etched. This also lowers the detection limit of the dosimetry system. We have developed an image analyzer program that can more efficiently determine the track density and track-size distribution, as well as read the laser-cut identification numbers on each foil. This new image analyzer makes the routine application of track-size distributions on CR-39 foils feasible. 2 refs., 3 figs

  4. Analysis of techniques for measurement of the size distribution of solid particles

    Directory of Open Access Journals (Sweden)

    F. O. Arouca

    2005-03-01

    Full Text Available Determination of the size distribution of solid particles is fundamental for analysis of the performance of several pieces of equipment used for solid-fluid separation. The main objective of this work is to compare the results obtained with two traditional methods for determination of the size grade distribution of powdery solids: the gamma-ray attenuation technique (GRAT) and the LADEQ test tube technique. The effect of draining the suspension in the two techniques used was also analyzed. The GRAT can supply the particle size distribution of solids through the monitoring of solid concentration in experiments on batch settling of diluted suspensions. The results show that use of the peristaltic pump in the GRAT and the LADEQ methods produced a significant difference between the values obtained for the parameters of the particle size model.

  5. Earthquakes

    Science.gov (United States)

    An earthquake happens when two blocks of the earth suddenly slip past one another. Earthquakes strike suddenly, violently, and without warning at any time of the day or night. If an earthquake occurs in a populated area, it may cause ...

  6. The characteristic of the building damage from historical large earthquakes in Kyoto

    Science.gov (United States)

    Nishiyama, Akihito

    2016-04-01

    The Kyoto city, which is located in the northern part of the Kyoto basin in Japan, has a long history of >1,200 years since the city was initially constructed. The city has been a populated area with many buildings and the center of politics, economy and culture in Japan for nearly 1,000 years. Some of these buildings are now inscribed as world cultural heritage. The Kyoto city has experienced six damaging large earthquakes during the historical period: i.e., in 976, 1185, 1449, 1596, 1662, and 1830. Among these, the last three earthquakes, which caused severe damage in Kyoto, occurred during the period in which the urban area had expanded. These earthquakes are considered to be inland earthquakes which occurred around the Kyoto basin. The damage distribution in Kyoto from historical large earthquakes is strongly controlled by ground conditions and the earthquake resistance of buildings rather than by distance from the estimated source fault. Therefore, it is necessary to consider not only the strength of ground shaking but also the condition of buildings, such as elapsed years since construction or last repair, in order to more accurately and reliably estimate seismic intensity distributions from historical earthquakes in Kyoto. The obtained seismic intensity map would be helpful for reducing and mitigating disasters from future large earthquakes.

  7. Common floor system vertical earthquake-proof structure for reactor equipment

    International Nuclear Information System (INIS)

    Morishita, Masaki.

    1996-01-01

    In an LMFBR type reactor, a reactor container, a recycling pump and a heat exchanger are disposed on a common floor. Vertical earthquake-proof devices, which can stretch only in the vertical direction and are formed by laminating large-sized belleville springs, are disposed on a concrete wall at the circumference of each reactor component. A common floor is placed on all of the vertical earthquake-proof devices so that the entire earthquake-proof structure is supported simultaneously. Since each reactor component is loaded on the common floor and the floor is supported as a whole against earthquakes, all components move identically and no relative displacement is exerted on the main pipelines connecting them. In addition, since the entire earthquake-proof structure has a flat common floor and each reactor component is suspended to minimize the distance between its center of gravity and its support point, rocking vibration is less likely to be caused by horizontal earthquakes. (N.H.)

  8. Evaluating earthquake hazards in the Los Angeles region; an earth-science perspective

    Science.gov (United States)

    Ziony, Joseph I.

    1985-01-01

    Potentially destructive earthquakes are inevitable in the Los Angeles region of California, but hazard prediction can provide a basis for reducing damage and loss. This volume identifies the principal geologically controlled earthquake hazards of the region (surface faulting, strong shaking, ground failure, and tsunamis), summarizes methods for characterizing their extent and severity, and suggests opportunities for their reduction. Two systems of active faults generate earthquakes in the Los Angeles region: northwest-trending, chiefly horizontal-slip faults, such as the San Andreas, and west-trending, chiefly vertical-slip faults, such as those of the Transverse Ranges. Faults in these two systems have produced more than 40 damaging earthquakes since 1800. Ninety-five faults have slipped in late Quaternary time (approximately the past 750,000 yr) and are judged capable of generating future moderate to large earthquakes and displacing the ground surface. Average rates of late Quaternary slip or separation along these faults provide an index of their relative activity. The San Andreas and San Jacinto faults have slip rates measured in tens of millimeters per year, but most other faults have rates of about 1 mm/yr or less. Intermediate rates of as much as 6 mm/yr characterize a belt of Transverse Ranges faults that extends from near Santa Barbara to near San Bernardino. The dimensions of late Quaternary faults provide a basis for estimating the maximum sizes of likely future earthquakes in the Los Angeles region: moment magnitude (M) 8 for the San Andreas, M 7 for the other northwest-trending elements of that fault system, and M 7.5 for the Transverse Ranges faults. Geologic and seismologic evidence along these faults, however, suggests that, for planning and designing noncritical facilities, appropriate sizes would be M 8 for the San Andreas, M 7 for the San Jacinto, M 6.5 for other northwest-trending faults, and M 6.5 to 7 for the Transverse Ranges faults. The

  9. On the size distribution of one-, two- and three-dimensional Voronoi cells

    International Nuclear Information System (INIS)

    Marthinsen, K.

    1994-03-01

    The present report gives a presentation of the different cell size distributions obtained by computer simulations of random Voronoi cell structures in one-, two- and three-dimensional space. The random Voronoi cells are constructed from cell centroids randomly distributed along a string, in the plane and in three-dimensional space, respectively. The size distributions are based on 2-3 · 10^4 cells. For the spatial polyhedra the distributions of volumes, areas and radii are all presented, and the two latter quantities are compared to the distributions of areas and radii from a planar section through the three-dimensional structure as well as to the corresponding distributions obtained from a pure two-dimensional cell structure. 11 refs., 11 figs
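
    The one-dimensional case above is simple to reproduce: for centroids scattered randomly along a string, each Voronoi cell extends between the midpoints to its neighboring centroids. A short sketch follows (the centroid rate LAM and sample size N are arbitrary choices, not the report's settings), recovering the theoretical mean cell size 1/λ.

```python
import random

random.seed(7)

# 1-D Voronoi cells from centroids dropped uniformly on a segment.
# Each interior cell runs between the midpoints of neighboring centroids;
# for a Poisson process of rate LAM the mean cell length is 1/LAM.
LAM = 1.0
N = 200000
points = sorted(random.uniform(0.0, N / LAM) for _ in range(N))

cells = []
for i in range(1, N - 1):                    # skip the two boundary cells
    left_mid = 0.5 * (points[i - 1] + points[i])
    right_mid = 0.5 * (points[i] + points[i + 1])
    cells.append(right_mid - left_mid)

mean_cell = sum(cells) / len(cells)
print(f"mean cell size: {mean_cell:.3f} (theory: {1.0 / LAM:.3f})")
```

    Collecting the `cells` list into a histogram gives the one-dimensional size distribution directly; the two- and three-dimensional cases require a geometric Voronoi tessellation but follow the same centroid-based construction.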

  10. Demonstration of the Cascadia G‐FAST geodetic earthquake early warning system for the Nisqually, Washington, earthquake

    Science.gov (United States)

    Crowell, Brendan; Schmidt, David; Bodin, Paul; Vidale, John; Gomberg, Joan S.; Hartog, Renate; Kress, Victor; Melbourne, Tim; Santillian, Marcelo; Minson, Sarah E.; Jamison, Dylan

    2016-01-01

    A prototype earthquake early warning (EEW) system is currently in development in the Pacific Northwest. We have taken a two‐stage approach to EEW: (1) detection and initial characterization using strong‐motion data with the Earthquake Alarm Systems (ElarmS) seismic early warning package and (2) the triggering of geodetic modeling modules using Global Navigation Satellite Systems data that help provide robust estimates of large‐magnitude earthquakes. In this article we demonstrate the performance of the latter, the Geodetic First Approximation of Size and Time (G‐FAST) geodetic early warning system, using simulated displacements for the 2001 Mw 6.8 Nisqually earthquake. We test the timing and performance of the two G‐FAST source characterization modules, peak ground displacement scaling, and Centroid Moment Tensor‐driven finite‐fault‐slip modeling under ideal, latent, noisy, and incomplete data conditions. We show good agreement between source parameters computed by G‐FAST with previously published and postprocessed seismic and geodetic results for all test cases and modeling modules, and we discuss the challenges with integration into the U.S. Geological Survey’s ShakeAlert EEW system.

  11. The Challenge of Centennial Earthquakes to Improve Modern Earthquake Engineering

    International Nuclear Information System (INIS)

    Saragoni, G. Rodolfo

    2008-01-01

    The recent commemoration of the centennial of the San Francisco and Valparaiso 1906 earthquakes has given the opportunity to reanalyze their damage from a modern earthquake engineering perspective. These two earthquakes, plus Messina Reggio Calabria 1908, had a strong impact on the birth and development of earthquake engineering. The study of the seismic performance of some buildings still standing today that survived centennial earthquakes represents a challenge to better understand the limitations of our in-use earthquake design methods. Of the three centennial earthquakes considered, only the Valparaiso 1906 earthquake has been repeated, as the Central Chile 1985 Ms = 7.8 earthquake. In this paper a comparative study of the damage produced by the 1906 and 1985 Valparaiso earthquakes is done in the neighborhood of Valparaiso harbor. In this study the only three centennial buildings of 3 stories that survived both earthquakes almost undamaged were identified. Since the 1985 earthquake was recorded by accelerograms both at El Almendral soil conditions and on rock, the vulnerability analysis of these buildings is done considering instrumental measurements of the demand. The study concludes that the good performance of these buildings in the epicentral zone of large earthquakes cannot be well explained by modern earthquake engineering methods. Therefore, it is recommended that more suitable instrumental parameters, such as the destructiveness potential factor, be used in the future to describe earthquake demand

  12. Polydisperse-particle-size-distribution function determined from intensity profile of angularly scattered light

    International Nuclear Information System (INIS)

    Alger, T.W.

    1979-01-01

    A new method for determining the particle-size-distribution function of a polydispersion of spherical particles is presented. The inversion technique for the particle-size-distribution function is based upon matching the measured intensity profile of angularly scattered light with a summation of the intensity contributions of a series of appropriately spaced, narrowband, size-distribution functions. A numerical optimization technique is used to determine the strengths of the individual bands that yield the best agreement with the measured scattered-light-intensity profile. Because Mie theory is used, the method is applicable to spherical particles of all sizes. Several numerical examples demonstrate the application of this inversion method

  13. Thermal and particle size distribution effects on the ferromagnetic resonance in magnetic fluids

    International Nuclear Information System (INIS)

    Marin, C.N.

    2006-01-01

    Thermal and particle size distribution effects on the ferromagnetic resonance of magnetic fluids were theoretically investigated, assuming negligible interparticle interactions and neglecting the viscosity of the carrier liquid. The model is based on the usual approach for the ferromagnetic resonance description of single-domain magnetic particle systems, which was amended in order to take into account the finite particle size effect, the particle size distribution and the orientation mobility of the particles within the magnetic fluid. Under these circumstances the shape of the resonance line, the resonance field and the line width are found to be strongly affected by the temperature and by the particle size distribution of magnetic fluids

  14. Quantification of the evolution of firm size distributions due to mergers and acquisitions

    Science.gov (United States)

    Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company’s own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes. PMID:28841683

  15. Quantification of the evolution of firm size distributions due to mergers and acquisitions.

    Science.gov (United States)

    Lera, Sandro Claudio; Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes.

  16. Quantification of the evolution of firm size distributions due to mergers and acquisitions.

    Directory of Open Access Journals (Sweden)

    Sandro Claudio Lera

    Full Text Available The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes.

  17. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternative concept of mp(T), the probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as the tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, the β-value (which equals 2/3 of the b-value in the GR distribution) and the corner magnitude (mc), can be obtained by applying the maximum likelihood method to earthquake catalogs with an additional constraint from the tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, the rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
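
    A sketch of one way to operationalize mp(T) with a tapered Gutenberg-Richter distribution: find the magnitude whose expected number of exceedances over T years equals one, by bisection on the TGR survivor function expressed in seismic moment. The β-value, corner magnitude, event rate, and time window below are illustrative assumptions, not the Cascadia estimates of this abstract.

```python
import math

# Illustrative TGR parameters (assumptions, not Cascadia estimates).
BETA = 0.65              # TGR beta-value
M_T, M_CORNER = 5.0, 9.0 # threshold and corner magnitudes
RATE_MT = 10.0           # events/yr with magnitude >= M_T
T = 500.0                # time window in years

def moment(m):
    """Seismic moment in N*m from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * m + 9.05)

def tgr_survivor(m):
    """TGR survivor function S(M) = (Mt/M)**beta * exp((Mt - M)/Mc)."""
    mt, mc = moment(M_T), moment(M_CORNER)
    x = moment(m)
    return (mt / x) ** BETA * math.exp((mt - x) / mc)

def mp(t_years):
    """Magnitude with exactly one expected exceedance in t_years (bisection)."""
    lo, hi = M_T, 10.5
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if RATE_MT * t_years * tgr_survivor(mid) > 1.0:
            lo = mid   # still more than one expected exceedance: go larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"mp({T:.0f} yr) = {mp(T):.2f}")
```

    Other operationalizations (e.g. the median of the maximum magnitude in T years) differ only in the target value of the survivor function; the bisection structure is unchanged.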

  18. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Science.gov (United States)

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  19. Statistical aspects and risks of human-caused earthquakes

    Science.gov (United States)

    Klose, C. D.

    2013-12-01

    The seismological community invests ample human capital and financial resources to research and predict risks associated with earthquakes. Industries such as the insurance and re-insurance sector are equally interested in using probabilistic risk models developed by the scientific community to transfer risks. These models are used to predict expected losses due to naturally occurring earthquakes. But what about the risks associated with human-caused earthquakes? Such risk models are largely absent from both industry and academic discourse. In countries around the world, informed citizens are becoming increasingly aware and concerned that this economic bias is not sustainable for long-term economic growth, environmental and human security. Ultimately, citizens look to their government officials to hold industry accountable. In the Netherlands, for example, the hydrocarbon industry is held accountable for causing earthquakes near Groningen. In Switzerland, geothermal power plants were shut down or suspended because they caused earthquakes in the cantons of Basel and St. Gallen. The public and the private non-extractive industry need access to information about earthquake risks in connection with sub/urban geoengineering activities, including natural gas production through fracking, geothermal energy production, carbon sequestration, mining and water irrigation. This presentation illuminates statistical aspects of human-caused earthquakes with respect to different geologic environments. Statistical findings are based on the first catalog of human-caused earthquakes (in Klose 2013). Findings discussed include the odds of dying during a medium-size earthquake that is set off by geomechanical pollution. Any kind of geoengineering activity causes this type of pollution and increases the likelihood of triggering nearby faults to rupture.

  20. Tectonics earthquake distribution pattern analysis based focal mechanisms (Case study Sulawesi Island, 1993–2012)

    International Nuclear Information System (INIS)

    Ismullah M, Muh. Fawzy; Lantu,; Aswad, Sabrianto; Massinai, Muh. Altin

    2015-01-01

    Indonesia is the meeting zone of three of the world's main plates: the Eurasian Plate, the Pacific Plate, and the Indo-Australian Plate. Therefore, Indonesia has a high degree of seismicity, and Sulawesi is one of its regions with a high seismicity level. Earthquake centers lie in fault zones, so earthquake data provide a visualization of the tectonics of a given place. The purpose of this research is to identify the tectonic model of Sulawesi by using earthquake data from 1993 to 2012. The data used in this research consist of the origin time, the epicenter coordinates, the depth, the magnitude and the fault parameters (strike, dip and slip). The results show that many active structures are responsible for earthquakes in Sulawesi. The active structures are the Walannae Fault, the Lawanopo Fault, the Matano Fault, the Palu–Koro Fault, the Batui Fault and the Moluccas Sea Double Subduction. The focal mechanisms also show that the Walannae Fault, the Batui Fault and the Moluccas Sea Double Subduction are reverse faults, while the Lawanopo Fault, the Matano Fault and the Palu–Koro Fault are strike-slip faults

  1. Tectonics earthquake distribution pattern analysis based focal mechanisms (Case study Sulawesi Island, 1993–2012)

    Energy Technology Data Exchange (ETDEWEB)

    Ismullah M, Muh. Fawzy, E-mail: mallaniung@gmail.com [Master Program Geophysical Engineering, Faculty of Mining and Petroleum Engineering (FTTM), Bandung Institute of Technology (ITB), Jl. Ganesha no. 10, Bandung, 40116, Jawa Barat (Indonesia); Lantu; Aswad, Sabrianto; Massinai, Muh. Altin [Geophysics Program Study, Faculty of Mathematics and Natural Sciences, Hasanuddin University (UNHAS), Jl. Perintis Kemerdekaan Km. 10, Makassar, 90245, Sulawesi Selatan (Indonesia)

    2015-04-24

    Indonesia lies at the meeting zone of three of the world's main plates: the Eurasian Plate, the Pacific Plate, and the Indo-Australian Plate. Indonesia therefore has a high degree of seismicity, and Sulawesi is one of its regions with a high seismicity level. Earthquake centres lie in fault zones, so earthquake data give a visualization of the tectonics of a given place. The purpose of this research is to identify a tectonic model for Sulawesi using earthquake data from 1993 to 2012. The data used consist of the origin time, the epicentre coordinates, the depth, the magnitude and the fault parameters (strike, dip and slip). The results show that many active structures are responsible for earthquakes in Sulawesi: the Walannae Fault, Lawanopo Fault, Matano Fault, Palu-Koro Fault, Batui Fault and the Moluccas Sea Double Subduction. The focal mechanisms also show that the Walannae Fault, Batui Fault and Moluccas Sea Double Subduction are reverse faults, while the Lawanopo Fault, Matano Fault and Palu-Koro Fault are strike-slip faults.

  2. Effect of head size on 10B dose distribution

    International Nuclear Information System (INIS)

    Gupta, N.; Blue, T.E.; Gahbauer, R.

    1992-01-01

    Boron neutron capture therapy (BNCT) for treatment of brain tumors is based on the utilization of large epithermal-neutron fields. Epithermal neutrons thermalize at depths of ∼2.5 cm inside the head and provide a maximum thermal fluence at deep-seated tumor sites with minimum damage to normal tissue. Brain tissue is a highly scattering medium for epithermal and thermal neutrons; therefore, a broad treatment field enables epithermal neutrons to enter the head over a large area. These neutrons slow down as they undergo scattering collisions and contribute to the thermal-neutron fluence at the tumor location. With the use of large neutron fields, the size of the head affects the thermal-neutron distribution and thereby the 10B absorbed dose distribution inside the head. In this paper, the authors describe measurements using a boron trifluoride (BF3)-filled proportional counter to determine the effect of head size on 10B absorbed dose distributions for a broad field accelerator epithermal-neutron source

  3. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  4. Size distribution of radon daughter particles in uranium mine atmospheres

    International Nuclear Information System (INIS)

    George, A.C.; Hinchliffe, L.; Sladowski, R.

    1975-01-01

    The size distribution of radon daughters was measured in several uranium mines using four compact diffusion batteries and a round jet cascade impactor. Simultaneously, measurements were made of uncombined fractions of radon daughters, radon concentration, working level, and particle concentration. The size distributions found for radon daughters were log normal. The activity median diameters ranged from 0.09 μm to 0.3 μm with a mean value of 0.17 μm. Geometric standard deviations were in the range from 1.3 to 4 with a mean value of 2.7. Uncombined fractions expressed in accordance with the ICRP definition ranged from 0.004 to 0.16 with a mean value of 0.04. The radon daughter sizes in these mines are greater than the sizes assumed by various authors in calculating respiratory tract dose. The disparity may reflect the widening use of diesel-powered equipment in large uranium mines. (U.S.)
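Because the activity size distributions are reported as lognormal, the activity fraction in any diameter interval follows directly from the activity median diameter (AMD) and geometric standard deviation (GSD). A small Python sketch using the mean reported values; the function and variable names are illustrative:

```python
import math

def lognormal_cdf(d, amd, gsd):
    """Cumulative activity fraction carried by particles with
    diameter below d (same units as amd), for a lognormal
    distribution with activity median diameter amd and geometric
    standard deviation gsd."""
    z = math.log(d / amd) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Fraction of activity between 0.09 and 0.3 um (the range of the
# observed activity median diameters), evaluated for the mean
# reported parameters AMD = 0.17 um, GSD = 2.7:
frac = lognormal_cdf(0.3, 0.17, 2.7) - lognormal_cdf(0.09, 0.17, 2.7)
```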

  5. Connection between the growth rate distribution and the size dependent crystal growth

    Science.gov (United States)

    Mitrović, M. M.; Žekić, A. A.; Ilić, Z. Z.

    2002-07-01

    The results of investigations of the connection between growth rate dispersion and the size-dependent crystal growth of potassium dihydrogen phosphate (KDP), Rochelle salt (RS) and sodium chlorate (SC) are presented. A possible way out of the existing confusion in size-dependent crystal growth investigations is suggested. It is shown that size-independent growth exists if the crystals belonging to one growth rate distribution maximum are considered separately. The investigations suggest a possible reason for the observed widths of the distribution maxima, and for the high scatter of the data on the growth rate versus crystal size dependence.

  6. The pathway to earthquake early warning in the US

    Science.gov (United States)

    Allen, R. M.; Given, D. D.; Heaton, T. H.; Vidale, J. E.; West Coast Earthquake Early Warning Development Team

    2013-05-01

    The development of earthquake early warning capabilities in the United States is now accelerating and expanding as the technical capability to provide warning is demonstrated and additional funding resources are making it possible to expand the current testing region to the entire west coast (California, Oregon and Washington). Over the course of the next two years we plan to build a prototype system that will provide a blueprint for a full public system in the US. California currently has a demonstration warning system, ShakeAlert, that provides alerts to a group of test users from the public and private sector. These include biotech companies, technology companies, the entertainment industry, the transportation sector, and the emergency planning and response community. Most groups are currently in an evaluation mode, receiving the alerts and developing protocols for future response. The Bay Area Rapid Transit (BART) system is the one group that has now implemented an automated response to the warning system: BART now stops trains when an earthquake of sufficient size is detected. Research and development also continues on improved early warning algorithms to better predict the distribution of shaking in large earthquakes, when the finiteness of the source becomes important. The algorithms under development include the use of both seismic and GPS instrumentation and integration with existing point-source algorithms. At the same time, initial testing and development of algorithms in and for the Pacific Northwest is underway. In this presentation we will review the current status of the systems, highlight the new research developments, and lay out a pathway to a full public system for the US west coast. 
The research and development described is ongoing at Caltech, UC Berkeley, University of Washington, ETH Zurich, Southern California Earthquake Center, and the US Geological Survey, and is funded by the Gordon and Betty Moore Foundation and the US Geological Survey.

  7. Twitter Seismology: Earthquake Monitoring and Response in a Social World

    Science.gov (United States)

    Bowden, D. C.; Earle, P. S.; Guy, M.; Smoczyk, G.

    2011-12-01

    The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment USGS earthquake response products and the delivery of hazard information. The potential uses of Twitter for earthquake response include broadcasting earthquake alerts, rapidly detecting widely felt events, qualitatively assessing earthquake damage effects, communicating with the public, and participating in post-event collaboration. Several seismic networks and agencies are currently distributing Twitter earthquake alerts, including the European-Mediterranean Seismological Centre (@LastQuake), Natural Resources Canada (@CANADAquakes), and the Indonesian meteorological agency (@infogempabmg); the USGS will soon distribute alerts via the @USGSted and @USGSbigquakes Twitter accounts. Beyond broadcasting alerts, the USGS is investigating how to use tweets that originate near the epicenter to detect and characterize shaking events. This is possible because people begin tweeting immediately after feeling an earthquake, and their short narratives and exclamations are available for analysis within tens of seconds of the origin time. Using five months of tweets that contain the word "earthquake" and its equivalent in other languages, we generate a tweet-frequency time series. The time series clearly shows large peaks correlated with the origin times of widely felt events. To identify possible earthquakes, we use a simple Short-Term-Average / Long-Term-Average algorithm similar to that commonly used to detect seismic phases. As with most auto-detection algorithms, the parameters can be tuned to catch more or fewer events at the cost of more or fewer false triggers. When tuned to a moderate sensitivity, the detector found 48 globally distributed, confirmed seismic events with only 2 false triggers. A space-shuttle landing and "The Great California ShakeOut" caused the false triggers. This number of
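The Short-Term-Average / Long-Term-Average detection step described above can be sketched on a tweet-count time series. A toy Python version; the window lengths, threshold, and synthetic series are illustrative, not the USGS settings:

```python
def sta_lta_triggers(counts, sta_win=3, lta_win=30, threshold=5.0):
    """Return indices where the short-term average of the count
    series exceeds `threshold` times the long-term average."""
    triggers = []
    for i in range(lta_win, len(counts)):
        lta = sum(counts[i - lta_win:i]) / lta_win
        sta = sum(counts[i - sta_win:i]) / sta_win
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers

# Flat background of 2 "earthquake" tweets per minute, with a burst
# starting at minute 40 (as after a widely felt event):
series = [2] * 60
series[40:43] = [40, 35, 20]
detections = sta_lta_triggers(series)
```

Raising `threshold` catches fewer events with fewer false triggers, and vice versa, which is the tuning trade-off the abstract describes.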

  8. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
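The non-parametric comparison of cumulative shape-descriptor distributions can be illustrated with a two-sample Kolmogorov-Smirnov statistic (one common choice; the abstract does not name the specific test). A minimal pure-Python sketch; the aspect-ratio samples below are invented, not the study's data:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical cumulative distributions."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for v in sorted(set(a) | set(b)):
        fa = sum(x <= v for x in a) / len(a)
        fb = sum(x <= v for x in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Invented aspect ratios for two hypothetical nanorod batches:
batch_1 = [3.8, 3.9, 4.0, 4.0, 4.1, 4.2]
batch_2 = [2.8, 2.9, 3.0, 3.0, 3.1, 3.2]
```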

  9. Improved Root Normal Size Distributions for Liquid Atomization

    Science.gov (United States)

    2015-11-01


  10. Health Problems and Community Participation Issues in the Earthquake of 2012, East Azerbaijan Province

    Directory of Open Access Journals (Sweden)

    Mohamad Mosaferi

    2015-08-01

    Background and Objectives: The East Azerbaijan earthquake, with a magnitude of 6.3 to 6.4 on the Richter scale, struck the cities of Varzegan, Ahar and Heris on 11 August 2012, leaving 306 dead, costing more than 8000 billion Rials, and causing irreparable damage. The present study aims to investigate and analyse the relief performance as well as the health, environmental and safety aspects after the earthquake. Material and Methods: The required data were gathered during the early days after the earthquake through presence and observation in the affected areas. In addition, the opinions of health experts were collected through interviews; the rest of the required information was collected from websites and publications. Results: In the days following the earthquake, coordination between government offices was poor and duties were not clearly assigned. The lack of accurate statistics on the permanent and non-permanent residents of the villages caused many problems in the construction of new houses. A significant feature of this earthquake was the level of community participation: people personally distributed humanitarian aid to the quake-hit areas instead of delivering it through governmental offices, which had its own advantages and disadvantages. The absence, until a week after the earthquake, of anyone specifically responsible for the installation of sanitary toilets was a significant problem in the affected areas. Other problems included difficulties with the distribution of tents, solid waste collection, the distribution of excessive bottled water and its improper storage, and the disposal of demolition waste in natural drainages. Conclusion: The situation after the earthquake indicates that, despite the presence of government forces in the affected areas, there were obvious problems, especially regarding sanitation, which call for integrated planning of post-earthquake relief.

  11. Prediction of site specific ground motion for large earthquake

    International Nuclear Information System (INIS)

    Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.

    1990-01-01

    In this paper, we apply the semi-empirical synthesis method of IRIKURA (1983, 1986) to the estimation of site-specific ground motion using accelerograms observed at Kumatori in Osaka prefecture. The target earthquakes used here are a comparatively distant earthquake (Δ=95 km, M=5.6) caused by the YAMASAKI fault and a near earthquake (Δ=27 km, M=5.6). The results obtained are as follows. 1) The accelerograms from the distant earthquake (M=5.6) are synthesized using the aftershock records (M=4.3) of the 1983 YAMASAKI fault earthquake, whose source parameters have been obtained by other authors from the hypocentral distribution of the aftershocks. The resultant synthetic motions show good agreement with the observed ones. 2) The synthesis for the near earthquake (M=5.6; we call this the target earthquake) is made using a small earthquake which occurred in the neighborhood of the target earthquake. Here, we apply two methods for setting the parameters of the synthesis. One method is to use the parameters of the YAMASAKI fault earthquake, which has the same magnitude as the target earthquake; the other is to use parameters obtained from several existing empirical formulas. The resultant synthetic motion with the former parameters shows good agreement with the observed one, but that with the latter does not. 3) We estimate the source parameters from the source spectra of several earthquakes observed at this site. Consequently we find that small earthquakes (M<4) should be used carefully as Green's functions because their stress drops are not constant. 4) We propose that not only the magnitudes but also the seismic moments of the target earthquake and the small earthquake should be designated. (J.P.N.)
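The core of such semi-empirical (empirical Green's function) synthesis is the superposition of delayed copies of a small-event record to mimic rupture over the larger fault. The toy Python sketch below reduces the method to that lagged summation only; the actual Irikura procedure also scales amplitudes by the moment ratio and applies a slip-velocity correction, both omitted here:

```python
def lagged_sum(record, n_sub, delay_samples):
    """Superpose n_sub copies of a small-event record, each delayed
    by delay_samples, mimicking sequential rupture of subfaults.
    A toy reduction of empirical Green's function summation."""
    out = [0.0] * (len(record) + (n_sub - 1) * delay_samples)
    for k in range(n_sub):
        offset = k * delay_samples
        for i, v in enumerate(record):
            out[offset + i] += v
    return out

# Three subevents with a one-sample rupture delay between them:
synthetic = lagged_sum([1.0, 0.0], 3, 1)
```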

  12. Experimental equivalent cluster-size distributions in nano-metric volumes of liquid water

    International Nuclear Information System (INIS)

    Grosswendt, B.; De Nardo, L.; Colautti, P.; Pszona, S.; Conte, V.; Tornielli, G.

    2004-01-01

    Ionisation cluster-size distributions in nano-metric volumes of liquid water were determined for alpha particles at 4.6 and 5.4 MeV by measuring cluster-size frequencies in small gaseous volumes of nitrogen or propane at low gas pressure as well as by applying a suitable scaling procedure. This scaling procedure was based on the mean free ionisation lengths of alpha particles in water and in the gases measured. For validation, the measurements of cluster sizes in gaseous volumes and the cluster-size formation in volumes of liquid water of equivalent size were simulated by Monte Carlo methods. The experimental water-equivalent cluster-size distributions in nitrogen and propane are compared with those in liquid water and show that cluster-size formation by alpha particles in nitrogen or propane can directly be related to those in liquid water. (authors)

  13. Clustered and transient earthquake sequences in mid-continents

    Science.gov (United States)

    Liu, M.; Stein, S. A.; Wang, H.; Luo, G.

    2012-12-01

    Earthquakes result from sudden release of strain energy on faults. On plate boundary faults, strain energy is constantly accumulating from steady and relatively rapid relative plate motion, so large earthquakes continue to occur so long as motion continues on the boundary. In contrast, such steady accumulation of strain energy does not occur on faults in mid-continents, because the far-field tectonic loading is not steadily distributed between faults, and because stress perturbations from complex fault interactions and other stress triggers can be significant relative to the slow tectonic stressing. Consequently, mid-continental earthquakes are often temporally clustered and transient, and spatially migrating. This behavior is well illustrated by large earthquakes in North China in the past two millennia, during which no single large earthquakes repeated on the same fault segments, but moment release between large fault systems was complementary. Slow tectonic loading in mid-continents also causes long aftershock sequences. We show that the recent small earthquakes in the Tangshan region of North China are aftershocks of the 1976 Tangshan earthquake (M 7.5), rather than indicators of a new phase of seismic activity in North China, as many fear. Understanding the transient behavior of mid-continental earthquakes has important implications for assessing earthquake hazards. The sequence of large earthquakes in the New Madrid Seismic Zone (NMSZ) in central US, which includes a cluster of M~7 events in 1811-1812 and perhaps a few similar ones in the past millennium, is likely a transient process, releasing previously accumulated elastic strain on recently activated faults. If so, this earthquake sequence will eventually end. Using simple analysis and numerical modeling, we show that the large NMSZ earthquakes may be ending now or in the near future.

  14. Sectional modeling of nanoparticle size and charge distributions in dusty plasmas

    International Nuclear Information System (INIS)

    Agarwal, Pulkit; Girshick, Steven L

    2012-01-01

    Sectional models of the dynamics of aerosol populations are well established in the aerosol literature but have received relatively less attention in numerical models of dusty plasmas, where most modeling studies have assumed the existence of monodisperse dust particles. In the case of plasmas in which nanoparticles nucleate and grow, significant polydispersity can exist in particle size distributions, and stochastic charging can cause particles of given size to have a broad distribution of charge states. Sectional models, while computationally expensive, are well suited to treating such distributions. This paper presents an overview of sectional modeling of nanodusty plasmas, and presents examples of simulation results that reveal important qualitative features of the spatiotemporal evolution of such plasmas, many of which could not be revealed by models that consider only monodisperse dust particles and average particle charge. These features include the emergence of bimodal particle populations consisting of very small neutral particles and larger negatively charged particles, the effects of size and charge distributions on coagulation, spreading and structure of the particle cloud, and the dynamics of dusty plasma afterglows. (paper)
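The sectional representation referred to above discretizes the particle-size axis into a fixed set of bins (sections), usually geometrically spaced, and tracks the population in each. A minimal Python sketch with illustrative values:

```python
def make_sections(d_min, d_max, n_sections):
    """Geometrically spaced section edges spanning [d_min, d_max]."""
    ratio = (d_max / d_min) ** (1.0 / n_sections)
    return [d_min * ratio ** k for k in range(n_sections + 1)]

def section_index(d, edges):
    """Index of the section containing diameter d."""
    for k in range(len(edges) - 1):
        if edges[k] <= d < edges[k + 1]:
            return k
    return len(edges) - 2  # clamp d == d_max into the last section

# Ten sections covering nanoparticle diameters from 1 to 100 nm:
edges = make_sections(1.0, 100.0, 10)
```

A full sectional dusty-plasma model would additionally carry a charge-state distribution per section; the binning shown here is only the size axis.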

  15. Effect of particle size and distribution of the sizing agent on the carbon fibers surface and interfacial shear strength (IFSS) of its composites

    International Nuclear Information System (INIS)

    Zhang, R.L.; Liu, Y.; Huang, Y.D.; Liu, L.

    2013-01-01

    The effect of the particle size and distribution of the sizing agent on the performance of carbon fibers and carbon fiber composites has been investigated. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to characterize carbon fiber surface topographies. At the same time, the single-fiber strength and its Weibull distribution were also studied in order to investigate the effect of the coatings on the fibers. The interfacial shear strength and hygrothermal aging of the carbon fiber/epoxy resin composites were also measured. The results indicated that the particle size and distribution are important for improving the surface of carbon fibers and the performance of its composites. Different particle sizes and distributions of the sizing agent contribute differently to the wetting performance of carbon fibers. The fibers sized with P-2 had a higher value of IFSS and better hygrothermal aging resistance.

  16. Numerical experiment on tsunami deposit distribution process by using tsunami sediment transport model in historical tsunami event of megathrust Nankai trough earthquake

    Science.gov (United States)

    Imai, K.; Sugawara, D.; Takahashi, T.

    2017-12-01

    The large flows caused by tsunamis transport sediment from beaches and form tsunami deposits on land and in coastal lakes. Tsunami deposits have been found undisturbed in coastal lakes in particular. Okamura & Matsuoka (2012) found tsunami deposits in a field survey of coastal lakes facing the Nankai Trough, and identified deposits from the past eight Nankai Trough megathrust earthquakes. The environment in coastal lakes is stably calm and suitable for the preservation of tsunami deposits compared to other topographical settings such as plains. Therefore, there is a possibility that the recurrence interval of megathrust earthquakes and tsunamis can be discussed with high resolution. In addition, it has been pointed out that small events that cannot be detected in plains could be separated finely (Sawai, 2012). Various aspects of past tsunamis are expected to be elucidated by considering the topographical conditions of coastal lakes and using the relationship between the erosion-and-sedimentation process of the lake bottom and the external force of the tsunami. In this research, a numerical examination based on a tsunami sediment transport model (Takahashi et al., 1999) was carried out for the Ryujin-ike pond site in Ohita, Japan, where a tsunami deposit was identified, and a deposit migration analysis was conducted on the tsunami deposit distribution process of historical Nankai Trough earthquakes. Furthermore, tsunami source conditions can possibly be investigated by comparing the observed data with the computed tsunami deposit distribution. It is difficult to clarify details of a tsunami source from indistinct information on paleogeographical conditions. However, this result shows that the approach can be used as a constraint on the tsunami source scale by combining the tsunami deposit distribution in lakes with computed data.

  17. X-ray diffraction microstructural analysis of bimodal size distribution MgO nano powder

    International Nuclear Information System (INIS)

    Suminar Pratapa; Budi Hartono

    2009-01-01

    An investigation of the characteristics of X-ray diffraction data for an MgO powdered mixture of nanometric and sub-nanometric particles has been carried out to reveal crystallite-size-related microstructural information. The MgO powders were prepared by a co-precipitation method followed by heat treatment at 500 degree Celsius and 1200 degree Celsius for 1 hour; the difference in temperature was used to obtain two powders with distinct crystallite sizes and size distributions. The powders were then blended in air to give a presumably bimodal-size-distribution MgO nano powder. High-quality laboratory X-ray diffraction data for the powders were collected and then analysed with the Rietveld-based MAUD software using a lognormal size distribution. The results show that the single-mode powders exhibit spherical crystallite sizes (R) of 20(1) nm and 160(1) nm for the 500 degree Celsius and 1200 degree Celsius data, respectively, with the nanometric powder displaying a narrower crystallite size distribution, indicated by a lognormal dispersion parameter of 0.21 compared to 0.01 for the sub-nanometric powder. The mixture exhibits relatively more asymmetric peak broadening. Analysing the X-ray diffraction data for the latter specimen using a single-phase approach gives unrealistic results. Introducing a two-phase model for the double-phase mixture to accommodate the bimodal-size-distribution characteristics gives R = 100(6) and σ = 0.62 for the nanometric phase and R = 170(5) and σ = 0.12 for the sub-nanometric phase. (author)
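The fitted two-phase model above amounts to a mixture of two lognormal crystallite-size densities with the reported (R, σ) pairs. A Python sketch; the equal 50/50 weighting is an assumption, since the abstract does not quote the phase fractions:

```python
import math

def lognormal_pdf(r, r_median, sigma):
    """Lognormal probability density in crystallite size r (nm) with
    median r_median and lognormal dispersion parameter sigma."""
    return math.exp(-math.log(r / r_median) ** 2 / (2.0 * sigma ** 2)) / (
        r * sigma * math.sqrt(2.0 * math.pi))

def bimodal_pdf(r, w_nano=0.5):
    """Assumed equal-weight mixture of the two fitted modes:
    R = 100 nm, sigma = 0.62 and R = 170 nm, sigma = 0.12."""
    return (w_nano * lognormal_pdf(r, 100.0, 0.62)
            + (1.0 - w_nano) * lognormal_pdf(r, 170.0, 0.12))
```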

  18. Surface properties, more than size, limiting convective distribution of virus-sized particles and viruses in the central nervous system.

    Science.gov (United States)

    Chen, Michael Y; Hoffer, Alan; Morrison, Paul F; Hamilton, John F; Hughes, Jeffrey; Schlageter, Kurt S; Lee, Jeongwu; Kelly, Brandon R; Oldfield, Edward H

    2005-08-01

    Achieving distribution of gene-carrying vectors is a major barrier to the clinical application of gene therapy. Because of the blood-brain barrier, the distribution of genetic vectors to the central nervous system (CNS) is even more challenging than delivery to other tissues. Direct intraparenchymal microinfusion, a minimally invasive technique, uses bulk flow (convection) to distribute suspensions of macromolecules widely through the extracellular space (convection-enhanced delivery [CED]). Although acute injection into solid tissue is often used for delivery of oligonucleotides, viruses, and liposomes, and there is preliminary evidence that certain of these large particles can spread through the interstitial space of the brain by the use of convection, the use of CED for distribution of viruses in the brain has not been systematically examined. That is the goal of this study. Investigators used a rodent model to examine the influence of size, osmolarity of buffering solutions, and surface coating on the volumetric distribution of virus-sized nanoparticles and viruses (adeno-associated viruses and adenoviruses) in the gray matter of the brain. The results demonstrate that channels in the extracellular space of gray matter in the brain are large enough to accommodate virus-sized particles and that the surface characteristics are critical determinants for distribution of viruses in the brain by convection. These results indicate that convective distribution can be used to distribute therapeutic viral vectors in the CNS.

  19. Collaboratory for the Study of Earthquake Predictability

    Science.gov (United States)

    Schorlemmer, D.; Jordan, T. H.; Zechar, J. D.; Gerstenberger, M. C.; Wiemer, S.; Maechling, P. J.

    2006-12-01

    Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

  20. Mexican Earthquakes and Tsunamis Catalog Reviewed

    Science.gov (United States)

    Ramirez-Herrera, M. T.; Castillo-Aja, R.

    2015-12-01

    Today the availability of information on the internet makes online catalogs very easy to access by both scholars and the public in general. The catalog in the "Significant Earthquake Database", managed by the National Center for Environmental Information (NCEI, formerly NCDC), NOAA, allows access by deploying tabular and cartographic data related to earthquakes and tsunamis contained in the database. The NCEI catalog is the product of compiling previously existing catalogs, historical sources, newspapers, and scientific articles. Because the NCEI catalog has global coverage, the information is not homogeneous. The existence of historical information depends on the presence of people in the places where a disaster occurred, and on the descriptions being preserved in documents and oral tradition. In the case of instrumental data, availability depends on the distribution and quality of seismic stations. Therefore, the availability of information for the first half of the 20th century can be improved by careful analysis of the available information and by searching for and resolving inconsistencies. This study shows the advances we made in upgrading and refining the earthquake and tsunami catalog of Mexico from 1500 CE until today, presented in the format of a table and map. Data analysis allowed us to identify the following sources of error in the location of epicenters in existing catalogs: • incorrect coordinate entry; • erroneous or mistaken place names; • data too general to locate the epicenter, mainly for older earthquakes; • inconsistency between earthquake and tsunami occurrence: earthquakes with epicenters located too far inland reported as tsunamigenic. The process of completing the catalogs depends directly on the availability of information; as new archives are opened for inspection, there are more opportunities to complete the history of large earthquakes and tsunamis in Mexico. Here, we also present new earthquake and

  1. Earthquake prediction

    International Nuclear Information System (INIS)

    Ward, P.L.

    1978-01-01

    The state of the art of earthquake prediction is summarized, the possible responses to such prediction are examined, and some needs in the present prediction program and in research related to use of this new technology are reviewed. Three basic aspects of earthquake prediction are discussed: location of the areas where large earthquakes are most likely to occur, observation within these areas of measurable changes (earthquake precursors) and determination of the area and time over which the earthquake will occur, and development of models of the earthquake source in order to interpret the precursors reliably. 6 figures

  2. Spatial distribution of the earthquakes in the Vrancea zone and tectonic correlations

    International Nuclear Information System (INIS)

    Bala, Andrei; Diaconescu, Mihai; Biter, Mircea

    2001-01-01

    The tectonic plate evolution of the whole Carpathian Arc and the Pannonian back-arc basin indicates that at least three tectonic units have been in contact and, at the same time, in relative motion: the East European plate, the Moesian plate, and the Intra-Alpine plate. All earthquake hypocentres from the period 1982-2000 situated in an area including the Vrancea zone were plotted graphically. Owing to the great number of events plotted, they were found to describe well the limits of the tectonic plate (plate fragment?) that is supposed to be subducted in this region down to 200 km depth. The hypothesis of a plate fragment delaminated from an older subduction cannot be ruled out. These limits were put in direct relation with the known geology and tectonics of the area. Available fault plane solutions for the crustal earthquakes are analyzed in correlation with the main faults of the area, and a graphic plot of the sunspot number is correlated with the occurrence of earthquakes with magnitudes greater than 5. (authors)

  3. Do detailed simulations with size-resolved microphysics reproduce basic features of observed cirrus ice size distributions?

    Science.gov (United States)

    Fridlind, A. M.; Atlas, R.; van Diedenhoven, B.; Ackerman, A. S.; Rind, D. H.; Harrington, J. Y.; McFarquhar, G. M.; Um, J.; Jackson, R.; Lawson, P.

    2017-12-01

    It has recently been suggested that seeding synoptic cirrus could have desirable characteristics as a geoengineering approach, but surprisingly large uncertainties remain in the fundamental parameters that govern cirrus properties, such as the mass accommodation coefficient, ice crystal physical properties, aggregation efficiency, and the ice nucleation rate from typical upper-tropospheric aerosol. Only one synoptic cirrus model intercomparison study has been published to date, and studies that compare the shapes of observed and simulated ice size distributions remain sparse. Here we amend a recent model intercomparison setup using observations from two 2010 SPARTICUS campaign flights. We take a quasi-Lagrangian column approach and introduce an ensemble of gravity wave scenarios derived from collocated Doppler cloud radar retrievals of vertical wind speed. We use ice crystal properties derived from in situ cloud particle images, for the first time allowing smoothly varying and internally consistent treatments of nonspherical ice capacitance, fall speed, gravitational collection, and optical properties over all particle sizes in our model. We test two new parameterizations for the mass accommodation coefficient as a function of size, temperature, and water vapor supersaturation, and several ice nucleation scenarios. Comparison of the results with in situ ice particle size distribution data, corrected using state-of-the-art algorithms to remove shattering artifacts, indicates that poorly constrained uncertainties in the number concentration of crystals smaller than 100 µm in maximum dimension still prohibit distinguishing which parameter combinations are more realistic. When projected area is concentrated at such sizes, the only parameter combination that reproduces observed size distribution properties uses a fixed mass accommodation coefficient of 0.01, on the low end of recently reported values. No simulations reproduce the observed abundance of such small crystals when the

  4. Particles size distribution effect on 3D packing of nanoparticles in to a bounded region

    International Nuclear Information System (INIS)

    Farzalipour Tabriz, M.; Salehpoor, P.; Esmaielzadeh Kandjani, A.; Vaezi, M. R.; Sadrnezhaad, S. K.

    2007-01-01

    In this paper, the effects of two different particle size distributions on the packing behavior of ideal rigid spherical nanoparticles are reported, using a novel packing model based on parallel algorithms. A Mersenne Twister algorithm was used to generate pseudo-random numbers for the particles' initial coordinates. A nano-sized tetragonal confined container with a square floor (300 * 300 nm) was used in this work. The Andreasen and lognormal particle size distributions were chosen to investigate the packing behavior in a 3D bounded region. The effects of particle number on the packing behavior of these two particle size distributions were investigated, and the reproducibility and the distribution of the packing factor were compared.
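    The coordinate-sampling step described above can be illustrated with a minimal random-sequential-addition sketch. This is not the authors' parallel algorithm; the cubic box, target count, and lognormal radius parameters are assumed for illustration. Python's `random.Random` is itself a Mersenne Twister generator, matching the pseudo-random number source named in the abstract.

```python
import math
import random

def rsa_packing(n_target, box=300.0, mu=math.log(10.0), sigma=0.3,
                max_tries=20000, seed=42):
    """Random sequential addition of non-overlapping spheres in a cubic
    box of side `box` [nm]. Radii are drawn from an assumed lognormal
    size distribution. Python's `random.Random` is a Mersenne Twister
    generator, as used in the paper for the initial coordinates."""
    rng = random.Random(seed)
    placed = []                            # (x, y, z, r) tuples
    tries = 0
    while len(placed) < n_target and tries < max_tries:
        tries += 1
        r = rng.lognormvariate(mu, sigma)
        x = rng.uniform(r, box - r)        # keep the sphere inside the box
        y = rng.uniform(r, box - r)
        z = rng.uniform(r, box - r)
        if all((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2 >= (r + s) ** 2
               for a, b, c, s in placed):  # reject overlapping candidates
            placed.append((x, y, z, r))
    volume = sum(4.0 / 3.0 * math.pi * r ** 3 for *_, r in placed)
    return placed, volume / box ** 3       # packing factor

spheres, phi = rsa_packing(200)
```

Repeating the run with different seeds gives the kind of packing-factor spread whose reproducibility the paper compares across size distributions.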

  5. Grain Size Distribution in Mudstones: A Question of Nature vs. Nurture

    Science.gov (United States)

    Schieber, J.

    2011-12-01

    Grain size distribution in mudstones is affected by the composition of the source material, the processes of transport and deposition, and post-depositional diagenetic modification. With regard to source, it makes a difference whether, for example, a slate belt or a stable craton is being eroded. The former setting tends to provide a broad range of detrital quartz in the sub-62-micron size range in addition to clays and greenschist-grade rock fragments, whereas the latter may be biased towards coarser quartz silt (30-60 microns), in addition to clays and mica flakes. In flume experiments in which fine-grained materials are transported in turbulent flows at velocities that allow floccules to transfer to bedload, a systematic shift of the grain size distribution towards an increasingly finer-grained suspended load is observed as velocity is lowered. This implies that bedload floccules are initially constructed of only the coarsest clay particles at high velocities, and that finer clay particles become incorporated into floccules as velocity is lowered. The implications for the rock record are that clay beds deposited from decelerating flows should show subtle internal grading of coarser clay particles, and that clay beds deposited from continuous fast flows should show a uniform distribution of coarse clays. Still-water-settled clays should show a well-developed lower (coarser) and upper (finer) subdivision. A final complication arises when diagenetic processes, such as the dissolution of biogenic silica, give rise to diagenetic quartz grains in the silt to sand size range. This diagenetic silica precipitates in fossil cavities and pore spaces of uncompacted muds, and on casual inspection can be mistaken for detrital quartz. In distal mudstone successions close to 100% of "apparent" quartz silt can be of that origin, and reworking by bottom currents can further enhance a detrital perception by producing rippled and laminated silt beds. Although understanding how size

  6. DEPENDENCE OF DISTRIBUTION FUNCTION OF COMMERCIAL DAMAGES DUE TO POSSIBLE EARTHQUAKES ON THE CLASS OF SEISMIC RESISTANCE OF A BUILDING

    OpenAIRE

    Hanzada R. Zajnulabidova; Alexander M. Uzdin; Tatiana M. Chirkst

    2017-01-01

    Abstract. Objectives To determine the probability of damage from earthquakes of different intensities, using the example of a real projected railway station building with a framework design scheme, on the basis of the damage distribution density function. Methods The uncertainty always existing in nature invalidates a deterministic approach to the assessment of territorial seismic hazard and, consequently, seismic risk. In this case, seismic risk assessment can be carried out on a probabilistic basis. Thu...

  7. Early-stage evolution of particle size distribution with Johnson's SB function due to Brownian coagulation

    International Nuclear Information System (INIS)

    Tang Hong; Lin Jianzhong

    2013-01-01

    The moment method can be used to determine the time evolution of a particle size distribution due to Brownian coagulation based on the general dynamic equation (GDE), but the functional form of the initial particle size distribution must be specified beforehand. If the assumed form deviates appreciably from the true particle population, the computed evolution of the particle size distribution may differ from the real tendency. Thus, a simple and general method is proposed based on the moment method. In this method, Johnson's SB function is chosen as a general distribution function to fit initial distributions including the lognormal (L-N), Rosin-Rammler (R-R), normal (N-N) and gamma distribution functions. For comparison, the modified beta function is also fitted to the L-N, R-R, N-N and gamma functions, in order to demonstrate the advantage of Johnson's SB function as the general distribution function. The time evolution of particle size distributions using Johnson's SB function as the initial distribution is then obtained from several lower-order moment equations of the Johnson's SB function in conjunction with the GDE during the Brownian coagulation process. Simulation experiments indicate that this method gives fairly reasonable results for the time evolution of the particle size distribution in the free molecule regime, the transition regime, and the continuum plus near-continuum regime at the early stage of evolution. Johnson's SB function is thus able to describe the early time evolution of different initial particle size distributions. (paper)
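    As a hedged illustration of why the Johnson SB family is a convenient general initial distribution, the sketch below (all parameter values assumed, not taken from the paper) draws samples through the inverse SB transform, whose support is bounded on (ξ, ξ+λ):

```python
import math
import random

def johnson_sb_sample(gamma, delta, xi, lam, n, seed=0):
    """Draw n samples from a Johnson SB distribution by transforming
    standard normal deviates z through the inverse SB transform
        x = xi + lam / (1 + exp(-(z - gamma) / delta)),
    so that z = gamma + delta * ln((x - xi) / (xi + lam - x)) is N(0, 1).
    The support is bounded, xi < x < xi + lam, which is what lets the SB
    family mimic lognormal, Rosin-Rammler, normal, and gamma shapes over
    a finite particle size range."""
    rng = random.Random(seed)
    return [xi + lam / (1.0 + math.exp(-(rng.gauss(0.0, 1.0) - gamma) / delta))
            for _ in range(n)]

# Assumed illustrative parameters on a 0-10 (arbitrary size unit) range
sizes = johnson_sb_sample(gamma=0.5, delta=1.2, xi=0.0, lam=10.0, n=5000)
```

Varying gamma (skewness) and delta (peakedness) morphs the sampled shape between the L-N, R-R, N-N, and gamma-like forms that the paper fits.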

  8. Application of ant colony optimisation in distribution transformer sizing

    African Journals Online (AJOL)

    HP

    Keywords: ant colony, optimization, transformer sizing, distribution transformer.

  9. Black swans, power laws, and dragon-kings: Earthquakes, volcanic eruptions, landslides, wildfires, floods, and SOC models

    Science.gov (United States)

    Sachs, M. K.; Yoder, M. R.; Turcotte, D. L.; Rundle, J. B.; Malamud, B. D.

    2012-05-01

    Extreme events that change global society have been characterized as black swans. The frequency-size distributions of many natural phenomena are often well approximated by power-law (fractal) distributions. An important question is whether the probability of extreme events can be estimated by extrapolating the power-law distributions. Events that exceed these extrapolations have been characterized as dragon-kings. In this paper we consider extreme events for earthquakes, volcanic eruptions, wildfires, landslides, and floods. We also consider the extreme-event behavior of three models that exhibit self-organized criticality (SOC): the slider-block, forest-fire, and sand-pile models. Since extrapolations using power laws are widely used in probabilistic hazard assessment, the occurrence of dragon-king events has important practical implications.
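    The power-law extrapolation at issue can be sketched as follows: fit N(>=x) = C·x^(-beta) to the cumulative number-size distribution in log-log space, then extrapolate the annual rate of events above a target size. The catalogue below is synthetic (an assumed Pareto sample over an assumed 50-year window), not data from the paper.

```python
import math
import random

def fit_power_law_ccdf(sizes):
    """Least-squares fit of the cumulative frequency-size distribution
    N(>= x) = C * x**(-beta) in log-log space, the power-law (fractal)
    form discussed above. Returns (beta, C)."""
    xs = sorted(sizes, reverse=True)
    log_x = [math.log10(x) for x in xs]                  # event size
    log_n = [math.log10(i + 1) for i in range(len(xs))]  # cumulative count
    m = len(xs)
    mean_x = sum(log_x) / m
    mean_n = sum(log_n) / m
    slope = (sum((a - mean_x) * (b - mean_n) for a, b in zip(log_x, log_n))
             / sum((a - mean_x) ** 2 for a in log_x))
    beta = -slope
    c = 10.0 ** (mean_n - slope * mean_x)
    return beta, c

# Synthetic 50-year catalogue drawn from P(X >= x) = x**-1, i.e. beta = 1
rng = random.Random(1)
losses = [1.0 / rng.random() for _ in range(2000)]
beta, C = fit_power_law_ccdf(losses)
annual_rate_above_100 = (C / 50.0) * 100.0 ** (-beta)    # extrapolated rate
```

A dragon-king event is one lying well above the straight line this fit produces on a log-log plot.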

  10. A model of litter size distribution in cattle.

    Science.gov (United States)

    Bennett, G L; Echternkamp, S E; Gregory, K E

    1998-07-01

    Genetic increases in twinning of cattle could result in increased frequency of triplet or higher-order births. There are no estimates of the incidence of triplets in populations with genetic levels of twinning over 40% because these populations either have not existed or have not been documented. A model of the distribution of litter size in cattle is proposed. Empirical estimates of ovulation rate distribution in sheep were combined with biological hypotheses about the fate of embryos in cattle. Two phases of embryo loss were hypothesized. The first phase is considered to be preimplantation. Losses in this phase occur independently (i.e., the loss of one embryo does not affect the loss of the remaining embryos). The second phase occurs after implantation. The loss of one embryo in this stage results in the loss of all embryos. Fewer than 5% triplet births are predicted when 50% of births are twins and triplets. Above 60% multiple births, increased triplets accounted for most of the increase in litter size. Predictions were compared with data from 5,142 calvings by 14 groups of heifers and cows with average litter sizes ranging from 1.14 to 1.36 calves. The predicted number of triplets was not significantly different (chi2 = 16.85, df = 14) from the observed number. The model also predicted differences in conception rates. A cow ovulating two ova was predicted to have the highest conception rate in a single breeding cycle. As mean ovulation rate increased, predicted conception to one breeding cycle increased. Conception to two or three breeding cycles decreased as mean ovulation increased because late-pregnancy failures increased. An alternative model of the fate of ova in cattle based on embryo and uterine competency predicts very similar proportions of singles, twins, and triplets but different conception rates. The proposed model of litter size distribution in cattle accurately predicts the proportion of triplets found in cattle with genetically high twinning
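    The two-phase loss hypothesis above can be sketched as a Monte Carlo simulation. This is an illustrative reconstruction, not the authors' fitted model: the ovulation-rate distribution and the survival probabilities `p1` and `p2` are assumed values.

```python
import random

def litter_size_distribution(ovulation_probs, p1, p2, n_trials=100000, seed=7):
    """Monte Carlo sketch of the two-phase embryo loss model.

    ovulation_probs: mapping {number of ova: probability}.
    Phase 1 (preimplantation): each embryo survives independently with
    probability p1. Phase 2 (post-implantation): each surviving embryo
    fails with probability p2, and a single failure aborts the whole litter.
    Returns {litter size: fraction of successful pregnancies}."""
    rng = random.Random(seed)
    ova, weights = zip(*sorted(ovulation_probs.items()))
    counts = {}
    pregnancies = 0
    for _ in range(n_trials):
        n = rng.choices(ova, weights)[0]
        survivors = sum(rng.random() < p1 for _ in range(n))
        if survivors == 0:
            continue                      # no conception this cycle
        if any(rng.random() < p2 for _ in range(survivors)):
            continue                      # late-pregnancy failure loses all
        counts[survivors] = counts.get(survivors, 0) + 1
        pregnancies += 1
    return {k: v / pregnancies for k, v in sorted(counts.items())}

# Assumed example: 40% single, 50% double, 10% triple ovulations
dist = litter_size_distribution({1: 0.4, 2: 0.5, 3: 0.1}, p1=0.8, p2=0.1)
```

With these assumed inputs the simulated triplet share stays below 5% even though 10% of ovulations are triple, illustrating how the two loss phases suppress high-order births.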

  11. The typical seismic behavior in the vicinity of a large earthquake

    Science.gov (United States)

    Rodkin, M. V.; Tikhonov, I. N.

    2016-10-01

    The Global Centroid Moment Tensor (GCMT) catalog was used to construct the spatio-temporal generalized vicinity of a large earthquake (GVLE) and to investigate the behavior of seismicity within it. The vicinity is made up of earthquakes falling into the zone of influence of a large number (100, 300, or 1000) of the largest earthquakes. The GVLE construction aims at enlarging the available statistics, diminishing the strong random component, and revealing typical features of pre- and post-shock seismic activity in more detail. As a result, the character of fore- and aftershock cascades was examined in more detail than is possible without the GVLE approach. In addition, several anomalies were identified in the behavior of a variety of earthquake parameters. The amplitudes of all these anomalies grow as the time of the generalized large earthquake (GLE) approaches, scaling with the logarithm of the time interval from the GLE occurrence. Most of the discussed anomalies agree with features expected in the evolution of an instability. In addition to these common-type precursors, one earthquake-specific precursor was found: a decrease in mean earthquake depth, presumably occurring in the smaller GVLE, probably provides evidence of a deep fluid being involved in the process. The typical features in the evolution of shear instability revealed in the GVLE agree with results obtained in laboratory studies of acoustic emission (AE). The majority of the anomalies in earthquake parameters appear to be of a secondary character, largely connected with an increase in mean magnitude and a decreasing fraction of moderate-size events (Mw 5.0-6.0) in the immediate GLE vicinity. This deficit of moderate-size events can hardly be caused entirely by incomplete reporting and presumably reflects features of the evolution of seismic instability.

  12. Particle size distribution measurements of radionuclides from Chernobyl

    International Nuclear Information System (INIS)

    Georgi, B.; Tschiersch, J.

    1988-01-01

    Characteristics of the size distribution of the Chernobyl aerosol have been measured at four locations along the trajectory of the cloud. Changes in time and differences between 131I and the other isotopes are explained by aerosol physical processes. The relevance of the measurements for dose calculations is discussed.

  13. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    Science.gov (United States)

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities (WGCEP). The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods, and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, and data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based on expected (median value) ground motion maps, called ShakeMaps, calculated for the scenario earthquake sources defined by the WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as: what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions

  14. Determination of the particle size distribution of aerosols by means of a diffusion battery

    International Nuclear Information System (INIS)

    Maigne, J.P.

    1978-09-01

    The different methods for determining the particle size distribution of aerosols by means of diffusion batteries are described. To that end, a new method for processing the experimental data (percentages of particles trapped by the battery versus flow rate) was developed on the basis of calculation principles that are described and assessed. The method was first tested by numerical simulation from a priori particle size distributions and then verified experimentally using a fine uranine aerosol, whose particle size distribution as determined by our method was compared with the distribution previously obtained by electron microscopy. The method can be applied to determining the particle size distribution spectra of fine aerosols produced by 'radiolysis' of atmospheric gaseous impurities. Two other applications concern the detection threshold of the condensation nuclei counter and the 'critical' radii of 'radiolysis' particles [fr]

  15. Shortcomings of InSAR for studying megathrust earthquakes: The case of the M w 9.0 Tohoku-Oki earthquake

    KAUST Repository

    Feng, Guangcai

    2012-05-28

    Interferometric Synthetic Aperture Radar (InSAR) observations are sometimes the only geodetic data of large subduction-zone earthquakes. However, these data usually suffer from spatially long-wavelength orbital and atmospheric errors that can be difficult to distinguish from the coseismic deformation and may therefore result in biased fault-slip inversions. To study how well InSAR constrains the fault slip of large subduction-zone earthquakes, we use data from the 11 March 2011 Tohoku-Oki earthquake (Mw 9.0) and test InSAR-derived fault-slip models against models constrained by GPS data from the extensive nationwide network in Japan. The coseismic deformation field was mapped using InSAR data acquired from multiple ascending and descending passes of the ALOS and Envisat satellites. We then estimated several fault-slip distribution models constrained by the InSAR data alone, by onland and seafloor GPS/acoustic data, or by combinations of the different data sets. Based on comparisons of the slip models, we find that there is no real gain from including InSAR observations when determining the fault-slip distribution of this earthquake. That said, some of the main fault-slip patterns can be retrieved using the InSAR data alone when long-wavelength orbital/atmospheric ramps are estimated as part of the modeling. Our final preferred fault-slip solution of the Tohoku-Oki earthquake is based only on the GPS data; it has maximum reverse- and strike-slip of 36.0 m and 6.0 m, respectively, located northeast of the epicenter at a depth of 6 km, and a total geodetic moment of 3.6 × 10²² Nm (Mw 9.01), similar to seismological estimates.

  16. Relationship between large slip area and static stress drop of aftershocks of inland earthquake :Example of the 2007 Noto Hanto earthquake

    Science.gov (United States)

    Urano, S.; Hiramatsu, Y.; Yamada, T.

    2013-12-01

    The 2007 Noto Hanto earthquake (MJMA 6.9; hereafter referred to as the main shock) occurred at 0:41 (UTC) on March 25, 2007 at a depth of 11 km beneath the west coast of the Noto Peninsula, central Japan. The dominant slip of the main shock was on a reverse fault with a right-lateral component, and the large slip area extended from the hypocenter to the shallow part of the fault plane (Horikawa, 2008). The aftershocks are distributed not only in the small slip area but also in the large slip area (Hiramatsu et al., 2011). In this study, we estimate the static stress drops of aftershocks on the fault plane of the main shock, and we discuss their relationship to the large slip area of the main shock by investigating the spatial pattern of the static stress drop values. We use the waveform data obtained by the group for the joint aftershock observations of the 2007 Noto Hanto Earthquake (Sakai et al., 2007); the sampling frequency of the waveform data is 100 Hz or 200 Hz. Focusing on the similar aftershocks reported by Hiramatsu et al. (2011), we analyze static stress drops using the empirical Green's function (EGF) method (Hough, 1997) as follows. The smallest earthquake (MJMA ≥ 2.0) of each group of similar earthquakes is taken as the EGF event, and the largest (MJMA ≥ 2.5) as the target event. We then deconvolve the waveform of the earthquake of interest with that of the EGF earthquake at each station and obtain the spectral ratio of the sources, which cancels the propagation (path and site) effects. Following the procedure of Yamada et al. (2010), we finally estimate static stress drops for P- and S-waves from the corner frequencies of the spectral ratio, using the model of Madariaga (1976). The estimated average static stress drop is 8.2 ± 1.3 MPa (8.6 ± 2.2 MPa for P-waves and 7.8 ± 1.3 MPa for S-waves). These values approximately coincide with the static stress drops of aftershocks of other
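    The corner-frequency-to-stress-drop step follows the standard circular-crack relations of Madariaga (1976). A minimal sketch; the moment, corner frequency, and shear wave speed below are chosen only for illustration, not taken from the study:

```python
def static_stress_drop(m0, fc, beta, k=0.21):
    """Static stress drop [Pa] from seismic moment m0 [N*m] and corner
    frequency fc [Hz] for a circular crack (Madariaga, 1976):
        source radius r = k * beta / fc,
        delta_sigma    = 7 * m0 / (16 * r**3),
    where beta is the shear wave speed [m/s] near the source and
    k ~ 0.32 for P-wave and ~ 0.21 for S-wave corner frequencies."""
    r = k * beta / fc
    return 7.0 * m0 / (16.0 * r ** 3)

# Assumed example values for a small aftershock: M0 ~ 10**12.8 N*m,
# fc ~ 10 Hz, shear wave speed ~ 3500 m/s
dsig = static_stress_drop(10 ** 12.8, 10.0, 3500.0)
```

With these assumed inputs the result lands in the MPa range, the same order as the averages reported in the abstract; note the strong (cubic) sensitivity to the corner frequency.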

  17. Ionospheric precursors to large earthquakes: A case study of the 2011 Japanese Tohoku Earthquake

    Science.gov (United States)

    Carter, B. A.; Kellerman, A. C.; Kane, T. A.; Dyson, P. L.; Norman, R.; Zhang, K.

    2013-09-01

    Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. Because previous works generally made different choices of ionospheric baseline, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here: the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model, and the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI, and the TIE-GCM generally underestimating, approximately describing, and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that, in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as thermospheric coupling.
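    The running 30-day median baseline mentioned above can be sketched as follows. This is an illustrative implementation; the 25% anomaly threshold and the synthetic series are assumed values, not ones used in the paper.

```python
import statistics

def flag_anomalies(fof2_daily, window=30, threshold=0.25):
    """Flag days whose foF2 deviates from the trailing `window`-day
    median by more than `threshold` (fractional), one common baseline
    choice in seismo-ionospheric studies. Returns (baseline, flags);
    the first `window` days have no baseline and are never flagged."""
    baseline, flags = [], []
    for i, v in enumerate(fof2_daily):
        if i < window:
            baseline.append(None)
            flags.append(False)
            continue
        med = statistics.median(fof2_daily[i - window:i])
        baseline.append(med)
        flags.append(abs(v - med) / med > threshold)
    return baseline, flags

# 60 quiet days around 7 MHz with one artificial 40% enhancement on day 45
series = [7.0] * 60
series[45] = 9.8
baseline, flags = flag_anomalies(series)
```

The paper's point is precisely that a day flagged this way against one baseline may be unremarkable against the IRI or TIE-GCM baselines.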

  18. Iterative inversion of phase-Doppler-anemometry size distributions from sprays of optically inhomogeneous liquids.

    Science.gov (United States)

    Köser, O; Wriedt, T

    1996-05-20

    Using phase Doppler anemometry (PDA) to investigate sprays of optically inhomogeneous liquids leads to blurred measured size distributions. The blurring function is obtained by performing PDA measurements on single-size droplets generated by a piezoelectric droplet generator. To recover the undistorted droplet size distributions, a constrained iterative inversion algorithm is applied. The number of iteration steps needed to achieve the best possible restoration is determined using synthetically generated data with noise properties similar to those of the measured histograms. The resulting size distributions are checked by comparison with undistorted measurements of an atomized optically homogeneous liquid.
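    The constrained iterative inversion can be illustrated with a Richardson-Lucy-style multiplicative update, which enforces non-negativity by construction. This is a generic stand-in under an assumed 5-bin histogram and blurring kernel, not the authors' algorithm:

```python
def richardson_lucy(measured, kernel, iterations=200):
    """Constrained iterative inversion sketch: recover an undistorted
    histogram d from a measured one m = K d, where kernel[i][j] is the
    probability that a droplet belonging in true bin j is recorded in
    measured bin i (the blurring function obtained from single-size
    droplet calibration). The multiplicative update keeps every bin
    non-negative at each step."""
    n = len(measured)
    est = [sum(measured) / n] * n                     # flat positive start
    for _ in range(iterations):
        blurred = [sum(kernel[i][j] * est[j] for j in range(n))
                   for i in range(n)]
        ratio = [measured[i] / blurred[i] if blurred[i] > 0.0 else 0.0
                 for i in range(n)]
        est = [est[j] * sum(kernel[i][j] * ratio[i] for i in range(n))
               for j in range(n)]
    return est

# Assumed 5-bin example: a single true size class blurred into neighbours
K = [[0.6 if i == j else (0.2 if abs(i - j) == 1 else 0.0) for j in range(5)]
     for i in range(5)]
true_hist = [0.0, 0.0, 100.0, 0.0, 0.0]
measured = [sum(K[i][j] * true_hist[j] for j in range(5)) for i in range(5)]
restored = richardson_lucy(measured, K)
```

The iteration count plays the same role as in the paper: too few steps leave residual blur, too many amplify noise, which is why the stopping point is tuned on synthetic data.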

  19. Earthquake forecasting test for Kanto district to reduce vulnerability of urban mega earthquake disasters

    Science.gov (United States)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2012-12-01

    Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models for Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into four testing classes (1 day, 3 months, 1 year, and 3 years) and three testing regions: an area of Japan including the sea, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP; approximately 300 rounds of experiments have been implemented. These results provide new knowledge concerning statistical forecasting models. We have started a study on constructing a 3-dimensional earthquake forecasting model for the Kanto district based on CSEP experiments, under the Special Project for Reducing Vulnerability to Urban Mega Earthquake Disasters. Because seismicity of the area ranges from shallow depths down to 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year, and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HIST-ETAS models (Ogata, 2011) to determine whether those models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP

  20. Influence of grain size distribution on dynamic shear modulus of sands

    Directory of Open Access Journals (Sweden)

    Dyka Ireneusz

    2017-11-01

    The paper presents the results of laboratory tests that verify the correlation between the grain-size characteristics of non-cohesive soils and the value of the dynamic shear modulus. The work continues research performed at the Institute of Soil Mechanics and Rock Mechanics in Karlsruhe by T. Wichtmann and T. Triantafyllidis, who extended the applicability of Hardin's equation to describe the explicit dependence of the dynamic shear modulus on the grain size distribution of sands. For this purpose, piezo-ceramic bender elements generating elastic waves were used to investigate the mechanical properties of specimens with artificially generated particle distributions. The obtained results confirmed the hypothesis that the grain size distribution of non-cohesive soils has a significant influence on the dynamic shear modulus, but at the same time they showed that obtaining unambiguous results from bender element tests is a difficult task in practical applications.
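    For reference, a Hardin-type small-strain shear modulus relation of the kind extended by Wichtmann and Triantafyllidis can be sketched as follows. The constants `A` and `n` here are illustrative placeholders, since the extension makes them (and the void-ratio term) functions of the grain size distribution, e.g. the coefficient of uniformity Cu:

```python
def hardin_gmax(e, p, A=690.0, n=0.5, p_atm=100.0):
    """Hardin-type estimate of the small-strain shear modulus:
        G_max = A * (2.97 - e)**2 / (1 + e) * p_atm**(1 - n) * p**n,
    with void ratio e and mean effective stress p [kPa]; p_atm normalizes
    the stress term so the units follow the (illustrative) constant A.
    In the Wichtmann-Triantafyllidis extension, A, n, and the void-ratio
    term depend on the grain size distribution (e.g. Cu)."""
    return A * (2.97 - e) ** 2 / (1.0 + e) * p_atm ** (1.0 - n) * p ** n

# Denser sand (lower void ratio) and higher confining stress both raise G_max
g_dense = hardin_gmax(0.55, 100.0)
g_loose = hardin_gmax(0.85, 100.0)
```

Bender element tests of the kind described above measure the shear wave velocity Vs, from which G_max = rho * Vs**2 is obtained and compared against such a relation.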

  1. Nucleation speed limit on remote fluid induced earthquakes

    Science.gov (United States)

    Parsons, Thomas E.; Akinci, Aybige; Malignini, Luca

    2017-01-01

    Earthquakes triggered by other remote seismic events are explained as a response to long-traveling seismic waves that temporarily stress the crust. However, delays of hours or days after seismic waves pass through are reported by several studies, which are difficult to reconcile with the transient stresses imparted by seismic waves. We show that these delays are proportional to magnitude and that nucleation times are best fit to a fluid diffusion process if the governing rupture process involves unlocking a magnitude-dependent critical nucleation zone. It is well established that distant earthquakes can strongly affect the pressure and distribution of crustal pore fluids. Earth’s crust contains hydraulically isolated, pressurized compartments in which fluids are contained within low-permeability walls. We know that strong shaking induced by seismic waves from large earthquakes can change the permeability of rocks. Thus, the boundary of a pressurized compartment may see its permeability rise. Previously confined, overpressurized pore fluids may then diffuse away, infiltrate faults, decrease their strength, and induce earthquakes. Magnitude-dependent delays and critical nucleation zone conclusions can also be applied to human-induced earthquakes.

  2. Size and spatial distribution of micropores in SBA-15 using CM-SANS

    International Nuclear Information System (INIS)

    Pollock, Rachel A.; Walsh, Brenna R.; Fry, Jason A.; Ghampson, Tyrone; Centikol, Ozgul; Melnichenko, Yuri B.; Kaiser, Helmut; Pynn, Roger; Frederick, Brian G.

    2011-01-01

    Diffraction intensity analysis of small-angle neutron scattering measurements of dry SBA-15 has been combined with nonlocal density functional theory (NLDFT) analysis of nitrogen desorption isotherms to characterize the micropore, secondary mesopore, and primary mesopore structure. The radial dependence of the scattering length density, which is sensitive to isolated surface hydroxyls, can only be modeled if the NLDFT pore size distribution is distributed relatively uniformly throughout the silica framework, not localized in a 'corona' around the primary mesopores. Contrast matching-small angle neutron scattering (CM-SANS) measurements, using water, decane, tributylamine, cyclohexane, and isooctane as direct probes of the size of micropores, indicate that the smallest pores in SBA-15 have diameters between 5.7 and 6.2 Å. Correlation of the minimum pore size with the onset of the micropore size distribution provides direct evidence that the shape of the smallest micropores is cylinder-like, which is consistent with their being due to unraveling of the polymer template.

  3. Rocking motion of structures under earthquakes. Overturning of 2-DOF system

    International Nuclear Information System (INIS)

    Kobayashi, Koichi; Watanabe, Tetsuya; Tanaka, Kihachiro; Tomoda, Akinori

    2011-01-01

    In recent years, huge earthquakes have occurred, for example the South Hyogo Prefecture Earthquake in 1995, the Mid Niigata Prefecture Earthquake in 2004, and the Iwate-Miyagi Nairiku Earthquake in 2008. In the Niigataken Chuetsu-oki Earthquake in 2007, hundreds of drums fell down and water spilled out. Many studies of the rocking behavior of rigid bodies have been performed since the 1960s. However, these studies addressed only specific conditions of structure size or input vibration characteristics; therefore, a generalized overturning condition for earthquakes is required. This paper deals with the analytical and experimental study of the rocking vibration of a 1-DOF rocking system, a 2-DOF vibration-rocking system and a 2-DOF rocking system under earthquakes. The equations of motion for each rocking system are developed, and the numerical model of the 2-DOF rocking system is evaluated by a free rocking experiment. An 'Overturning Map', which distinguishes whether a structure overturns or not, is proposed. The overturning map of each rocking system excited by an artificial earthquake wave calculated from the design spectrum is shown. As a result, the overturning condition of the structures is clarified. (author)

  4. Why Does Zipf's Law Break Down in Rank-Size Distribution of Cities?

    OpenAIRE

    Kuninaka, Hiroto; Matsushita, Mitsugu

    2008-01-01

    We study the rank-size distribution of cities in Japan on the basis of data analysis. From the census data after World War II, we find that the rank-size distribution of cities is composed of two parts, each of which has an independent power-law exponent. In addition, the power exponent of the head part of the distribution changes in time, and Zipf's law holds only in a restricted period. We show that Zipf's law broke down due to both the Showa and Heisei great mergers and recovered due to population grow...

  5. Zipf's law and city size distribution: A survey of the literature and future research agenda

    Science.gov (United States)

    Arshad, Sidra; Hu, Shougeng; Ashraf, Badar Nadeem

    2018-02-01

    This study provides a systematic review of the existing literature on Zipf's law for city size distribution. Existing empirical evidence suggests that Zipf's law is not always observable even for the upper-tail cities of a territory. However, the controversy with empirical findings arises due to sample selection biases, methodological weaknesses and data limitations. The hypothesis of Zipf's law is more likely to be rejected for the entire city size distribution and, in such cases, alternative distributions have been suggested. On the contrary, the hypothesis is more likely to be accepted if better empirical methods are employed and cities are properly defined. The debate is still far from conclusive. In addition, we identify four emerging areas in Zipf's law and city size distribution research including the size distribution of lower-tail cities, the size distribution of cities in sub-national regions, the alternative forms of Zipf's law, and the relationship between Zipf's law and the coherence property of the urban system.

  6. Ionospheric GPS TEC Anomalies and M >= 5.9 Earthquakes in Indonesia during 1993 - 2002

    Directory of Open Access Journals (Sweden)

    Sarmoko Saroso

    2008-01-01

    Full Text Available Indonesia is one of the most seismically active regions in the world, containing numerous active volcanoes and subject to frequent earthquakes with epicenters distributed along the same regions as volcanoes. In this paper, a case study is carried out to investigate pre-earthquake ionospheric anomalies in total electron content (TEC) during the Sulawesi earthquakes of 1993 - 2002, and the Sumatra-Andaman earthquake of 26 December 2004, the largest earthquake in the world since 1964. It is found that the ionospheric TEC decreases markedly within 2 - 7 days before the earthquakes, and for the very powerful Sumatra-Andaman earthquake, the anomalies extend up to about 1600 km from the epicenter.

  7. Computational and experimental study of the cluster size distribution in MAPLE

    International Nuclear Information System (INIS)

    Leveugle, Elodie; Zhigilei, Leonid V.; Sellinger, Aaron; Fitz-Gerald, James M.

    2007-01-01

    A combined experimental and computational study is performed to investigate the origin and characteristics of the surface features observed in SEM images of thin polymer films deposited in matrix-assisted pulsed laser evaporation (MAPLE). Analysis of high-resolution SEM images of surface morphologies of the films deposited at different fluences reveals that the mass distributions of the surface features can be well described by a power law, Y(N) ∝ N^(-t), with exponent t ≈ 1.6. Molecular dynamics simulations of the MAPLE process predict a similar size distribution for large clusters observed in the ablation plume. A weak dependence of the cluster size distributions on fluence and target composition suggests that the power-law cluster size distribution may be a general characteristic of the ablation plume generated as a result of an explosive decomposition of a target region overheated above the limit of its thermodynamic stability. Based on the simulation results, we suggest that the ejection of large matrix-polymer clusters, followed by evaporation of the volatile matrix, is responsible for the formation of the surface features observed in the polymer films deposited in MAPLE experiments.
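A power-law size histogram such as the abstract's Y(N) ∝ N^(-t) with t ≈ 1.6 can be illustrated numerically. The sketch below is hypothetical code (not from the paper): it draws synthetic cluster sizes from a power law with that exponent and recovers t by a least-squares fit on log-log axes.

```python
import numpy as np

# Illustrative sketch, assuming a pure power-law histogram Y(N) ~ N^(-t).
rng = np.random.default_rng(0)
t_true = 1.6

# Inverse-transform sampling of a continuous Pareto with density ~ x^(-t)
# (tail index a = t - 1), rounded down to integer cluster sizes N >= 1.
a = t_true - 1.0
u = rng.random(200_000)
sizes = np.floor((1.0 - u) ** (-1.0 / a)).astype(np.int64)

# Histogram Y(N) for N = 1..100 and a least-squares slope on log-log axes.
n_max = 100
counts = np.bincount(sizes[sizes <= n_max], minlength=n_max + 1)[1:]
n_vals = np.arange(1, n_max + 1)
mask = counts > 0
slope, _ = np.polyfit(np.log(n_vals[mask]), np.log(counts[mask]), 1)
t_est = -slope
print(f"estimated exponent t = {t_est:.2f}")
```

Binned log-log fitting is the simplest estimator; maximum-likelihood fits are preferred for noisy tails, but the idea is the same.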

  8. Electron impact ionization of size selected hydrogen clusters (H2)N: ion fragment and neutral size distributions.

    Science.gov (United States)

    Kornilov, Oleg; Toennies, J Peter

    2008-05-21

    Clusters consisting of normal H2 molecules, produced in a free jet expansion, are size selected by diffraction from a transmission nanograting prior to electron impact ionization. For each neutral cluster (H2)N (N = 2-40), the relative intensities of the ion fragments Hn+ are measured with a mass spectrometer. H3+ is found to be the most abundant fragment up to N=17. With a further increase in N, the abundances of H3+, H5+, H7+, and H9+ first increase and, after passing through a maximum, approach each other. At N=40, they are about the same and more than a factor of 2 and 3 larger than for H11+ and H13+, respectively. For a given neutral cluster size, the intensities of the ion fragments follow a Poisson distribution. The fragmentation probabilities are used to determine the neutral cluster size distribution produced in the expansion at a source temperature of 30.1 K and a source pressure of 1.50 bar. The distribution shows no clear evidence of a magic number N=13 as predicted by theory and found in experiments with pure para-H2 clusters. The ion fragment distributions are also used to extract information on the internal energy distribution of the H3+ ions produced in the reaction H2+ + H2 → H3+ + H, which is initiated upon ionization of the cluster. The internal energy is assumed to be rapidly equilibrated and to determine the number of molecules subsequently evaporated. The internal energy distribution found in this way is in good agreement with data obtained in an earlier independent merged beam scattering experiment.
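The Poisson form of the fragment intensities mentioned above is easy to sketch. The snippet below is illustrative only (the mean value lam is a hypothetical choice, not a number from the paper): it evaluates the Poisson probabilities P(k) = lam^k e^(-lam) / k! that would govern the relative fragment intensities for one neutral cluster size.

```python
import math

# Hypothetical illustration: Poisson-distributed fragment intensities.
lam = 3.0  # assumed mean number of evaporated molecules (not from the paper)
pmf = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(10)]
print([round(p, 3) for p in pmf])
```

For lam = 3 the distribution peaks at k = 2 and k = 3 (equal probabilities), mirroring how a few neighboring fragments can share the maximum intensity.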

  9. Prospective testing of Coulomb short-term earthquake forecasts

    Science.gov (United States)

    Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.

    2009-12-01

    Earthquake induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models to estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each “perpetrator” earthquake but before the triggered earthquakes, or “victims”. The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short term models daily, and allows daily updates of the models. However, lots can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be made by computer without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for “censoring” of early aftershock data, and a quantitative model for detection threshold as a function of

  10. Estimation of co-seismic stress change of the 2008 Wenchuan Ms8.0 earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Sun Dongsheng; Wang Hongcai; Ma Yinsheng; Zhou Chunjing [Key laboratory of Neotectonic movement and Geohazard, Ministry of Land and Resources, Beijing 100081 (China) and Institute of Geomechanics, Chinese academy of Geological Sciences, Beijing 100081 (China)

    2012-09-26

    In-situ stress change near a fault before and after a great earthquake is a key issue in the geosciences. In this work, based on a fault slip dislocation model of the 2008 great Wenchuan earthquake, the co-seismic stress tensor change due to the Wenchuan earthquake and its distribution functions around the Longmen Shan fault are given. Our calculated results are largely consistent with in-situ stress measurements made before and after the great Wenchuan earthquake. The quantitative assessment results provide a reference for the study of the mechanism of earthquakes.

  11. Passive acoustic measurement of bedload grain size distribution using self-generated noise

    Directory of Open Access Journals (Sweden)

    T. Petrut

    2018-01-01

    Full Text Available Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists to assess the stability of rivers and hydraulic structures. Various methods for sediment transport process description were proposed using conventional or surrogate measurement techniques. This paper addresses the topic of the passive acoustic monitoring of bedload transport in rivers and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the acoustic signal spectrum shape to bedload grain sizes involved in elastic impacts with the river bed treated as a massive slab. Bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectrum density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we propose an analytic model of the acoustic energy spectrum generated by the impacts between a sphere and a slab. The proposed model computes the power spectral density of bedload noise using a linear system of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least square optimization and solution regularization methods. The result of inversion leads directly to the estimation of the bedload grain size distribution. The inversion method was applied to real acoustic data from passive acoustic experiments carried out on the Isère River, in France. The inversion of in situ measured spectra reveals good estimates of the grain size distribution, fairly close to what was estimated by physical sampling instruments. These results illustrate the potential of the hydrophone technique to be used as a standalone method that could ensure high spatial and temporal resolution measurements for sediment transport in rivers.
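The regularized inversion described above amounts to solving a linear system in which the measured spectrum is a mix of per-grain-size template spectra weighted by the grain size distribution. The sketch below is a hypothetical illustration, not the authors' code: the frequencies, Gaussian template shapes, and regularization weight are all invented for demonstration of the Tikhonov-regularized least-squares step.

```python
import numpy as np

# Hypothetical forward model: measured spectrum s = G @ w + noise, where
# columns of G are per-grain-size template spectra and w is the (unknown)
# grain size distribution. All numbers below are illustrative assumptions.
rng = np.random.default_rng(1)
n_freq, n_sizes = 200, 12
freqs = np.linspace(1e3, 50e3, n_freq)               # Hz (assumed band)
peaks = np.linspace(5e3, 40e3, n_sizes)              # template peak frequencies
G = np.exp(-(((freqs[:, None] - peaks[None, :]) / 2e3) ** 2))

w_true = np.exp(-0.5 * ((np.arange(n_sizes) - 6) / 2.0) ** 2)
w_true /= w_true.sum()
s = G @ w_true + 3e-4 * rng.standard_normal(n_freq)  # noisy "measured" spectrum

# Tikhonov-regularized least squares: minimize ||G w - s||^2 + lam ||w||^2,
# solved via the normal equations (G^T G + lam I) w = G^T s.
lam = 1e-3
w_est = np.linalg.solve(G.T @ G + lam * np.eye(n_sizes), G.T @ s)
w_est = np.clip(w_est, 0.0, None)   # enforce nonnegative weights
w_est /= w_est.sum()                # renormalize to a distribution
print(np.round(w_est, 3))
```

The regularization term stabilizes the solution when the template spectra overlap strongly, at the cost of a small bias; the paper's scheme adds further constraints appropriate to real river noise.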

  12. Passive acoustic measurement of bedload grain size distribution using self-generated noise

    Science.gov (United States)

    Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien

    2018-01-01

    Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists to assess the stability of rivers and hydraulic structures. Various methods for sediment transport process description were proposed using conventional or surrogate measurement techniques. This paper addresses the topic of the passive acoustic monitoring of bedload transport in rivers and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the acoustic signal spectrum shape to bedload grain sizes involved in elastic impacts with the river bed treated as a massive slab. Bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectrum density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we propose an analytic model of the acoustic energy spectrum generated by the impacts between a sphere and a slab. The proposed model computes the power spectral density of bedload noise using a linear system of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least square optimization and solution regularization methods. The result of inversion leads directly to the estimation of the bedload grain size distribution. The inversion method was applied to real acoustic data from passive acoustic experiments carried out on the Isère River, in France. The inversion of in situ measured spectra reveals good estimates of the grain size distribution, fairly close to what was estimated by physical sampling instruments. These results illustrate the potential of the hydrophone technique to be used as a standalone method that could ensure high spatial and temporal resolution measurements for sediment transport in rivers.

  13. Heterogeneous rupture in the great Cascadia earthquake of 1700 inferred from coastal subsidence estimates

    Science.gov (United States)

    Wang, Pei-Ling; Engelhart, Simon E.; Wang, Kelin; Hawkes, Andrea D.; Horton, Benjamin P.; Nelson, Alan R.; Witter, Robert C.

    2013-01-01

    Past earthquake rupture models used to explain paleoseismic estimates of coastal subsidence during the great A.D. 1700 Cascadia earthquake have assumed a uniform slip distribution along the megathrust. Here we infer heterogeneous slip for the Cascadia margin in A.D. 1700 that is analogous to slip distributions during instrumentally recorded great subduction earthquakes worldwide. The assumption of uniform distribution in previous rupture models was due partly to the large uncertainties of then available paleoseismic data used to constrain the models. In this work, we use more precise estimates of subsidence in 1700 from detailed tidal microfossil studies. We develop a 3-D elastic dislocation model that allows the slip to vary both along strike and in the dip direction. Despite uncertainties in the updip and downdip slip extensions, the more precise subsidence estimates are best explained by a model with along-strike slip heterogeneity, with multiple patches of high-moment release separated by areas of low-moment release. For example, in A.D. 1700, there was very little slip near Alsea Bay, Oregon (~44.4°N), an area that coincides with a segment boundary previously suggested on the basis of gravity anomalies. A probable subducting seamount in this area may be responsible for impeding rupture during great earthquakes. Our results highlight the need for more precise, high-quality estimates of subsidence or uplift during prehistoric earthquakes from the coasts of southern British Columbia, northern Washington (north of 47°N), southernmost Oregon, and northern California (south of 43°N), where slip distributions of prehistoric earthquakes are poorly constrained.

  14. Salient Features of the 2015 Gorkha, Nepal Earthquake in Relation to Earthquake Cycle and Dynamic Rupture Models

    Science.gov (United States)

    Ampuero, J. P.; Meng, L.; Hough, S. E.; Martin, S. S.; Asimaki, D.

    2015-12-01

    Two salient features of the 2015 Gorkha, Nepal, earthquake provide new opportunities to evaluate models of earthquake cycle and dynamic rupture. The Gorkha earthquake broke only partially across the seismogenic depth of the Main Himalayan Thrust: its slip was confined in a narrow depth range near the bottom of the locked zone. As indicated by the belt of background seismicity and decades of geodetic monitoring, this is an area of stress concentration induced by deep fault creep. Previous conceptual models attribute such intermediate-size events to rheological segmentation along-dip, including a fault segment with intermediate rheology in between the stable and unstable slip segments. We will present results from earthquake cycle models that, in contrast, highlight the role of stress loading concentration, rather than frictional segmentation. These models produce "super-cycles" comprising recurrent characteristic events interspersed by deep, smaller non-characteristic events of overall increasing magnitude. Because the non-characteristic events are an intrinsic component of the earthquake super-cycle, the notion of Coulomb triggering or time-advance of the "big one" is ill-defined. The high-frequency (HF) ground motions produced in Kathmandu by the Gorkha earthquake were weaker than expected for such a magnitude and such a close distance to the rupture, as attested by strong motion recordings and by macroseismic data. Static slip reached close to Kathmandu but had a long rise time, consistent with control by the along-dip extent of the rupture. Moreover, the HF (1 Hz) radiation sources, imaged by teleseismic back-projection of multiple dense arrays calibrated by aftershock data, were deep and far from Kathmandu. We argue that HF rupture imaging provided a better predictor of shaking intensity than finite source inversion. The deep location of HF radiation can be attributed to rupture over heterogeneous initial stresses left by the background seismic activity

  15. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    Science.gov (United States)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
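The stretched exponential family discussed above has the survival function P(X > x) = exp(-(x/x0)^c), with a fat tail for c < 1 yet a characteristic scale x0. The hypothetical sketch below (not from the paper) draws samples from this distribution by inverse-transform sampling and recovers the two parameters via the standard linearization ln(-ln P) = c ln x - c ln x0.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper).
rng = np.random.default_rng(2)
x0, c = 10.0, 0.7

# Inverse-transform sampling: x = x0 * (-ln u)^(1/c) for u ~ Uniform(0, 1).
u = rng.random(100_000)
x = x0 * (-np.log(u)) ** (1.0 / c)

# Fit c and x0 from the empirical survival function via the linearization
# ln(-ln P(X > x)) = c * ln x - c * ln x0.
xs = np.sort(x)
surv = 1.0 - np.arange(1, xs.size + 1) / (xs.size + 1.0)
keep = (surv > 1e-4) & (surv < 0.999)
yy = np.log(-np.log(surv[keep]))
c_est, intercept = np.polyfit(np.log(xs[keep]), yy, 1)
x0_est = np.exp(-intercept / c_est)
print(f"c = {c_est:.3f}, x0 = {x0_est:.2f}")
```

This linearization is also a quick diagnostic: data that curve systematically on these axes are not well described by a single stretched exponential.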

  16. Earthquake cycle modeling of multi-segmented faults: dynamic rupture and ground motion simulation of the 1992 Mw 7.3 Landers earthquake.

    Science.gov (United States)

    Petukhin, A.; Galvez, P.; Somerville, P.; Ampuero, J. P.

    2017-12-01

    We perform earthquake cycle simulations to study the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. For earthquake cycle modeling, a quasi-dynamic solver (QDYN, Luo et al., 2016) is used to nucleate events and the fully dynamic solver (SPECFEM3D, Galvez et al., 2014, 2016) is used to simulate earthquake ruptures. The Mw 7.3 Landers earthquake has been chosen as a target earthquake to validate our methodology. The SCEC fault geometry for the three-segmented Landers rupture is included and extended at both ends to a total length of 200 km. We followed the 2-D spatially correlated Dc distributions based on Hillers et al. (2007), which associate the Dc distribution with different degrees of fault maturity. The fault maturity is related to the variability of Dc on a microscopic scale: large variations of Dc represent immature faults and lower variations of Dc represent mature faults. Moreover, we impose a taper (a-b) at the fault edges and limit the fault depth to 15 km. Using these settings, earthquake cycle simulations are performed to nucleate seismic events on different sections of the fault, and dynamic rupture modeling is used to propagate the ruptures. The fault segmentation brings complexity into the rupture process. For instance, the change of strike between fault segments enhances strong variations of stress. In fact, Oglesby and Mai (2012) show the normal stress varies from positive (clamping) to negative (unclamping) between fault segments, which leads to favorable or unfavorable conditions for rupture growth. To replicate these complexities and the effect of fault segmentation in the rupture process, we perform earthquake cycles with dynamic rupture modeling and generate events similar to the Mw 7.3 Landers earthquake. We extract the asperities of these events and analyze the scaling relations between rupture area, average slip and combined area of asperities versus moment magnitude. Finally, the

  17. Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering

    Science.gov (United States)

    Geist, Eric L.

    2012-01-01

    Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R2 ~ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
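The power-law exponents of the tsunami size distributions above are typically estimated by maximum likelihood on amplitudes exceeding a threshold. The sketch below is a simplified, hypothetical illustration: it uses a pure (untruncated) Pareto tail and the Hill/MLE estimator, whereas the chapter's modified Pareto adds an upper truncation that is omitted here for brevity.

```python
import numpy as np

# Synthetic Pareto-distributed tsunami amplitudes (illustrative numbers).
rng = np.random.default_rng(3)
x_min, alpha_true = 0.1, 1.2          # threshold amplitude (m) and tail exponent
u = rng.random(50_000)
amps = x_min * (1.0 - u) ** (-1.0 / alpha_true)

# Maximum-likelihood (Hill) estimator for a pure Pareto above x_min:
# alpha_hat = n / sum(ln(x_i / x_min)).
alpha_hat = amps.size / np.log(amps / x_min).sum()
print(f"alpha_hat = {alpha_hat:.3f}")
```

The MLE avoids the binning bias of log-log regression and has standard error roughly alpha / sqrt(n).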

  18. The size distributions of fragments ejected at a given velocity from impact craters

    Science.gov (United States)

    O'Keefe, John D.; Ahrens, Thomas J.

    1987-01-01

    The mass distribution of fragments that are ejected at a given velocity for impact craters is modeled to allow extrapolation of laboratory, field, and numerical results to large scale planetary events. The model is semi-empirical in nature and is derived from: (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, (4) measurements and theory of maximum ejecta size versus ejecta velocity, and (5) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is broad, e.g., 68 percent of the mass of the ejecta at a given velocity contains fragments having a mass less than 0.1 times the mass of the largest fragment moving at that velocity. The broad distribution suggests that in impact processes, additional comminution of ejecta occurs after the upward initial shock has passed, in the process of the ejecta velocity vector rotating from an initially downward orientation. This additional comminution produces the broader size distribution in impact ejecta as compared to that obtained in simple brittle failure experiments.

  19. The problem of predicting the size distribution of sediment supplied by hillslopes to rivers

    Science.gov (United States)

    Sklar, Leonard S.; Riebe, Clifford S.; Marshall, Jill A.; Genetti, Jennifer; Leclere, Shirin; Lukens, Claire L.; Merces, Viviane

    2017-01-01

    Sediments link hillslopes to river channels. The size of sediments entering channels is a key control on river morphodynamics across a range of scales, from channel response to human land use to landscape response to changes in tectonic and climatic forcing. However, very little is known about what controls the size distribution of particles eroded from bedrock on hillslopes, and how particle sizes evolve before sediments are delivered to channels. Here we take the first steps toward building a geomorphic transport law to predict the size distribution of particles produced on hillslopes and supplied to channels. We begin by identifying independent variables that can be used to quantify the influence of five key boundary conditions: lithology, climate, life, erosion rate, and topography, which together determine the suite of geomorphic processes that produce and transport sediments on hillslopes. We then consider the physical and chemical mechanisms that determine the initial size distribution of rock fragments supplied to the hillslope weathering system, and the duration and intensity of weathering experienced by particles on their journey from bedrock to the channel. We propose a simple modeling framework with two components. First, the initial rock fragment sizes are set by the distribution of spacing between fractures in unweathered rock, which is influenced by stresses encountered by rock during exhumation and by rock resistance to fracture propagation. That initial size distribution is then transformed by a weathering function that captures the influence of climate and mineralogy on chemical weathering potential, and the influence of erosion rate and soil depth on residence time and the extent of particle size reduction. 
Model applications illustrate how spatial variation in weathering regime can lead to bimodal size distributions and downstream fining of channel sediment by down-valley fining of hillslope sediment supply, two examples of hillslope control on

  20. Southern San Andreas Fault seismicity is consistent with the Gutenberg-Richter magnitude-frequency distribution

    Science.gov (United States)

    Page, Morgan T.; Felzer, Karen

    2015-01-01

    The magnitudes of any collection of earthquakes nucleating in a region are generally observed to follow the Gutenberg-Richter (G-R) distribution. On some major faults, however, paleoseismic rates are higher than a G-R extrapolation from the modern rate of small earthquakes would predict. This, along with other observations, led to formulation of the characteristic earthquake hypothesis, which holds that the rate of small to moderate earthquakes is permanently low on large faults relative to the large-earthquake rate (Wesnousky et al., 1983; Schwartz and Coppersmith, 1984). We examine the rate difference between recent small to moderate earthquakes on the southern San Andreas fault (SSAF) and the paleoseismic record, hypothesizing that the discrepancy can be explained as a rate change in time rather than a deviation from G-R statistics. We find that with reasonable assumptions, the rate changes necessary to bring the small and large earthquake rates into alignment agree with the size of rate changes seen in epidemic-type aftershock sequence (ETAS) modeling, where aftershock triggering of large earthquakes drives strong fluctuations in the seismicity rates for earthquakes of all magnitudes. The necessary rate changes are also comparable to rate changes observed for other faults worldwide. These results are consistent with paleoseismic observations of temporally clustered bursts of large earthquakes on the SSAF and the absence of M greater than or equal to 7 earthquakes on the SSAF since 1857.
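The Gutenberg-Richter distribution at the heart of the abstract, log10 N(>=M) = a - b*M, implies that magnitudes above the completeness threshold Mc are exponentially distributed, which yields the standard Aki (1965) maximum-likelihood b-value estimate. The sketch below uses synthetic magnitudes (hypothetical numbers, not the SSAF catalog) to illustrate the estimator.

```python
import numpy as np

# Synthetic G-R catalog: magnitudes above Mc are exponential with rate
# beta = b * ln(10). Values below are illustrative assumptions.
rng = np.random.default_rng(4)
b_true, m_c = 1.0, 2.5
beta = b_true * np.log(10.0)
mags = m_c + rng.exponential(1.0 / beta, size=20_000)

# Aki maximum-likelihood estimate for continuous magnitudes:
# b_hat = log10(e) / (mean(M) - Mc).
b_hat = np.log10(np.e) / (mags.mean() - m_c)
print(f"b_hat = {b_hat:.3f}")
```

For binned catalog magnitudes the Utsu correction (replacing Mc by Mc - dM/2) is normally applied; it is omitted here since the synthetic magnitudes are continuous.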

  1. Power law olivine crystal size distributions in lithospheric mantle xenoliths

    Science.gov (United States)

    Armienti, P.; Tarquini, S.

    2002-12-01

    Olivine crystal size distributions (CSDs) have been measured in three suites of spinel- and garnet-bearing harzburgites and lherzolites found as xenoliths in alkaline basalts from Canary Islands, Africa; Victoria Land, Antarctica; and Pali Aike, South America. The xenoliths derive from lithospheric mantle, from depths ranging from 80 to 20 km. Their textures vary from coarse to porphyroclastic and mosaic-porphyroclastic up to cataclastic. Data have been collected by processing digital images acquired optically from standard petrographic thin sections. The acquisition method is based on a high-resolution colour scanner that allows image capturing of a whole thin section. Image processing was performed using the VISILOG 5.2 package, resolving crystals larger than about 150 μm and applying stereological corrections based on the Schwartz-Saltykov algorithm. Taking account of truncation effects due to resolution limits and thin section size, all samples show scale invariance of crystal size distributions over almost three orders of magnitude (0.2-25 mm). Power law relations show fractal dimensions varying between 2.4 and 3.8, a range of values observed for distributions of fragment sizes in a variety of other geological contexts. A fragmentation model can reproduce the fractal dimensions around 2.6, which correspond to well-equilibrated granoblastic textures. Fractal dimensions >3 are typical of porphyroclastic and cataclastic samples. Slight bends in some linear arrays suggest selective tectonic crushing of crystals with size larger than 1 mm. The scale invariance shown by lithospheric mantle xenoliths in a variety of tectonic settings from distant geographic regions indicates that this is a common characteristic of the upper mantle and should be taken into account in rheological models and the evaluation of metasomatic models.

  2. Elastic energy release in great earthquakes and eruptions

    Directory of Open Access Journals (Sweden)

    Agust Gudmundsson

    2014-05-01

    Full Text Available The sizes of earthquakes are measured using well-defined, measurable quantities such as seismic moment and released (transformed) elastic energy. No similar measures exist for the sizes of volcanic eruptions, making it difficult to compare the energies released in earthquakes and eruptions. Here I provide a new measure of the elastic energy (the potential mechanical energy) associated with magma chamber rupture and contraction (shrinkage) during an eruption. For earthquakes and eruptions, elastic energy derives from two sources: (1) the strain energy stored in the volcano/fault zone before rupture, and (2) the external applied load (force, pressure, stress, displacement) on the volcano/fault zone. From thermodynamic considerations it follows that the elastic energy released or transformed (dU) during an eruption is directly proportional to the excess pressure (pe) in the magma chamber at the time of rupture multiplied by the volume decrease (-dVc) of the chamber, so that dU = -pe dVc. This formula can be used as a basis for a new eruption magnitude scale, based on elastic energy released, which can be related to the moment-magnitude scale for earthquakes. For very large eruptions (>100 km3), the volume of the feeder-dike is negligible, so that the decrease in chamber volume during an eruption corresponds roughly to the associated volume of erupted materials (Ve), so that the elastic energy is dU = pe Ve. Using a typical excess pressure of 5 MPa, it is shown that the largest known eruptions on Earth, such as the explosive La Garita Caldera eruption (27-28 million years ago) and the largest single (effusive) Columbia River basalt lava flows (15-16 million years ago), both of which have estimated volumes of about 5000 km3, released elastic energy of the order of 10 EJ. For comparison, the seismic moment of the largest earthquake ever recorded, the M9.5 1960 Chile earthquake, is estimated at 100 ZJ and the associated elastic energy release at 10 EJ.
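    As a quick sanity check of the quoted figures, assuming the large-eruption limit in which the elastic energy is the product of the excess pressure at rupture and the erupted volume (feeder-dike volume neglected):

```python
# Order-of-magnitude check of the eruption energy quoted above,
# assuming dU ~ pe * Ve for very large eruptions.
pe = 5e6          # excess pressure at chamber rupture, Pa (5 MPa)
Ve = 5000 * 1e9   # erupted volume, m^3 (5000 km^3)
dU = pe * Ve      # elastic energy released, J
print(f"{dU:.1e} J")  # 2.5e+19 J, i.e. of the order of 10 EJ (1 EJ = 1e18 J)
```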

  3. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  4. Soot Particle Size Distribution Functions in a Turbulent Non-Premixed Ethylene-Nitrogen Flame

    KAUST Repository

    Boyette, Wesley

    2017-02-21

    A scanning mobility particle sizer with a nano differential mobility analyzer was used to measure nanoparticle size distribution functions in a turbulent non-premixed flame. The burner utilizes a premixed pilot flame which anchors a C2H4/N2 (35/65) central jet with ReD = 20,000. Nanoparticles in the flame were sampled through a N2-filled tube with a 500- μm orifice. Previous studies have shown that insufficient dilution of the nanoparticles can lead to coagulation in the sampling line and skewed particle size distribution functions. A system of mass flow controllers and valves were used to vary the dilution ratio. Single-stage and two-stage dilution systems were investigated. A parametric study on the effect of the dilution ratio on the observed particle size distribution function indicates that particle coagulation in the sampling line can be eliminated using a two-stage dilution process. Carbonaceous nanoparticle (soot) concentration particle size distribution functions along the flame centerline at multiple heights in the flame are presented. The resulting distributions reveal a pattern of increasing mean particle diameters as the distance from the nozzle along the centerline increases.
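    The need for strong dilution can be illustrated with the monodisperse Smoluchowski coagulation model, in which the coagulation rate scales with the square of the number concentration; the kernel, concentration and residence time below are assumed illustrative values, not measurements from this burner:

```python
# Why two-stage dilution works: monodisperse Smoluchowski coagulation,
# dN/dt = -K*N**2 / 2, has the solution N(t) = N0 / (1 + K*N0*t/2), so
# the characteristic coagulation time tau = 2/(K*N0) scales inversely
# with concentration: diluting by 100x slows coagulation 100x.
K = 1e-15    # Brownian coagulation kernel, m^3/s (illustrative)
N0 = 1e17    # soot number concentration in the probe, 1/m^3 (assumed)

def n_after(t, n0, k=K):
    """Number concentration surviving a residence time t in the line."""
    return n0 / (1.0 + 0.5 * k * n0 * t)

t_line = 0.1  # s, residence time in the sampling line (assumed)
undiluted = n_after(t_line, N0) / N0          # fraction not coagulated
diluted = n_after(t_line, N0 / 100) / (N0 / 100)
print(undiluted, diluted)
```

With these numbers the undiluted sample loses most of its particles to coagulation in the line (skewing the measured distribution toward larger sizes), while the 100x-diluted sample is nearly unaffected.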

  6. Study of Earthquake Disaster Prediction System of Langfang city Based on GIS

    Science.gov (United States)

    Huang, Meng; Zhang, Dian; Li, Pan; Zhang, YunHui; Zhang, RuoFei

    2017-07-01

    Addressing China's need to improve its earthquake disaster prevention capability, this paper puts forward the implementation plan of an earthquake disaster prediction system for Langfang city based on GIS. Built on a GIS spatial database and using coordinate transformation technology, GIS spatial analysis technology and PHP development technology, the system applies a seismic damage factor algorithm to predict the damage to the city under earthquake disasters of different intensities. The earthquake disaster prediction system of Langfang city follows a B/S (browser/server) architecture and provides two-dimensional visualization of damage degree and its spatial distribution, comprehensive query and analysis, and efficient decision-support functions to identify seismically weak areas of the city and issue rapid warnings. The system has transformed the city's earthquake disaster reduction work from static planning to dynamic management and improved the city's earthquake and disaster prevention capability.

  7. Archiving and Distributing Seismic Data at the Southern California Earthquake Data Center (SCEDC)

    Science.gov (United States)

    Appel, V. L.

    2002-12-01

    The Southern California Earthquake Data Center (SCEDC) archives and provides public access to earthquake parametric and waveform data gathered by the Southern California Seismic Network and since January 1, 2001, the TriNet seismic network, southern California's earthquake monitoring network. The parametric data in the archive includes earthquake locations, magnitudes, moment-tensor solutions and phase picks. The SCEDC waveform archive prior to TriNet consists primarily of short-period, 100-samples-per-second waveforms from the SCSN. The addition of the TriNet array added continuous recordings of 155 broadband stations (20 samples per second or less), and triggered seismograms from 200 accelerometers and 200 short-period instruments. Since the Data Center and TriNet use the same Oracle database system, new earthquake data are available to the seismological community in near real-time. Primary access to the database and waveforms is through the Seismogram Transfer Program (STP) interface. The interface enables users to search the database for earthquake information, phase picks, and continuous and triggered waveform data. Output is available in SAC, miniSEED, and other formats. Both the raw counts format (V0) and the gain-corrected format (V1) of COSMOS (Consortium of Organizations for Strong-Motion Observation Systems) are now supported by STP. EQQuest is an interface to prepackaged waveform data sets for select earthquakes in Southern California stored at the SCEDC. Waveform data for large-magnitude events have been prepared and new data sets will be available for download in near real-time following major events. The parametric data from 1981 to present has been loaded into the Oracle 9.2.0.1 database system and the waveforms for that time period have been converted to mSEED format and are accessible through the STP interface. 
The DISC optical-disk system (the "jukebox") that currently serves as the mass-storage for the SCEDC is in the process of being replaced

  8. Particle size distribution of plutonium contaminated soil

    International Nuclear Information System (INIS)

    Zeng Ke; Wu Wangsuo; Jin Yuren; Shen Maoquan; Han Zhaoyang; Hu Zhiqian; Ma Teqi

    2012-01-01

    Wet classification and γ-ray spectroscopy were applied to study the particle size distribution of Pu in desert soil from a site in northern China. It was found that nearly 90% of the Pu exists in 0.1-10 mm particles, and less than 10% in particles under 0.05 mm, which nevertheless still pose notable hazards to the biosphere if resuspended. A decontamination target of 239Pu < 4000 Bq/kg under accident conditions is provided. (authors)

  9. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
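    For reference, the kappa statistic mentioned above compares observed classification agreement with the agreement expected by chance; a minimal computation for a presence/absence confusion matrix (the matrix values are made-up illustrative counts):

```python
def cohens_kappa(cm):
    """Cohen's kappa, (po - pe) / (1 - pe), for a square confusion
    matrix given as a list of rows: observed agreement po corrected
    for the chance agreement pe implied by the marginal totals."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n
    pe = sum(sum(cm[i]) * sum(cm[j][i] for j in range(k)) for i in range(k)) / n**2
    return (po - pe) / (1 - pe)

# rows = observed presence/absence, columns = model prediction
print(round(cohens_kappa([[20, 5], [10, 15]]), 3))  # 0.4
```

Part of the controversy is that kappa depends on prevalence through the chance-agreement term, which is one reason alternative metrics are often preferred for species distribution models.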

  10. Recent applications for rapid estimation of earthquake shaking and losses with ELER Software

    International Nuclear Information System (INIS)

    Demircioglu, M.B.; Erdik, M.; Kamer, Y.; Sesetyan, K.; Tuzun, C.

    2012-01-01

    A methodology and software package entitled Earthquake Loss Estimation Routine (ELER) was developed for rapid estimation of earthquake shaking and losses throughout the Euro-Mediterranean region. The work was carried out under the Joint Research Activity-3 (JRA3) of the EC FP6 project entitled Network of Research Infrastructures for European Seismology (NERIES). The ELER methodology involves: 1) estimation of the most likely source location of the earthquake using a regional seismotectonic database; 2) estimation of the spatial distribution of selected ground motion parameters at engineering bedrock through region-specific ground motion prediction models, bias-correcting the ground motion estimates with strong ground motion data, if available; 3) estimation of the spatial distribution of site-corrected ground motion parameters using a regional geology database and appropriate amplification models; and 4) estimation of the losses and uncertainties at various orders of sophistication (buildings, casualties). The multi-level methodology developed for real-time estimation of losses is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships coded into ELER. The present paper provides brief information on the ELER methodology and an example application to the recent major earthquake that hit the Van province in eastern Turkey on 23 October 2011 with a moment magnitude (Mw) of 7.2. For this earthquake, the Kandilli Observatory and Earthquake Research Institute (KOERI) provided near-real-time estimates of building damage and casualty distribution using ELER. (author)
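    Steps 2 and 4 of the methodology can be sketched with a generic attenuation relation and a lognormal fragility curve. The coefficients below are hypothetical placeholders chosen only to produce plausible numbers, not ELER's region-specific models:

```python
from math import exp, log, sqrt, erf

def pga_bedrock(mw, r_km, a=-1.5, b=0.5, c=1.1, d=10.0):
    """Generic attenuation form ln(PGA[g]) = a + b*Mw - c*ln(R + d).
    The coefficients a, b, c, d are hypothetical."""
    return exp(a + b * mw - c * log(r_km + d))

def p_damage(pga, median=0.3, beta=0.6):
    """Lognormal fragility curve P(damage | PGA), with a hypothetical
    median capacity of 0.3 g and log-standard deviation of 0.6."""
    z = (log(pga) - log(median)) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Mw 7.2 event (as in the Van example above), sites at 10 and 50 km:
for r in (10.0, 50.0):
    pga = pga_bedrock(7.2, r)
    print(f"R={r:>4.0f} km  PGA={pga:.2f} g  P(damage)={p_damage(pga):.2f}")
```

A real pipeline would replace both functions with calibrated regional models, add site amplification, and convolve the damage probabilities with a building inventory.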

  11. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    NARCIS (Netherlands)

    van Rijssel, Jozef; Kuipers, Bonny W M; Erne, Ben

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment.

  12. Earthquake precursory events around epicenters and local active faults; the cases of two inland earthquakes in Iran

    Science.gov (United States)

    Valizadeh Alvan, H.; Mansor, S.; Haydari Azad, F.

    2012-12-01

    source and propagation of seismic waves. In many cases, active faults are capable of the buildup and sudden release of tectonic stress. Hence, monitoring the active fault systems near the epicentral regions of past earthquakes is a necessity. In this paper, we try to detect possible anomalies in SLHF (surface latent heat flux) and AT (near-surface air temperature) during two moderate earthquakes of M 6-6.5 in Iran and explain the relationships between the seismic activity prior to these earthquakes and active faulting in the area. Our analysis shows abnormal SLHF 5-10 days before these earthquakes. Meaningful anomalous concentrations usually occurred in the epicentral area. On the other hand, the spatial distributions of these variations were in accordance with the local active faults. It is concluded that the anomalous increase in SLHF shows great potential for providing early warning of a disastrous earthquake, provided that there is a better understanding of the background noise due to the seasonal effects and climatic factors involved. Changes in near-surface air temperature along nearby active faults one or two weeks before the earthquakes, although not as significant as the SLHF changes, can be considered another earthquake indicator.

  13. Analytical Approach for Loss Minimization in Distribution Systems by Optimum Placement and Sizing of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Bakshi Surbhi

    2016-01-01

    Full Text Available Distributed generation (DG) has drawn the attention of industry and researchers for quite some time due to the advantages it brings: it is cost-effective and environmentally friendly, and it improves the reliability of the power system. A DG unit is placed close to the load rather than increasing the capacity of the main generator. This approach brings many benefits but also some challenges, the main one being to find the optimal location and size of the DG units. The purpose of this paper is to reduce line losses by adding distributed generation. The problem of optimal DG location and sizing is cast as a mathematical optimization problem through a fitness function that considers line losses and the voltage profile, and is solved with a multi-objective particle swarm optimization developed here. The fitness function is minimized to obtain the optimal values of DG size and location. The IEEE 14-bus system is considered to test the proposed algorithm, and the results show improved performance in terms of accuracy and convergence rate.
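    The particle swarm step can be sketched in miniature. The fitness below is a toy convex loss curve standing in for the paper's loss-plus-voltage objective; a real implementation would evaluate the IEEE 14-bus power flow inside f and search jointly over bus location and size:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimization on a 1-D interval [lo, hi]."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pval = x[:], [f(xi) for xi in x]   # personal bests
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]           # global best
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))  # clamp to bounds
            fi = f(x[i])
            if fi < pval[i]:
                pbest[i], pval[i] = x[i], fi
                if fi < gval:
                    gbest, gval = x[i], fi
    return gbest, gval

# Toy stand-in for the fitness function: line losses (MW) as a convex
# function of DG size, minimized at 2.5 MW (hypothetical curve).
loss = lambda s: 0.04 * (s - 2.5) ** 2 + 0.1
size, best = pso_minimize(loss, 0.0, 10.0)
print(f"optimal DG size ~ {size:.2f} MW, losses ~ {best:.3f} MW")
```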

  14. Optical extinction dependence on wavelength and size distribution of airborne dust

    Science.gov (United States)

    Pangle, Garrett E.; Hook, D. A.; Long, Brandon J. N.; Philbrick, C. R.; Hallen, Hans D.

    2013-05-01

    The optical scattering from laser beams propagating through atmospheric aerosols has been shown to be very useful in describing air pollution aerosol properties. This research explores and extends that capability to particulate matter. The optical properties of Arizona Road Dust (ARD) samples are measured in a chamber that simulates the particle dispersal of dust aerosols in the atmospheric environment. Visible, near infrared, and long wave infrared lasers are used. Optical scattering measurements show the expected dependence of laser wavelength and particle size on the extinction of laser beams. The extinction at long wavelengths demonstrates reduced scattering, but chemical absorption of dust species must be considered. The extinction and depolarization of laser wavelengths interacting with several size cuts of ARD are examined. The measurements include studies of different size distributions, and their evolution over time is recorded by an Aerodynamic Particle Sizer. We analyze the size-dependent extinction and depolarization of ARD. We present a method of predicting extinction for an arbitrary ARD size distribution. These studies provide new insights for understanding the optical propagation of laser beams through airborne particulate matter.
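    The size-distribution dependence of extinction can be illustrated by integrating the extinction cross-section over an assumed lognormal distribution. Taking the geometric-optics limit Q_ext ≈ 2 is a deliberate simplification valid only for particles much larger than the wavelength; the measured wavelength dependence and the chemical absorption discussed above require full Mie calculations. All parameter values here are illustrative, not ARD measurements:

```python
import numpy as np

# Extinction coefficient of a polydisperse aerosol:
#   sigma_ext = integral of Q_ext(lambda, r) * pi * r**2 * n(r) dr
Q_EXT = 2.0    # geometric-optics limit for r >> lambda (assumption)
N_TOT = 1e8    # number concentration, 1/m^3 (illustrative)
R_MED = 2e-6   # median radius, m (assumed, coarse-dust-like)
GSD = 1.8      # geometric standard deviation (assumed)

r = np.logspace(-7, -4, 4000)  # radii from 0.1 um to 100 um
sig = np.log(GSD)
n_r = (N_TOT / (r * sig * np.sqrt(2.0 * np.pi))
       * np.exp(-0.5 * (np.log(r / R_MED) / sig) ** 2))
y = Q_EXT * np.pi * r**2 * n_r
sigma_numeric = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))  # trapezoid rule

# cross-check: analytic lognormal moment <r^2> = R_MED**2 * exp(2*sig**2)
sigma_analytic = Q_EXT * np.pi * N_TOT * R_MED**2 * np.exp(2.0 * sig**2)
print(sigma_numeric / sigma_analytic)
```

The comparison shows that extinction is controlled by the second moment of the size distribution, which is why shifts in the large-particle tail of the ARD size cuts dominate the measured extinction.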

  15. Spatial Distribution of the Coefficient of Variation and Bayesian Forecast for the Paleo-Earthquakes in Japan

    Science.gov (United States)

    Nomura, Shunichi; Ogata, Yosihiko

    2016-04-01

    We propose a Bayesian method of probability forecasting for recurrent earthquakes on inland active faults in Japan. Renewal processes with the Brownian Passage Time (BPT) distribution are applied to over half of the active faults in Japan by the Headquarters for Earthquake Research Promotion (HERP) of Japan. Long-term forecasting with the BPT distribution needs two parameters: the mean and the coefficient of variation (COV) of recurrence intervals. HERP applies a common COV parameter to all of these faults because most of them have very few documented paleoseismic events, too few to estimate reliable COV values for the respective faults. However, different COV estimates have been proposed for the same paleoseismic catalog in related works. Applying different COV estimates can make a critical difference in the forecast, so the COV should be carefully selected for individual faults. Recurrence intervals on a fault are, on average, determined by the long-term slip rate caused by tectonic motion but are perturbed by nearby seismicity, which influences the surrounding stress field. The COVs of recurrence intervals depend on such stress perturbations and thus have spatial trends due to the heterogeneity of tectonic motion and seismicity. We therefore introduce a spatial structure on the COV parameter by Bayesian modeling with a Gaussian process prior. The COVs of active faults are correlated and take similar values for closely located faults. We find that the spatial trends in the estimated COV values coincide with the density of active faults in Japan. We also show Bayesian forecasts by the proposed model using a Markov chain Monte Carlo method. Our forecasts differ from HERP's forecasts especially on active faults where HERP's forecasts are very high or low.
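    The forecasting step can be sketched with the closed-form CDF of the BPT (inverse Gaussian) distribution with mean recurrence μ and COV α, from which the conditional probability of an event in a forecast window follows. The recurrence parameters below are illustrative, not HERP values:

```python
from math import sqrt, exp, erfc

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with mean recurrence interval mu and coefficient of variation alpha."""
    if t <= 0.0:
        return 0.0
    phi = lambda x: 0.5 * erfc(-x / sqrt(2.0))  # standard normal CDF
    u1 = (sqrt(t / mu) - sqrt(mu / t)) / alpha
    u2 = (sqrt(t / mu) + sqrt(mu / t)) / alpha
    return phi(u1) + exp(2.0 / alpha**2) * phi(-u2)

def conditional_prob(elapsed, window, mu, alpha):
    """P(event in (elapsed, elapsed+window] | no event by 'elapsed')."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

# Example: mean recurrence 1000 yr, COV 0.24, 30-year forecast windows
# at two different elapsed times since the last event:
for elapsed in (600.0, 1200.0):
    p = conditional_prob(elapsed, 30.0, 1000.0, 0.24)
    print(f"elapsed {elapsed:.0f} yr -> 30-yr probability {p:.3f}")
```

The example shows why the COV choice matters: the conditional probability is sharply peaked around the mean recurrence for small COV and flattens out for large COV, so different COV estimates yield very different hazard levels for the same elapsed time.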

  16. New Measurements of the Particle Size Distribution of Apollo 11 Lunar Soil 10084

    Science.gov (United States)

    McKay, D.S.; Cooper, B.L.; Riofrio, L.M.

    2009-01-01

    We have initiated a major new program to determine the grain size distribution of nearly all lunar soils collected in the Apollo program. Following the return of Apollo soil and core samples, a number of investigators including our own group performed grain size distribution studies and published the results [1-11]. Nearly all of these studies were done by sieving the samples, usually with a working fluid such as Freon™ or water. We have measured the particle size distribution of lunar soil 10084,2005 in water, using a Microtrac™ laser diffraction instrument. Details of our own sieving technique and protocol (also used in [11]) are given in [4]. While sieving usually produces accurate and reproducible results, it has disadvantages. It is very labor intensive and requires hours to days to perform properly. Even using automated sieve shaking devices, four or five days may be needed to sieve each sample, although multiple sieve stacks increase productivity. Second, sieving is subject to loss of grains through handling and weighing operations, and these losses are concentrated in the finest grain sizes. Loss from handling becomes a more acute problem when smaller amounts of material are used. While we were able to quantitatively sieve into 6 or 8 size fractions using starting soil masses as low as 50 mg, attrition and handling problems limit the practicality of sieving smaller amounts. Third, sieving below 10 or 20 microns is not practical because of the problems of grain loss, and smaller grains sticking to coarser grains. Sieving is completely impractical below about 5-10 microns. Consequently, sieving gives no information on the size distribution below approx. 10 microns, which includes the important submicrometer and nanoparticle size ranges. Finally, sieving creates a limited number of size bins and may therefore miss fine structure of the distribution which would be revealed by other methods that produce many smaller size bins.

  17. Theory of Nanocluster Size Distributions from Ion Beam Synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, C.W.; Yi, D.O.; Sharp, I.D.; Shin, S.J.; Liao, C.Y.; Guzman, J.; Ager III, J.W.; Haller, E.E.; Chrzan, D.C.

    2008-06-13

    Ion beam synthesis of nanoclusters is studied via both kinetic Monte Carlo simulations and the self-consistent mean-field solution to a set of coupled rate equations. Both approaches predict the existence of a steady state shape for the cluster size distribution that depends only on a characteristic length determined by the ratio of the effective diffusion coefficient to the ion flux. The average cluster size in the steady state regime is determined by the implanted species/matrix interface energy.

  18. Particle size distribution of iron nanomaterials in biological medium by SR-SAXS method

    International Nuclear Information System (INIS)

    Jing Long; Feng Weiyue; Wang Bing; Wang Meng; Ouyang Hong; Zhao Yuliang; Chai Zhifang; Wang Yun; Wang Huajiang; Zhu Motao; Wu Zhonghua

    2009-01-01

    A better understanding of the biological effects of nanomaterials in organisms requires knowledge of the physicochemical properties of the nanomaterials in biological systems. Affected by the high concentrations of salts and proteins in biological media, nanoparticles agglomerate easily, hence the difficulty in characterizing the size distribution of nanomaterials in biological media. In this work, synchrotron radiation small-angle X-ray scattering (SR-SAXS) was used to determine the size distributions of Fe, Fe2O3 and Fe3O4 nanoparticles of various concentrations in PBS and DMEM culture medium. The results show that the size distributions of the nanomaterials could be well analyzed by SR-SAXS. The SR-SAXS data were not affected by the particle content or the type of dispersion medium. It is concluded that SR-SAXS can be used for size measurement of nanomaterials in unstable dispersion systems. (authors)

  19. The seismic cycles of large Romanian earthquake: The physical foundation, and the next large earthquake in Vrancea

    International Nuclear Information System (INIS)

    Purcaru, G.

    2002-01-01

    The occurrence patterns of large/great earthquakes at subduction zone interfaces and in-slab are complex in their space-time dynamics, and make even long-term forecasts very difficult. For some favourable cases where a predictive (empirical) law was found, successful predictions were possible (e.g. Aleutians, Kuriles, etc.). For the large Romanian events (M > 6.7), occurring in the Vrancea seismic slab below 60 km, Purcaru (1974) first found the law of occurrence time and magnitude: the law of 'quasicycles' and 'supercycles', for large and largest events (M > 7.25), respectively. The quantitative model of Purcaru with these seismic cycles has three time-bands (periods of large earthquakes) per century, discovered using the earthquake history (1100-1973) (however incomplete) of large Vrancea earthquakes for which M was initially estimated (Purcaru, 1974, 1979). Our long-term prediction model is essentially quasi-deterministic: it predicts the time and magnitude uniquely; since it is not strictly deterministic, the forecast is interval-valued. It predicted the next large earthquake in 1980, within the 3rd time-band (1970-1990); the event occurred in 1977 (M7.1, Mw 7.5). The prediction was successful in the long-term sense. We discuss the unpredicted events of 1986 and 1990. Since the laws are phenomenological, we give their physical foundation based on the large scale of the rupture zone (RZ) and the subscale of the rupture process (RP). First results show that: (1) the 1940 event (h=122 km) ruptured the lower part of the oceanic slab entirely along strike and down dip, and similarly for 1977 but in its upper part; (2) the RZs of the 1977 and 1990 events overlap, and the first asperity of the 1977 event was rebroken in 1990. This shows that the size of the events depends strongly on the RZ, asperity size/strength and thus on the failure stress level (FSL), but not on depth; (3) when the FSL of high-strength (HS) larger zones is critical, the largest events (e.g. 1802, 1940) occur, thus explaining the supercycles (the 1940

  20. Particle size distribution of UO2 aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, B. (Radiation Safety Systems Div., BARC, Bombay (India)); Ramachandran, R.; Majumdar, S. (Radiometallurgy Div., BARC, Bombay (India))

    1991-12-01

    The Andersen cascade impactor has been used to determine the activity mean aerodynamic diameter and the particle size distribution of UO{sub 2} powders dispersed in the form of stable aerosols in an air medium. The UO{sub 2} powders obtained by the calcination of ammonium uranyl carbonate (AUC) and ammonium diuranate (ADU) precipitates have been used. (orig./MM).

  1. Global variations of large megathrust earthquake rupture characteristics

    Science.gov (United States)

    Kanamori, Hiroo

    2018-01-01

    Despite the surge of great earthquakes along subduction zones over the last decade and advances in observations and analysis techniques, it remains unclear whether earthquake complexity is primarily controlled by persistent fault properties or by dynamics of the failure process. We introduce the radiated energy enhancement factor (REEF), given by the ratio of an event’s directly measured radiated energy to the calculated minimum radiated energy for a source with the same seismic moment and duration, to quantify the rupture complexity. The REEF measurements for 119 large [moment magnitude (Mw) 7.0 to 9.2] megathrust earthquakes distributed globally show marked systematic regional patterns, suggesting that the rupture complexity is strongly influenced by persistent geological factors. We characterize this as the existence of smooth and rough rupture patches with varying interpatch separation, along with failure dynamics producing triggering interactions that augment the regional influences on large events. We present an improved asperity scenario incorporating both effects and categorize global subduction zones and great earthquakes based on their REEF values and slip patterns. Giant earthquakes rupturing over several hundred kilometers can occur in regions with low-REEF patches and small interpatch spacing, such as for the 1960 Chile, 1964 Alaska, and 2011 Tohoku earthquakes, or in regions with high-REEF patches and large interpatch spacing as in the case for the 2004 Sumatra and 1906 Ecuador-Colombia earthquakes. Thus, combining seismic magnitude Mw and REEF, we provide a quantitative framework to better represent the span of rupture characteristics of great earthquakes and to understand global seismicity. PMID:29750186

  2. Sendai-Okura earthquake swarm induced by the 2011 Tohoku-Oki earthquake in the stress shadow of NE Japan: Detailed fault structure and hypocenter migration

    Science.gov (United States)

    Yoshida, Keisuke; Hasegawa, Akira

    2018-05-01

    We investigated the distribution and migration of hypocenters of an earthquake swarm that occurred in Sendai-Okura (NE Japan) 15 days after the 2011 M9.0 Tohoku-Oki earthquake, despite the decrease in shear stress due to the static stress change. Hypocenters of 2476 events listed in the JMA catalogue were relocated based on the JMA unified catalogue data in conjunction with data obtained by waveform cross correlation. Hypocenter relocation was successful in delineating several thin planar structures, although the original hypocenters presented a cloud-like distribution. The hypocenters of this swarm event migrated along several planes from deeper to shallower levels rather than diffusing three-dimensionally. One of the nodal planes of the focal mechanisms was nearly parallel to the planar structure of the hypocenters, supporting the idea that each earthquake occurred by causing slip on parts of the same plane. The overall migration velocity of the hypocenters could be explained by the fluid diffusion model with a typical value of hydraulic diffusivity (0.15 m2/s); however, the occurrence of some burst-like activity with much higher migration velocity suggests the possibility that aseismic slip also contributed to triggering the earthquakes. We suggest that the 2011 Sendai-Okura earthquake swarm was generated as follows. (1) The 2011 Tohoku-Oki earthquake caused WNW-ESE extension in the focal region of the swarm, which accordingly reduced shear stress on the fault planes. However, the WNW-ESE extension allowed fluids to move upward from the S-wave reflectors in the mid-crust immediately beneath the focal region. (2) The fluids rising from the mid-crust intruded into several existing planes, which reduced their frictional strengths and caused the observed earthquake swarm. (3) The fluids, and accordingly, the hypocenters of the triggered earthquakes, migrated upward along the fault planes. It is possible that the fluids also triggered aseismic slip, which caused
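    For scale, the triggering-front relation commonly used with such fluid-diffusion models (Shapiro-type scaling), r(t) = sqrt(4πDt), gives the migration distance implied by the quoted diffusivity; the 15-day interval below is chosen purely for illustration:

```python
from math import pi, sqrt

# Triggering-front scaling for fluid-diffusion-driven seismicity:
#   r(t) = sqrt(4 * pi * D * t)
D = 0.15                   # hydraulic diffusivity, m^2/s (quoted above)
t = 15 * 86400             # illustrative migration duration: 15 days, in s
r = sqrt(4 * pi * D * t)   # distance of the migration front, m
print(f"{r / 1000:.2f} km")  # ~1.6 km over 15 days
```

Kilometre-scale upward migration over a couple of weeks is consistent with fluids rising along fault planes from a mid-crustal source, while the burst-like episodes noted above migrated too fast for diffusion alone, hence the inferred role of aseismic slip.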

  3. The measurement of activity-weighted size distributions of radon progeny: methods and laboratory intercomparison studies

    International Nuclear Information System (INIS)

    Hopke, P.K.; Strydom, R.; Ramamurthi, M.; Knutson, E.O.; Tu, K.W.; Scofield, P.; Holub, R.F.; Cheng, Y.S.; Su, Y.F.; Winklmayr, W.

    1992-01-01

    Over the past 5 y, there have been significant improvements in measurement of activity-weighted size distributions of airborne radon decay products. The modification of screen diffusion batteries to incorporate multiple screens of differing mesh number, called graded screen arrays, have permitted improved size resolution below 10 nm such that the size distributions can now be determined down to molecular sized activities (0.5 nm). In order to ascertain the utility and reliability of such systems, several intercomparison tests have been performed in a 2.4 m3 radon chamber in which particles of varying size have been produced by introducing SO2 and H2O along with the radon to the chamber. In April 1988, intercomparison studies were performed between direct measurements of the activity-weighted size distributions as measured by graded screen arrays and an indirect measurement of the distribution obtained by measuring the number size distribution with a differential mobility analyzer and multiplying by the theoretical attachment rate. Good agreement was obtained in these measurements. A second set of intercomparison studies among a number of groups with graded screen array systems was made in April 1989 with the objective of resolving spectral structure below 10 nm. Again, generally good agreement among the various groups was obtained although some differences were noted. It is thus concluded that such systems can be constructed and can be useful in making routine measurements of activity-weighted size distributions with reasonable confidence in the results obtained

  4. The Lushan earthquake and the giant panda: impacts and conservation.

    Science.gov (United States)

    Zhang, Zejun; Yuan, Shibin; Qi, Dunwu; Zhang, Mingchun

    2014-06-01

Earthquakes not only result in a great loss of human life and property, but also have profound effects on the Earth's biodiversity. The Lushan earthquake occurred on 20 Apr 2013, with a magnitude of 7.0 and an intensity of 9.0 degrees. Its epicenter was 17.0 km from the nearest giant panda distribution site recorded in the Third National Survey. Drawing on research on the Wenchuan earthquake (magnitude 8.0), which occurred approximately 5 years earlier, we briefly analyze the impacts of the Lushan earthquake on giant pandas and their habitat. An earthquake may interrupt ongoing behaviors of giant pandas and may also cause injury or death. In addition, an earthquake can damage conservation facilities for pandas and result in further habitat fragmentation and degradation. However, from a historical point of view, the impacts of human activities on giant pandas and their habitat may in fact far outweigh those of natural disasters such as earthquakes. Measures taken to promote habitat restoration and conservation-network reconstruction in earthquake-affected areas should be based on the requirements of giant pandas, not those of humans. © 2013 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.

  5. Statistical validation of earthquake related observations

    Science.gov (United States)

    Kossobokov, V. G.

    2011-12-01

The confirmed fractal nature of earthquakes and of their distribution in space and time implies that many traditional estimations of seismic hazard (from term-less to short-term ones) are usually based on erroneous assumptions of easily tractable or, conversely, delicately designed models. The widespread practice of deceptive modeling, considered a "reasonable proxy" of the natural seismic process, leads to seismic hazard assessments of unknown quality, whose errors propagate non-linearly into the resulting estimates of risk and, eventually, into unexpected societal losses of unacceptable level. Studies aimed at forecast/prediction of earthquakes must include validation in retrospective (at least) and, eventually, prospective tests. In the absence of such control a suggested "precursor/signal" remains a "candidate", whose link to the target seismic event is a model assumption. Predicting in advance is the only decisive test of forecasts/predictions; therefore, the score-card of any "established precursor/signal", represented by the empirical probabilities of alarms and failures-to-predict achieved in prospective testing, must prove statistical significance by rejecting the null hypothesis of random coincidental occurrence in advance of target earthquakes. We reiterate the so-called "Seismic Roulette" null hypothesis as the most adequate undisturbed random alternative accounting for the empirical spatial distribution of earthquakes: (i) consider a roulette wheel with as many sectors as the number of earthquake locations in a sample catalog representing the seismic locus, one sector per location; (ii) make your bet according to the prediction (i.e., determine which locations are inside the area of alarm, and put one chip in each of the corresponding sectors); (iii) Nature turns the wheel; (iv) accumulate statistics of wins and losses along with the number of chips spent. If a precursor in charge of prediction exposes an imperfection of Seismic Roulette then, having in mind
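Under the Seismic Roulette null hypothesis, each target event lands inside the alarm area with probability equal to the fraction of catalog sectors the alarm covers, so a score-card can be assessed with a one-sided binomial tail. A minimal sketch (the score-card numbers below are hypothetical, not from the abstract):

```python
from math import comb

def roulette_p_value(n_targets, n_hits, alarm_fraction):
    """One-sided binomial tail P(X >= n_hits) under the Seismic Roulette
    null hypothesis: each of n_targets events independently falls inside
    the alarm sectors with probability alarm_fraction."""
    p = alarm_fraction
    return sum(comb(n_targets, k) * p**k * (1 - p) ** (n_targets - k)
               for k in range(n_hits, n_targets + 1))

# Hypothetical score-card: 7 of 10 target earthquakes fell inside alarms
# that covered 30% of the seismic locus.
print(round(roulette_p_value(10, 7, 0.3), 4))
```

A p-value near 0.01 here would let the hypothetical precursor reject random coincidence at the ~1% level; the same calculation with a larger alarm fraction rapidly erodes the significance, which is exactly the trade-off the score-card is meant to expose.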

  6. Simulation and analysis of the soot particle size distribution in a turbulent nonpremixed flame

    KAUST Repository

    Lucchesi, Marco

    2017-02-05

A modeling framework based on Direct Simulation Monte Carlo (DSMC) is employed to simulate the evolution of the soot particle size distribution in turbulent sooting flames. The stochastic reactor describes the evolution of soot in fluid parcels following Lagrangian trajectories in a turbulent flow field. The trajectories are sampled from a Direct Numerical Simulation (DNS) of a n-heptane turbulent nonpremixed flame. The DSMC method is validated against experimentally measured size distributions in laminar premixed flames and found to reproduce quantitatively the experimental results, including the appearance of the second mode at large aggregate sizes and the presence of a trough at mobility diameters in the range 3–8 nm. The model is then applied to the simulation of soot formation and growth in simplified configurations featuring a constant concentration of soot precursors and the evolution of the size distribution in time is found to depend on the intensity of the nucleation rate. Higher nucleation rates lead to a higher peak in number density and to the size distribution attaining its second mode sooner. The ensemble-averaged PSDF in the turbulent flame is computed from individual samples of the PSDF from large sets of Lagrangian trajectories. This statistical measure is equivalent to time-averaged scanning mobility particle sizer (SMPS) measurements in turbulent flames. Although individual trajectories display strong bimodality as in laminar flames, the ensemble-average PSDF possesses only one mode and a long, broad tail, which implies significant polydispersity induced by turbulence. Our results agree very well with SMPS measurements available in the literature. Conditioning on key features of the trajectory, such as mixture fraction or radial locations does not reduce the scatter in the size distributions and the ensemble-averaged PSDF remains broad. The results highlight and explain the important role of turbulence in broadening the size distribution of

  7. a Collaborative Cyberinfrastructure for Earthquake Seismology

    Science.gov (United States)

    Bossu, R.; Roussel, F.; Mazet-Roux, G.; Lefebvre, S.; Steed, R.

    2013-12-01

One of the challenges in real-time seismology is the prediction of an earthquake's impact. This is particularly true for moderate earthquakes (around magnitude 6) located close to urbanised areas, where the slightest uncertainty in event location, depth or magnitude estimates, and/or misevaluation of propagation characteristics, site effects and building vulnerability, can dramatically change the impact scenario. The Euro-Med Seismological Centre (EMSC) has developed a cyberinfrastructure to collect observations from eyewitnesses in order to provide in-situ constraints on actual damage. This cyberinfrastructure benefits from the natural convergence of earthquake eyewitnesses on the EMSC website (www.emsc-csem.org), the second most visited global earthquake information website, within tens of seconds of the occurrence of a felt event. It includes classical crowdsourcing tools, such as online questionnaires available in 39 languages, and tools to collect geolocated pictures. It also comprises information derived from real-time analysis of the traffic on the EMSC website, a method named flashsourcing: in case of a felt earthquake, eyewitnesses reach the EMSC website within tens of seconds to find out the cause of the shaking they have just been through. By analysing their geographical origin through their IP addresses, we automatically detect felt earthquakes and in some cases map the damaged areas through the loss of Internet visitors. We recently implemented a Quake Catcher Network (QCN) server in collaboration with Stanford University and the USGS to collect ground-motion records from volunteers, and are also involved in a project to detect earthquakes from the ground-motion sensors of smartphones. Strategies have been developed for several social media (Facebook, Twitter...) not only to distribute earthquake information, but also to engage with citizens and optimise data collection. A smartphone application is currently under development. We will present an overview of this
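Flashsourcing, detecting a felt event from a sudden surge of website visitors, can be caricatured as a threshold test on per-minute hit counts. The window length, surge factor and absolute floor below are invented for illustration and are not EMSC's operational parameters:

```python
from collections import deque

def spike_detector(counts, window=30, factor=5.0, floor=10):
    """Flag minutes whose visit count exceeds `factor` times the trailing
    `window`-minute mean (and an absolute `floor`) -- a toy stand-in for
    detecting felt earthquakes from a surge of eyewitness traffic."""
    history = deque(maxlen=window)
    alerts = []
    for t, c in enumerate(counts):
        if history:
            baseline = sum(history) / len(history)
            if c > max(factor * baseline, floor):
                alerts.append(t)
        history.append(c)
    return alerts

# Quiet background traffic, then a surge starting at minute 6:
traffic = [4, 5, 3, 6, 4, 5, 90, 120, 60, 7, 5]
print(spike_detector(traffic))
```

In this toy trace the first two surge minutes are flagged; minute 8 is not, because the surge itself has already inflated the trailing baseline, one reason real systems also cross-check the geographic clustering of visitors' IP addresses.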

  8. Fracture and earthquake physics in a non extensive view

    Science.gov (United States)

    Vallianatos, F.

    2009-04-01

It is well known that the Gutenberg-Richter (G-R) power-law distribution has to be modified for large seismic moments because of energy conservation and geometrical reasons. Several models have been proposed, either in terms of a second power law with a larger b-value beyond a crossover magnitude, or based on a magnitude cut-off using an exponential taper. In the present work we point out that the non-extensivity viewpoint is applicable to seismic processes. In the frame of a non-extensive approach based on Tsallis entropy, we construct a generalized expression of the Gutenberg-Richter (GGR) law. The existence of a lower or/and upper bound to magnitude is discussed, and the conditions under which GGR leads to the classical GR law are analysed. For the lowest earthquake sizes (i.e., energy levels), the correlations between the different elements involved in the evolution of an earthquake are short-ranged, and GR can be deduced on the basis of the maximum entropy principle using BG statistics. As the size (i.e., energy) increases, long-range correlation becomes much more important, implying the necessity of using Tsallis entropy as an appropriate generalization of BG entropy. The power-law behaviour is derived as a special case, leading to b-values that are functions of the non-extensivity parameter q. Furthermore, a theoretical analysis of the similarities between stress-stimulated electric and acoustic emissions and earthquakes is discussed, not only in the frame of GGR but also taking into account a universality in the description of the interevent-time distribution. Its particular form can be well expressed in the frame of a non-extensive approach. This formulation is very different from the exponential distribution expected for simple random Poisson processes and indicates the existence of a nontrivial universal mechanism in the generation process. All the aforementioned similarities between stress-stimulated electrical and acoustic emissions and seismicity suggest a
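The abstract does not reproduce the GGR expression itself, but the mathematical object at the core of such Tsallis-based laws is the q-exponential, which interpolates between the Boltzmann-Gibbs exponential (q → 1) and a power-law tail (q > 1). A sketch under that assumption (energy scale and q values are illustrative):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1-q)x]_+^(1/(1-q));
    recovers the ordinary exponential in the limit q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# Survival function N(>E)/N = exp_q(-E/E0): pure exponential (BG) for q = 1,
# power-law tail ~ E^(-1/(q-1)) for q > 1, i.e. a GR-like size law.
E0 = 1.0
for q in (1.0, 1.2, 1.5):
    print(q, [round(q_exp(-E / E0, q), 4) for E in (1, 10, 100)])
```

For q = 1.5 the tail falls as E⁻², orders of magnitude heavier than the exponential, which is how a single parameter q can reproduce both the small-event and large-event regimes described above.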

  9. Earthquake Culture: A Significant Element in Earthquake Disaster Risk Assessment and Earthquake Disaster Risk Management

    OpenAIRE

    Ibrion, Mihaela

    2018-01-01

This book chapter brings to attention the dramatic impact of large earthquake disasters on local communities and society, and highlights the necessity of building and enhancing an earthquake culture. Iran was considered as a research case study, and fifteen large earthquake disasters in Iran over a period of more than a century were investigated and analyzed. It was found that the earthquake culture in Iran was and still is conditioned by many factors or parameters which are not integrated and...

  10. Habitat structure and body size distributions: Cross-ecosystem comparison for taxa with determinate and indeterminate growth

    Science.gov (United States)

    Nash, Kirsty L.; Allen, Craig R.; Barichievy, Chris; Nystrom, Magnus; Sundstrom, Shana M.; Graham, Nicholas A.J.

    2014-01-01

    Habitat structure across multiple spatial and temporal scales has been proposed as a key driver of body size distributions for associated communities. Thus, understanding the relationship between habitat and body size is fundamental to developing predictions regarding the influence of habitat change on animal communities. Much of the work assessing the relationship between habitat structure and body size distributions has focused on terrestrial taxa with determinate growth, and has primarily analysed discontinuities (gaps) in the distribution of species mean sizes (species size relationships or SSRs). The suitability of this approach for taxa with indeterminate growth has yet to be determined. We provide a cross-ecosystem comparison of bird (determinate growth) and fish (indeterminate growth) body mass distributions using four independent data sets. We evaluate three size distribution indices: SSRs, species size–density relationships (SSDRs) and individual size–density relationships (ISDRs), and two types of analysis: looking for either discontinuities or abundance patterns and multi-modality in the distributions. To assess the respective suitability of these three indices and two analytical approaches for understanding habitat–size relationships in different ecosystems, we compare their ability to differentiate bird or fish communities found within contrasting habitat conditions. All three indices of body size distribution are useful for examining the relationship between cross-scale patterns of habitat structure and size for species with determinate growth, such as birds. In contrast, for species with indeterminate growth such as fish, the relationship between habitat structure and body size may be masked when using mean summary metrics, and thus individual-level data (ISDRs) are more useful. Furthermore, ISDRs, which have traditionally been used to study aquatic systems, present a potentially useful common currency for comparing body size distributions

  11. Determination of size and shape distributions of metal and ceramic powders

    International Nuclear Information System (INIS)

    Jovanovic, DI.

    1961-01-01

To determine the size and shape distributions of metal and ceramic uranium oxide powders, the following grain-size analysis methods were developed and implemented: microscopic analysis and the sedimentation method. A gravimetric absorption device was constructed for determining the specific surfaces of the powders.

12. Seismic hazard in Hawaii: High rate of large earthquakes and probabilistic ground-motion maps

    Science.gov (United States)

    Klein, F.W.; Frankel, A.D.; Mueller, C.S.; Wesson, R.L.; Okubo, P.G.

    2001-01-01

The seismic hazard and earthquake occurrence rates in Hawaii are locally as high as those near the most hazardous faults elsewhere in the United States. We have generated maps of peak ground acceleration (PGA) and spectral acceleration (SA) (at 0.2, 0.3 and 1.0 sec, 5% critical damping) at 2% and 10% exceedance probabilities in 50 years. The highest hazard is on the south side of Hawaii Island, as indicated by the MI 7.0, MS 7.2, and MI 7.9 earthquakes, which have occurred there since 1868. Probabilistic values of horizontal PGA (2% in 50 years) on Hawaii's south coast exceed 1.75g. Because some large earthquake aftershock zones and the geometry of flank blocks slipping on subhorizontal decollement faults are known, we use a combination of spatially uniform sources in active flank blocks and smoothed seismicity in other areas to model seismicity. Rates of earthquakes are derived from magnitude distributions of the modern (1959-1997) catalog of the Hawaiian Volcano Observatory's seismic network supplemented by the historic (1868-1959) catalog. Modern magnitudes are ML measured on a Wood-Anderson seismograph or MS. Historic magnitudes may add ML measured on a Milne-Shaw or Bosch-Omori seismograph or MI derived from calibrated areas of MM intensities. Active flank areas, which by far account for the highest hazard, are characterized by distributions with b slopes of about 1.0 below M 5.0 and about 0.6 above M 5.0. The kinked distribution means that large earthquake rates would be grossly under-estimated by extrapolating small earthquake rates, and that longer catalogs are essential for estimating or verifying the rates of large earthquakes. Flank earthquakes thus follow a semicharacteristic model, which is a combination of background seismicity and an excess number of large earthquakes. 
Flank earthquakes are geometrically confined to rupture zones on the volcano flanks by barriers such as rift zones and the seaward edge of the volcano, which may be expressed by a magnitude
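The kinked distribution can be made concrete with a short sketch. Only the two b slopes (1.0 below M 5.0, 0.6 above) come from the abstract; the a-value and corner magnitude below are invented for illustration:

```python
def kinked_rate(M, a=4.0, Mc=5.0, b_low=1.0, b_high=0.6):
    """Annual cumulative rate N(>=M) for a kinked Gutenberg-Richter
    distribution, continuous at the corner magnitude Mc.
    a and Mc are illustrative; the b slopes follow the abstract."""
    if M <= Mc:
        return 10 ** (a - b_low * M)
    return 10 ** (a - b_low * Mc - b_high * (M - Mc))

# Extrapolating the small-earthquake slope (b = 1.0) all the way to M7
# underpredicts the rate of large flank earthquakes:
n_kinked = kinked_rate(7.0)
n_extrap = 10 ** (4.0 - 1.0 * 7.0)
print(round(n_kinked / n_extrap, 1))  # factor by which extrapolation is low
```

With these illustrative numbers the kinked model yields roughly six times more M ≥ 7 events per year than the straight-line extrapolation, the "gross under-estimation" the abstract warns about.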

  13. The origin of high frequency radiation in earthquakes and the geometry of faulting

    Science.gov (United States)

    Madariaga, R.

    2004-12-01

In a seminal paper of 1967, Keiiti Aki discovered the scaling law of earthquake spectra and showed that, among other things, the high-frequency decay was of the omega-squared type. This implies that high-frequency displacement amplitudes are proportional to a characteristic length of the fault, and radiated energy scales with the cube of the fault dimension, just like seismic moment. Later, in the seventies, it was found that a simple explanation for this frequency dependence of spectra was that high frequencies were generated by stopping phases, waves emitted by changes in speed of the rupture front as it propagates along the fault, but this did not explain the scaling of high-frequency waves with fault length. Earthquake energy balance is such that, ignoring attenuation, radiated energy is the change in strain energy minus the energy expended in overcoming friction. Until recently the latter was considered a material property that did not scale with fault size. Yet, in another classical paper of the late 70s, Aki and Das estimated that the energy release rate also scales with earthquake size, because earthquakes are often stopped by barriers or change rupture speed at them. This observation was independently confirmed in the late 90s by Ide and Takeo, and by Olsen et al., who found that energy release rates for the Kobe and Landers earthquakes were on the order of a MJ/m2, implying that Gc necessarily scales with earthquake size, because if it were a material property, small earthquakes would never occur. Using both simple analytical and numerical models developed by Adda-Bedia and by Aochi and Madariaga, we examine the consequences of these observations for the scaling of high-frequency waves with fault size. We demonstrate, using some classical results by Kostrov, Husseini and Freund, that high-frequency energy flow measures the energy release rate and is generated when ruptures change velocity (both direction and speed) at fault kinks or jogs. Our results explain why supershear ruptures are

  14. Why are earthquakes nudging the pole towards 140°E?

    Science.gov (United States)

    Spada, Giorgio

Earthquakes collectively have the tendency to displace the pole of rotation of the Earth towards a preferred direction (∼140°E). This trend, which is still unexplained on quantitative grounds, has been revealed by computations of earthquake-induced inertia variations on both secular and decadal time-scales. The purpose of this letter is to show that the above trend results from the combined effects of the geographical distribution of hypocenters and of the prevailing dip-slip nature of large earthquakes in this century. Our findings are based on static dislocation theory and on simple geometrical arguments.

  15. Preliminary Study on Earthquake Surface Rupture Extraction from Uav Images

    Science.gov (United States)

    Yuan, X.; Wang, X.; Ding, X.; Wu, X.; Dou, A.; Wang, S.

    2018-04-01

Because of their low cost, light weight and ability to photograph beneath cloud cover, UAVs have been widely used in seismic geomorphology research in recent years. Earthquake surface rupture is a typical seismotectonic landform that reflects the dynamic and kinematic characteristics of crustal movement. The quick identification of earthquake surface rupture is of great significance for understanding the mechanism of earthquake occurrence and the distribution and scale of disasters. Using an integrated differential UAV platform, series of images with accurate POS data were acquired around the former urban area (Qushan town) of Beichuan County, an area stricken severely by the 2008 Wenchuan Ms8.0 earthquake. Based on the multi-view 3D reconstruction technique, a high-resolution DSM and DOM were obtained from the differential UAV images. Through the shaded-relief map and aspect map derived from the DSM, the earthquake surface rupture was extracted and analyzed. The results show that the surface rupture can still be identified from UAV images even though considerable time has elapsed since the earthquake; its middle segment is characterized by vertical movement caused by compressional deformation across the fault planes.
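The aspect-map step can be sketched with central differences on a DEM grid. This is a generic slope/aspect computation, not the paper's processing chain; grid values and cell size are invented:

```python
import math

def slope_aspect(dem, cell=1.0):
    """Per-cell slope (deg) and downslope aspect (deg clockwise from north)
    from a DEM given as a list of rows (row 0 = north edge), using central
    differences; edge cells are skipped. A minimal stand-in for the
    DSM-derived aspect maps used to highlight rupture scarps."""
    rows, cols = len(dem), len(dem[0])
    out = {}
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2.0 * cell)  # east
            dzdy = (dem[i - 1][j] - dem[i + 1][j]) / (2.0 * cell)  # north
            slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
            # Downslope vector is minus the gradient; azimuth = atan2(E, N).
            aspect = (math.degrees(math.atan2(-dzdx, -dzdy)) + 360.0) % 360.0
            out[(i, j)] = (slope, aspect)
    return out

# A ramp rising to the east: every interior cell dips due west (aspect 270).
dem = [[0.0, 1.0, 2.0, 3.0] for _ in range(4)]
print(slope_aspect(dem)[(1, 1)])
```

A rupture scarp shows up in such a map as a narrow band of cells whose aspect flips relative to its surroundings, which is what makes the lineament easy to trace visually.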

  16. Location and Size Planning of Distributed Photovoltaic Generation in Distribution network System Based on K-means Clustering Analysis

    Science.gov (United States)

    Lu, Siqi; Wang, Xiaorong; Wu, Junyong

    2018-01-01

The paper presents a method, based on a data-driven K-means clustering analysis algorithm, to generate planning scenarios for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA), and solutions are ranked by the technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected according to different planning emphases after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China, and the results are discussed.
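The TOPSIS ranking step can be sketched as follows; the candidate plans, criteria and weights below are invented for illustration, not taken from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS: vector-normalize the decision matrix,
    apply weights, then score each row by its relative closeness to the
    ideal solution versus the anti-ideal one. benefit[j] is True when
    criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to ideal solution
        d_neg = math.dist(row, anti)    # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical PV plans scored on (profit, losses, voltage offset):
plans = [[120, 8, 0.02], [100, 5, 0.01], [140, 12, 0.04]]
scores = topsis(plans, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
best = max(range(len(scores)), key=scores.__getitem__)
print(best, [round(s, 3) for s in scores])
```

With these toy numbers the middle plan wins: its lower profit is outweighed by the smallest losses and voltage offset, which is the kind of trade-off the Pareto-front-plus-TOPSIS pipeline is meant to surface.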

17. Long-term earthquake forecasts based on the epidemic-type aftershock sequence (ETAS) model for short-term clustering

    Directory of Open Access Journals (Sweden)

    Jiancang Zhuang

    2012-07-01

Based on the ETAS (epidemic-type aftershock sequence) model, which describes the features of short-term clustering of earthquake occurrence, this paper presents theories and techniques for evaluating the probability distribution of the maximum magnitude in a given space-time window, where the Gutenberg-Richter law for the earthquake magnitude distribution cannot be applied directly. It is seen that the distribution of the maximum magnitude in a given space-time volume is determined in the long term by the background seismicity rate and the magnitude distribution of the largest events in each earthquake cluster. The techniques introduced were applied to the seismicity in the Japan region in the period from 1926 to 2009. It was found that the regions most likely to have big earthquakes are along the Tohoku (northeastern Japan) Arc and the Kuril Arc, both with much higher probabilities than the offshore Nankai and Tokai regions.
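A heavily simplified stand-in for the calculation described above: if cluster-maximum events above a reference magnitude arrive as a Poisson process with a background rate and follow a Gutenberg-Richter magnitude law, the maximum-magnitude distribution in a window follows in one line. This ignores the ETAS clustering corrections that are the paper's actual contribution; rate, b-value and m0 are illustrative:

```python
import math

def p_max_below(m, rate_per_yr, years, b=1.0, m0=4.0):
    """P(maximum magnitude <= m) in a space-time window when events above m0
    occur as a Poisson process with the given annual rate and magnitudes
    follow Gutenberg-Richter: P = exp(-rate * T * 10**(-b*(m - m0)))."""
    return math.exp(-rate_per_yr * years * 10 ** (-b * (m - m0)))

# Chance a 30-year window with two M>=4 background events/yr stays below M7:
print(round(p_max_below(7.0, rate_per_yr=2.0, years=30.0), 3))
```

The same expression evaluated on a grid of windows is what turns a background-rate map into a map of "regions most likely to have big earthquakes".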

  18. Living with earthquakes - development and usage of earthquake-resistant construction methods in European and Asian Antiquity

    Science.gov (United States)

    Kázmér, Miklós; Major, Balázs; Hariyadi, Agus; Pramumijoyo, Subagyo; Ditto Haryana, Yohanes

    2010-05-01

Earthquakes are among the most terrifying events of nature because of their unexpected occurrence, against which no spiritual means offer protection. The only way of preserving life and property is applying earthquake-resistant construction methods. Ancient Greek architects of public buildings applied steel clamps embedded in lead casing to hold together columns and masonry walls during frequent earthquakes in the Aegean region. Elastic steel provided strength, while plastic lead casing absorbed minor shifts of blocks without fracturing the rigid stone. The Romans invented concrete and built buildings of all sizes as single, inflexible units. The masonry surrounding and decorating the concrete core of a wall did not bear load. Concrete resisted minor shaking, yielding only to forces beyond its fracture limit. Roman building traditions survived the Dark Ages, and 12th-century Crusader castles erected in earthquake-prone Syria survive until today in reasonably good condition. Concrete and steel clamping persisted side by side in the Roman Empire. Concrete was used for construction that was cheap compared with masonry. Applying lead-encased steel increased costs and was avoided whenever possible. Columns of the various forums in Italian Pompeii mostly lack steel fittings despite being situated in a well-known earthquake-prone area. Whether the frequent recurrence of earthquakes in the Naples region was known to the inhabitants of Pompeii might be a matter of debate. Seemingly the shock of the AD 62 earthquake was not enough to prompt the application of well-known protective engineering methods throughout the reconstruction of the city before the AD 79 volcanic catastrophe. An independent engineering tradition developed on the island of Java (Indonesia). The mortar-less construction technique of the 8th-9th-century Hindu masonry shrines around Yogyakarta would allow scattering of blocks during earthquakes. To prevent dilapidation, an intricate mortise-and-tenon system was carved into adjacent faces of blocks. Only the

  19. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution

    Science.gov (United States)

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N.

    2004-02-01

We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a tripled Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.
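Tikhonov inversion with regularization can be shown in miniature. The kernel below is a deliberately ill-conditioned 2×2 toy system, not a lidar kernel; it illustrates why the regularization term is essential when the data (optical coefficients) carry noise:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(K, y, lam):
    """x = argmin ||Kx - y||^2 + lam*||x||^2 via the normal equations
    (K^T K + lam*I) x = K^T y."""
    m, n = len(K), len(K[0])
    KtK = [[sum(K[r][i] * K[r][j] for r in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Kty = [sum(K[r][i] * y[r] for r in range(m)) for i in range(n)]
    return solve(KtK, Kty)

# Nearly singular kernel; noise-free answer would be x = [1, 1].
K = [[1.0, 1.0], [1.0, 1.0001]]
y_noisy = [2.001, 1.9991]          # tiny measurement noise added
x_bad = tikhonov(K, y_noisy, 0.0)  # unregularized: wildly unstable
x_reg = tikhonov(K, y_noisy, 1e-3) # regularized: stays near [1, 1]
print([round(v, 2) for v in x_bad], [round(v, 3) for v in x_reg])
```

The choice of the regularization parameter (here 1e-3) is the crux of the method; in practice it is selected by criteria such as the discrepancy principle or minimum-discrepancy searches over a parameter grid.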

  20. Aseismic blocks and destructive earthquakes in the Aegean

    Science.gov (United States)

    Stiros, Stathis

    2017-04-01

Aseismic areas are identified not only in vast, geologically stable regions, but also within regions of active, intense, distributed deformation such as the Aegean. In the latter, "aseismic blocks" about 200 km wide were recognized in the 1990s on the basis of the absence of instrumentally derived earthquake foci, in contrast to surrounding areas. This pattern was supported by the available historical seismicity data, as well as by geologic evidence. Interestingly, GPS evidence indicates that such blocks are among the areas characterized by small deformation rates relative to surrounding areas of higher deformation. Still, the largest and most destructive earthquake of the 1990s, the 1995 M6.6 earthquake, occurred at the center of one of these "aseismic" zones in the northern part of Greece, which had been left unprotected against seismic hazard. This case was indeed a repeat of the tsunami-associated 1956 Amorgos Island M7.4 earthquake, the largest 20th-century event in the Aegean back-arc region: the 1956 earthquake occurred at the center of a geologically distinct region (the Cyclades Massif in the central Aegean), till then assumed aseismic. Interestingly, after 1956 the overall idea of aseismic regions remained valid, though a "promontory" of earthquake-prone areas intruding into the aseismic central Aegean was assumed. Exploitation of archaeological excavation evidence and careful, combined analysis of historical and archaeological data and other palaeoseismic, mostly coastal, data indicated that destructive and major earthquakes have left their traces in previously assumed aseismic blocks. In the latter, earthquakes typically occur with long recurrence intervals (>200-300 years), much longer than in adjacent active areas. Interestingly, areas assumed aseismic in antiquity are among the most active in the last centuries, while areas hit by major earthquakes in the past are usually classified as areas of low seismic risk in official maps. Some reasons

  1. UCERF3: A new earthquake forecast for California's complex fault system

    Science.gov (United States)

Field, Edward H.

    2015-01-01

    With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  2. A quick earthquake disaster loss assessment method supported by dasymetric data for emergency response in China

    Science.gov (United States)

    Xu, Jinghai; An, Jiwen; Nie, Gaozong

    2016-04-01

    Improving earthquake disaster loss estimation speed and accuracy is one of the key factors in effective earthquake response and rescue. The presentation of exposure data by applying a dasymetric map approach has good potential for addressing this issue. With the support of 30'' × 30'' areal exposure data (population and building data in China), this paper presents a new earthquake disaster loss estimation method for emergency response situations. This method has two phases: a pre-earthquake phase and a co-earthquake phase. In the pre-earthquake phase, we pre-calculate the earthquake loss related to different seismic intensities and store them in a 30'' × 30'' grid format, which has several stages: determining the earthquake loss calculation factor, gridding damage probability matrices, calculating building damage and calculating human losses. Then, in the co-earthquake phase, there are two stages of estimating loss: generating a theoretical isoseismal map to depict the spatial distribution of the seismic intensity field; then, using the seismic intensity field to extract statistics of losses from the pre-calculated estimation data. Thus, the final loss estimation results are obtained. The method is validated by four actual earthquakes that occurred in China. The method not only significantly improves the speed and accuracy of loss estimation but also provides the spatial distribution of the losses, which will be effective in aiding earthquake emergency response and rescue. Additionally, related pre-calculated earthquake loss estimation data in China could serve to provide disaster risk analysis before earthquakes occur. Currently, the pre-calculated loss estimation data and the two-phase estimation method are used by the China Earthquake Administration.
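The two-phase structure reduces, at its core, to a lookup: losses are pre-calculated per grid cell and intensity, and the co-earthquake phase overlays the isoseismal field on that grid and sums. A sketch with invented cell names and loss values:

```python
# Pre-earthquake phase: losses pre-calculated per 30" x 30" grid cell and
# seismic intensity (all numbers below are invented for illustration).
precalc = {
    "cell_A": {7: 120, 8: 540, 9: 2100},   # loss units per intensity degree
    "cell_B": {7: 60,  8: 300, 9: 1500},
    "cell_C": {7: 10,  8: 90,  9: 700},
}

def co_earthquake_loss(intensity_field):
    """Co-earthquake phase: overlay the theoretical isoseismal field
    (cell -> intensity) on the pre-calculated grid and total the losses.
    Cells outside the mapped intensities contribute nothing."""
    return sum(precalc[cell].get(i, 0) for cell, i in intensity_field.items())

# Theoretical isoseismal map generated from the event's location/magnitude:
field = {"cell_A": 9, "cell_B": 8, "cell_C": 7}
print(co_earthquake_loss(field))
```

Because all per-cell arithmetic happens before the event, the co-earthquake phase is a pure table overlay, which is what makes the estimate fast enough for emergency response while still yielding the spatial distribution of losses cell by cell.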

  3. Earthquakes and Earthquake Engineering. LC Science Tracer Bullet.

    Science.gov (United States)

    Buydos, John F., Comp.

    An earthquake is a shaking of the ground resulting from a disturbance in the earth's interior. Seismology is (1) the study of earthquakes; (2) the study of the origin, propagation, and energy of seismic phenomena; (3) the prediction of these phenomena; and (4) the investigation of the structure of the earth. Earthquake engineering or engineering seismology includes the…

  4. A Method for Estimation of Death Tolls in Disastrous Earthquake

    Science.gov (United States)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls are among the most important measures of the damage and losses caused by a disastrous earthquake. If we can precisely estimate the potential tolls and the distribution of fatalities across individual districts as soon as an earthquake occurs, we can not only make emergency programs and disaster management more effective but also supply critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps, and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, the types and usage patterns of buildings, the distribution of population, and socio-economic conditions, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is currently the greatest in the world. Meanwhile, complete seismic data are readily available from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake happens. It therefore becomes possible to estimate earthquake death tolls in Taiwan from this preliminary information. First, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to obtain a PGA Index for each seismic station, according to the mainshock data of the Chi-Chi earthquake. To map iso-seismic intensity contours across districts, and to cover districts that contain no seismic station, we interpolate the PGA Index from the geographical coordinates of the individual stations using the Kriging interpolation method and GIS software. The population density depends on
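
    The first estimation step above can be illustrated as follows. The PGA Index is the arithmetic mean of the three components; for brevity this sketch interpolates it with inverse distance weighting rather than the Kriging method the study actually uses, and all station coordinates and values are invented:

```python
import math

def pga_index(pga_ew, pga_ns, pga_ud):
    """Arithmetic mean of the three PGA components at one station."""
    return (pga_ew + pga_ns + pga_ud) / 3.0

def idw(stations, point, power=2.0):
    """Interpolate the PGA index at `point` from surrounding stations.
    stations: list of ((x, y), value). The paper uses Kriging; inverse
    distance weighting is used here only as a simpler stand-in."""
    num = den = 0.0
    for (x, y), v in stations:
        d = math.hypot(point[0] - x, point[1] - y)
        if d == 0.0:
            return v            # point coincides with a station
        w = d ** -power
        num += w * v
        den += w
    return num / den

stations = [((0.0, 0.0), pga_index(120.0, 100.0, 80.0)),   # -> 100.0 gal
            ((1.0, 0.0), pga_index(240.0, 200.0, 160.0))]  # -> 200.0 gal
mid = idw(stations, (0.5, 0.0))   # equidistant, so the simple average
```
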

  5. Seasonal variations in size distribution, water-soluble ions, and carbon content of size-segregated aerosols over New Delhi.

    Science.gov (United States)

    Kumar, Pawan; Kumar, Sushil; Yadav, Sudesh

    2018-02-01

    Size distribution, water-soluble inorganic ions (WSII), and organic carbon (OC) and elemental carbon (EC) in size-segregated aerosols were investigated during a year-long sampling campaign in 2010 over New Delhi. Among the size fractions of PM10, PM0.95 was the dominant fraction (45%), followed by PM3-7.2 (20%), PM7.2-10 (15%), PM0.95-1.5 (10%), and PM1.5-3 (10%). All size fractions exceeded the ambient air quality standards of India for PM2.5. Annual average mass size distributions of ions were specific to size and ion(s): Ca2+, Mg2+, K+, NO3-, and Cl- followed a bimodal distribution, while SO42- and NH4+ showed a single mode in PM0.95. The concentrations of secondary WSII (NO3-, SO42-, and NH4+) increased in winter due to the closed and moist atmosphere, whereas open atmospheric conditions in summer led to the dispersal of pollutants. NH4+ and Ca2+ were the dominant neutralizing ions, but in different size fractions. Summer-time dust transport from the upwind region by S-SW winds resulted in significantly high concentrations of PM0.95, PM3-7.2, and PM7.2-10. This indicated the influence of dust generation in the Thar Desert and that its downwind transport is size selective. The mixing of different sources (geogenic, coal combustion, biomass burning, plastic burning, incinerators, and vehicular emissions) of soluble ions in different size fractions was identified by principal component analysis. Total carbon (TC = EC + OC) constituted 8-31% of the total PM0.95 mass, and OC dominated over EC. Within EC, char (EC1) dominated over soot (EC2 + EC3). The high SOC contribution (82%) to OC and an OC/EC ratio of 2.7 suggested a possible role of mineral dust and high photochemical activity in SOC production. Mass concentrations of aerosols and WSII, and their contributions to each size fraction of PM10, are governed by the nature of the sources, the emission strength of the source(s), and seasonality in meteorological parameters.

  6. Dual megathrust slip behaviors of the 2014 Iquique earthquake sequence

    Science.gov (United States)

    Meng, Lingsen; Huang, Hui; Bürgmann, Roland; Ampuero, Jean Paul; Strader, Anne

    2015-02-01

    The transition between seismic rupture and aseismic creep is of central interest to better understand the mechanics of subduction processes. A Mw 8.2 earthquake occurred on April 1st, 2014 in the Iquique seismic gap of northern Chile. This event was preceded by a long foreshock sequence including a 2-week-long migration of seismicity initiated by a Mw 6.7 earthquake. Repeating earthquakes were found among the foreshock sequence that migrated towards the mainshock hypocenter, suggesting a large-scale slow-slip event on the megathrust preceding the mainshock. The variations of the recurrence times of the repeating earthquakes highlight the diverse seismic and aseismic slip behaviors on different megathrust segments. The repeaters that were active only before the mainshock recurred more often and were distributed in areas of substantial coseismic slip, while repeaters that occurred both before and after the mainshock were in the area complementary to the mainshock rupture. The spatiotemporal distribution of the repeating earthquakes illustrates the essential role of propagating aseismic slip leading up to the mainshock and illuminates the distribution of postseismic afterslip. Various finite fault models indicate that the largest coseismic slip generally occurred down-dip from the foreshock activity and the mainshock hypocenter. Source imaging by teleseismic back-projection indicates an initial down-dip propagation stage followed by a rupture-expansion stage. In the first stage, the finite fault models show an emergent onset of moment rate at low frequency (< 0.5 Hz). This indicates frequency-dependent manifestations of seismic radiation in the low-stress foreshock region. In the second stage, the rupture expands in rich bursts along the rim of a semi-elliptical region with episodes of re-ruptures, suggesting delayed failure of asperities.
The high-frequency rupture remains within an area of local high trench-parallel gravity anomaly (TPGA), suggesting the presence of

  7. Resilience of aging populations after devastating earthquake event and its determinants - A case study of the Chi-Chi earthquake in Taiwan

    Science.gov (United States)

    Hung, Chih-Hsuan; Hung, Hung-Chih

    2016-04-01

    1.Background Major portions of urban areas in Asia are highly exposed and vulnerable to devastating earthquakes. Many studies identify ways to reduce earthquake risk by concentrating on building resilience for particularly vulnerable populations. By 2020, as the United Nations warns, many Asian countries, such as Taiwan, will have become 'super-aged societies'. However, local authorities rarely use a resilience approach to frame earthquake disaster risk management and land use strategies. Empirically based research on the resilience of aging populations has also received relatively little attention. Thus, a challenge that has arisen for decision-makers is how to enhance the resilience of aging populations within the context of risk reduction. This study aims to improve the understanding of the resilience of aging populations and its changes over time in the aftermath of a destructive earthquake at the local level. A novel methodology is proposed to assess the resilience of aging populations, to characterize changes in their spatial distribution patterns, and to examine their determinants. 2.Methods and data An indicator-based assessment framework is constructed with the goal of identifying composite indicators (covering the periods before, during, and after a disaster) that could serve as proxies for attributes of the resilience of aging populations. Using the recovery process from the Chi-Chi earthquake that struck central Taiwan in 1999 as a case study, we applied a method combining a geographical information system (GIS)-based spatial statistics technique with cluster analysis to test the extent to which the resilience of aging populations is spatially autocorrelated throughout central Taiwan, and to explain why clusters of resilient areas occur in specific locations. Furthermore, to scrutinize the factors affecting resilience, we develop an aging population resilience model (APRM) based on existing resilience theory. Using the APRM, we applied a multivariate
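
    The abstract does not name the spatial statistic used; global Moran's I is a common choice for testing whether district-level scores are spatially autocorrelated, and a minimal version (with invented resilience scores and a binary neighbour matrix) looks like this:

```python
def morans_i(values, weights):
    """Global Moran's I for a score per district.
    values: list of scores; weights[i][j] is the spatial weight between
    districts i and j (here 1 for neighbours, 0 otherwise).
    I > 0 means similar values cluster in space."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(weights[i][j] for i in range(n) for j in range(n))
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four districts on a line, neighbours share an edge; the low scores sit
# next to each other and so do the high ones, so I comes out positive.
vals = [1.0, 1.2, 3.0, 3.1]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
i_stat = morans_i(vals, w)
```
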

  8. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com [Badan Meteorologi Klimatologi Geofisika, Jl Angkasa I No. 2 Jakarta (Indonesia); Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan [Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2014-03-24

    This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M{sub o}), the moment magnitude (M{sub W}), the rupture duration (T{sub o}), and the focal mechanism. These parameters distinguish tsunamigenic earthquakes from tsunami earthquakes. We process teleseismic waveforms, starting from the initial P-wave phase, with a 0.001 Hz to 5 Hz bandpass filter. The data set comprises 84 broadband seismometers at teleseismic distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with M{sub W}=7.8 and the 17 July 2006 Pangandaran earthquake with M{sub W}=7.7 meet the criteria for tsunami earthquakes, with ratio Θ=−6.1, long rupture duration T{sub o}>100 s, and high tsunami height H>7 m. The 2 September 2009 Tasikmalaya earthquake with M{sub W}=7.2, Θ=−5.1, and T{sub o}=27 s is characterized as a small tsunamigenic earthquake.
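
    The ratio Θ can be computed directly as log10(E/M0). A small sketch (the energy and moment values are chosen only to reproduce the quoted Θ values, and the −5.5 cut-off is a commonly cited threshold for slow, tsunami-earthquake-like ruptures, not one stated in the abstract):

```python
import math

def theta(radiated_energy_joules, seismic_moment_newton_m):
    """Energy-to-moment ratio parameter: Theta = log10(E / M0)."""
    return math.log10(radiated_energy_joules / seismic_moment_newton_m)

def classify(th, threshold=-5.5):
    """Slow 'tsunami earthquakes' radiate unusually little energy for
    their moment, giving a low Theta; the -5.5 cut-off is an assumed,
    commonly cited value."""
    return "tsunami earthquake" if th <= threshold else "ordinary tsunamigenic"

# Values consistent with the abstract: Theta = -6.1 and -5.1.
th_banyuwangi = theta(10 ** 14.0, 10 ** 20.1)
th_tasikmalaya = theta(10 ** 14.9, 10 ** 20.0)
```
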

  9. Mechanism of High Frequency Shallow Earthquake Source in Mount Soputan, North Sulawesi

    Directory of Open Access Journals (Sweden)

    Yasa Suparman

    2014-06-01

    DOI: 10.17014/ijog.v6i3.122. Moment tensor analysis was conducted to understand the source mechanism of earthquakes at Soputan Volcano during the October - November 2010 period. The records show shallow earthquakes with frequencies of about 5 - 9 Hz. The polarity distribution of P-wave first onsets indicates that the recorded events are predominantly earthquakes whose P-wave first motions have the same direction at almost all stations, and earthquakes with upward first motions. In this article, the source mechanism is described by the moment tensor, which is approached by inversion of the P-wave first-motion amplitudes at several seismic stations. The moment tensor decompositions are dominated by earthquakes with large percentages of ISO and CLVD components. The focal mechanisms show that the recorded earthquakes share the same northeast-southwest strike, with dips of about 40° - 60°. The sources of the high-frequency shallow earthquakes are tensile-shear cracks or a combination of tensile cracking and shear faulting.

  10. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    International Nuclear Information System (INIS)

    Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan

    2014-01-01

    This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M o ), the moment magnitude (M W ), the rupture duration (T o ), and the focal mechanism. These parameters distinguish tsunamigenic earthquakes from tsunami earthquakes. We process teleseismic waveforms, starting from the initial P-wave phase, with a 0.001 Hz to 5 Hz bandpass filter. The data set comprises 84 broadband seismometers at teleseismic distances of 30° to 90°. The 2 June 1994 Banyuwangi earthquake with M W =7.8 and the 17 July 2006 Pangandaran earthquake with M W =7.7 meet the criteria for tsunami earthquakes, with ratio Θ=−6.1, long rupture duration T o >100 s, and high tsunami height H>7 m. The 2 September 2009 Tasikmalaya earthquake with M W =7.2, Θ=−5.1, and T o =27 s is characterized as a small tsunamigenic earthquake.

  11. Nucleation speed limit on remote fluid-induced earthquakes

    Science.gov (United States)

    Parsons, Tom; Malagnini, Luca; Akinci, Aybige

    2017-01-01

    Earthquakes triggered by other remote seismic events are explained as a response to long-traveling seismic waves that temporarily stress the crust. However, delays of hours or days after seismic waves pass through are reported by several studies, which are difficult to reconcile with the transient stresses imparted by seismic waves. We show that these delays are proportional to magnitude and that nucleation times are best fit to a fluid diffusion process if the governing rupture process involves unlocking a magnitude-dependent critical nucleation zone. It is well established that distant earthquakes can strongly affect the pressure and distribution of crustal pore fluids. Earth’s crust contains hydraulically isolated, pressurized compartments in which fluids are contained within low-permeability walls. We know that strong shaking induced by seismic waves from large earthquakes can change the permeability of rocks. Thus, the boundary of a pressurized compartment may see its permeability rise. Previously confined, overpressurized pore fluids may then diffuse away, infiltrate faults, decrease their strength, and induce earthquakes. Magnitude-dependent delays and critical nucleation zone conclusions can also be applied to human-induced earthquakes. PMID:28845448

  12. Stress Drops of Earthquakes on the Subducting Pacific Plate in the South-East off Hokkaido, Japan

    Science.gov (United States)

    Saito, Y.; Yamada, T.

    2013-12-01

    Large earthquakes have occurred repeatedly south-east of Hokkaido, Japan, where the Pacific Plate subducts beneath the Okhotsk Plate in a north-west direction. For example, the 2003 Tokachi-oki earthquake (Mw 8.3, as determined by USGS) took place in the region on September 26, 2003. Yamanaka and Kikuchi (2003) analyzed the slip distribution of the earthquake and concluded that the 2003 event ruptured the deeper half of the fault plane of the 1952 Tokachi-oki earthquake. Miyazaki et al. (2004) reported notable afterslip in areas adjacent to the coseismic rupture zone of the 2003 earthquake, which suggests significant heterogeneity of strength, stress, and frictional properties on the surface of the Pacific Plate in the region. In addition, some previous studies suggest that regions of large slip in large earthquakes permanently have a large difference between the strength and the dynamic frictional stress level, and that it would therefore be possible to predict the spatial pattern of slip in the next large earthquake by analyzing the stress drops of small earthquakes (e.g., Allmann and Shearer, 2007; Yamada et al., 2010). We estimated the stress drops of 150 earthquakes (4.2 ≤ M ≤ 5.0) using S-coda waves (the waveforms from 4.00 to 9.11 seconds after the S-wave arrivals) from Hi-net data. The 150 earthquakes occurred from June 2002 to December 2010 south-east of Hokkaido, Japan, from 40.5N to 43.5N and from 141.0E to 146.5E. First, for each of the 150 earthquakes, we selected the waveforms of the closest earthquake with a magnitude between 3.0 and 3.2 as an empirical Green's function (EGF). We then calculated the source spectral ratios of the 150 pairs of target earthquakes and EGFs by deconvolving the individual S-coda waves. We finally estimated the corner frequencies of the earthquakes from the spectral ratios by assuming the omega-squared model of Boatwright (1978) and calculated stress drops of the earthquakes by
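
    Once a corner frequency has been estimated from a spectral ratio, a stress drop follows from an omega-squared source model. A minimal sketch using the Brune relations (the shear-wave speed, the constant k, and the example numbers are assumptions; the study itself uses Boatwright's model, whose constants differ):

```python
def brune_stress_drop(moment_nm, corner_freq_hz, beta_m_s=4000.0, k=0.37):
    """Stress drop from an omega-squared (Brune-type) source model:
    source radius r = k * beta / fc, stress drop = 7 * M0 / (16 * r**3).
    beta (shear-wave speed) and k are assumed values."""
    r = k * beta_m_s / corner_freq_hz           # source radius in metres
    return 7.0 * moment_nm / (16.0 * r ** 3)    # stress drop in Pa

# Roughly M 4.5: M0 ~ 7e15 N m with an assumed corner frequency of 2 Hz.
ds = brune_stress_drop(7.0e15, 2.0)   # a few MPa, a typical order of magnitude
```
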

  13. Finding the magnetic size distribution of magnetic nanoparticles from magnetization measurements via the iterative Kaczmarz algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Daniel, E-mail: frank.wiekhorst@ptb.de; Eberbeck, Dietmar; Steinhoff, Uwe; Wiekhorst, Frank

    2017-06-01

    The characterization of the size distribution of magnetic nanoparticles is an important step in evaluating their suitability for many different applications such as magnetic hyperthermia, drug targeting, or Magnetic Particle Imaging. We present a new method based on the iterative Kaczmarz algorithm that enables the reconstruction of the size distribution from magnetization measurements without a priori knowledge of the distribution form. We show in simulations that the method is capable of very exact reconstructions of a given size distribution and, in doing so, is highly robust to noise contamination. Moreover, we applied the method to the well-characterized FeraSpin™ series and obtained results that were in accordance with the literature and with boundary conditions based on their synthesis via separation of the original suspension FeraSpin R. We therefore conclude that this method is a powerful and intuitive tool for reconstructing particle size distributions from magnetization measurements. - Highlights: • A new method for fitting the size distribution of magnetic nanoparticles is proposed. • The employed Kaczmarz algorithm needs no a priori input or eigenvalue regularization. • The method is highly robust to noise contamination. • Size distributions are reconstructed from simulated and measured magnetization curves.
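
    The row-projection iteration at the heart of the Kaczmarz algorithm is compact enough to sketch. Here it recovers a two-bin "size distribution" from synthetic linear measurements; the model matrix is invented, whereas a real application would use the Langevin-function forward model relating size bins to magnetization:

```python
def kaczmarz(A, b, x0, sweeps=200):
    """Iterative Kaczmarz method for A x = b: cycle through the rows a_i
    of A and orthogonally project the current estimate onto the
    hyperplane a_i . x = b_i."""
    x = list(x0)
    for _ in range(sweeps):
        for a, bi in zip(A, b):
            dot = sum(aj * xj for aj, xj in zip(a, x))
            norm2 = sum(aj * aj for aj in a)
            lam = (bi - dot) / norm2
            x = [xj + lam * aj for xj, aj in zip(x, a)]
    return x

# Toy inversion: recover x_true from consistent data b = A x_true.
A = [[1.0, 2.0], [3.0, 1.0]]
x_true = [0.5, 1.5]
b = [sum(aj * xj for aj, xj in zip(row, x_true)) for row in A]
x_rec = kaczmarz(A, b, [0.0, 0.0])
```

    For a consistent system the iteration converges to a solution from any starting point, which is why no a priori distribution form is needed.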

  14. Theoretical size distribution of fossil taxa: analysis of a null model

    Directory of Open Access Journals (Sweden)

    Hughes Barry D

    2007-03-01

    Background: This article deals with the theoretical size distribution (number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model: New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion: The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. The distribution of the number of genera is also considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
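
    A discrete-time caricature of the null model can be simulated directly (the per-step probabilities, step count, and sample size below are arbitrary illustrative choices, not the paper's parameters, and the paper works in continuous time):

```python
import random

def genus_size(p_speciation, p_extinction, steps, rng):
    """One genus under a discrete-time version of the null model: each
    species independently speciates or goes extinct with fixed per-step
    probabilities; the genus dies when its size reaches zero."""
    size = 1
    for _ in range(steps):
        births = sum(1 for _ in range(size) if rng.random() < p_speciation)
        deaths = sum(1 for _ in range(size) if rng.random() < p_extinction)
        size = size + births - deaths
        if size <= 0:
            return 0
    return size

rng = random.Random(42)
sizes = [genus_size(0.10, 0.05, 50, rng) for _ in range(2000)]
mean_size = sum(sizes) / len(sizes)   # expectation grows like 1.05**50
```

    With speciation outpacing background extinction, surviving genera grow geometrically on average while a substantial fraction of genera (including would-be monospecific ones) go extinct early, which is the branching-process behaviour behind the size distributions derived in the paper.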

  15. Earthquake early warning system using real-time signal processing

    Energy Technology Data Exchange (ETDEWEB)

    Leach, R.R. Jr.; Dowla, F.U.

    1996-02-01

    An earthquake warning system has been developed to provide a time-series profile from which vital parameters, such as the time until strong shaking begins, the intensity of the shaking, and the duration of the shaking, can be derived. Interaction of different types of ground motion and changes in the elastic properties of geological media throughout the propagation path result in a highly nonlinear function. We use neural networks to model these nonlinearities and develop learning techniques for the analysis of temporal precursors in the emerging earthquake seismic signal. The warning system is designed to analyze the first arrival from the three components of an earthquake signal and instantaneously provide a profile of impending ground motion, in as little as 0.3 s after first ground motion is felt at the sensors. For each new data sample, at a rate of 25 samples per second, the complete profile of the earthquake is updated. The profile consists of a magnitude-related estimate as well as an estimate of the envelope of the complete earthquake signal. The envelope provides estimates of damage parameters, such as the time until peak ground acceleration (PGA) and the duration. The neural-network-based system is trained using seismogram data from more than 400 earthquakes recorded in southern California. The system has been implemented in hardware using silicon accelerometers and a standard microprocessor. The proposed warning units can be used for site-specific applications, in distributed networks, or to enhance existing distributed networks. By producing accurate and informative warnings, the system has the potential to significantly minimize the hazards of catastrophic ground motion. Detailed system design and performance issues, including error measurement in a simple warning scenario, are discussed.
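
    The per-sample profile update described above can be caricatured as a running envelope tracker (this is only a stand-in for the trained neural network; the decay constant and synthetic trace are invented):

```python
import math

def update_envelope(env, sample, decay=0.97):
    """One step of a simple running envelope: each new ground-motion
    sample (25 per second in the system described) either lifts the
    envelope or lets it decay geometrically."""
    return max(abs(sample), env * decay)

# Synthetic trace: a weak P-wave onset followed by stronger shaking.
trace = [0.1 * math.sin(0.5 * i) for i in range(25)] + \
        [1.0 * math.sin(0.7 * i) for i in range(50)]
env = 0.0
profile = []
for s in trace:                 # updated once per incoming sample
    env = update_envelope(env, s)
    profile.append(env)
peak = max(profile)             # proxy for the PGA-related estimate
```
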

  16. Particle size distribution of main-channel-bed sediments along the upper Mississippi River, USA

    Science.gov (United States)

    Remo, Jonathan; Heine, Ruben A.; Ickes, Brian

    2016-01-01

    In this study, we compared pre-lock-and-dam (ca. 1925) with a modern longitudinal survey of main-channel-bed sediments along a 740-km segment of the upper Mississippi River (UMR) between Davenport, IA, and Cairo, IL. This comparison was undertaken to gain a better understanding of how bed sediments are distributed longitudinally and to assess change since the completion of the UMR lock and dam navigation system and Missouri River dams (i.e., mid-twentieth century). The comparison of the historic and modern longitudinal bed sediment surveys showed similar bed sediment sizes and distributions along the study segment with the majority (> 90%) of bed sediment samples having a median diameter (D50) of fine to coarse sand. The fine tail (≤ D10) of the sediment size distributions was very fine to medium sand, and the coarse tail (≥ D90) of sediment-size distribution was coarse sand to gravel. Coarsest sediments in both surveys were found within or immediately downstream of bedrock-floored reaches. Statistical analysis revealed that the particle-size distributions between the survey samples were statistically identical, suggesting no overall difference in main-channel-bed sediment-size distribution between 1925 and present. This was a surprising result given the magnitude of river engineering undertaken along the study segment over the past ~ 90 years. The absence of substantial differences in main-channel-bed-sediment size suggests that flow competencies within the highly engineered navigation channel today are similar to conditions within the less-engineered historic channel.
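
    The percentile diameters quoted above (D10, D50, D90) can be computed from a grain-size sample by linear interpolation between sorted measurements; a minimal sketch with an invented sample in millimetres:

```python
def percentile_diameter(diameters, p):
    """D_p: the grain diameter below which p percent of the sampled
    grains fall (linear interpolation between sorted sample points)."""
    d = sorted(diameters)
    k = (len(d) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(d) - 1)
    return d[lo] + (d[hi] - d[lo]) * (k - lo)

# Hypothetical diameters in mm; 0.25-0.5 mm is medium sand,
# 0.5-1 mm coarse sand on the Wentworth scale.
sample = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6, 0.8, 1.2, 2.0]
d50 = percentile_diameter(sample, 50)   # median diameter
```
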

  17. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    Science.gov (United States)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with cluster-size-dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution, depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and the derivation of the distributions is discussed.
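
    The transient growth can be illustrated with a direct simulation (the drift, the proportional form chosen for the size-dependent noise amplitude, and all parameter values are illustrative assumptions, not the letter's):

```python
import random

def grow(x0, drift, noise_coeff, steps, rng):
    """One realization of cluster growth with additive noise whose
    amplitude depends on the current size (here taken proportional to x
    as an illustrative choice; the letter treats a general dependence)."""
    x = x0
    for _ in range(steps):
        x += drift + noise_coeff * x * rng.gauss(0.0, 1.0)
        x = max(x, 1e-9)   # cluster size stays positive
    return x

rng = random.Random(0)
final = [grow(1.0, 0.05, 0.1, 100, rng) for _ in range(1000)]
mean_size = sum(final) / len(final)   # drift alone would give 1 + 5 = 6
```
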

  18. Estimation of particle size distribution of nanoparticles from electrical ...

    Indian Academy of Sciences (India)

    ... blockade (CB) phenomena of electrical conduction through a tiny nanoparticle. Considering the ZnO nanocomposites to be spherical, the Coulomb-blockade model of a quantum dot is applied here. The particle size distribution is estimated from that model and compared with the results obtained from AFM and XRD analyses.

  19. A new way of telling earthquake stories: MOBEE - the MOBile Earthquake Exhibition

    Science.gov (United States)

    Tataru, Dragos; Toma-Danila, Dragos; Nastase, Eduard

    2016-04-01

    developing particular skills by getting in contact with exhibition elements and researchers. In addition, what makes this exhibition and education tool different from other similar initiatives is its mobile and customizable character. Whether it is hosted for a period in earth science museums, providing them with the tools and resources to turn their audiences into active advocates, or used at public events (like Earth Day, a science kiosk, or school events), MOBEE can be customized in size, presentation, and composition. Thus each experience will be unique, perfectly adapted to the event, telling real and virtual visitors a story about the Earth, earthquakes, and their effects.

  20. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. Graphs show the volume distribution versus the number distribution for naturally occurring dust, jet-mill-ground dust, and ball-mill-ground dust.